Compare commits

368 Commits

Author SHA1 Message Date
Nikita Malinin
a090abbc92 Update remove_converts pass with shape inference (#10474) 2022-02-17 18:17:07 +03:00
Yegor Kruglov
6e5eb87340 Add note to YOLO-v3 conversion instructions (#10428)
* added note to yolo v3 conversion instructions

* fix typo

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md

style fix

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-02-17 18:00:38 +03:00
Ivan Tikhonov
ade4c6c7f9 OpExtension: fix framework attributes handling (#10445)
* Fix attribute handling in OpExtension, add unit tests

* add missed file

* fix warning

* fix warning

* rename convert_from_py_object method to py_object_to_any, fix PEP8

* fix PEP8

* delete redundant include dir, fix includes
2022-02-17 17:42:12 +03:00
Anton Pankratov
61f657795c Streams property with special values (#10411)
* Streams property with special values

* Fixed clang
2022-02-17 16:39:06 +03:00
Fedor Zharinov
198f44fdc7 Fix for missing throughput in case of Multi device (#10407)
* Fix for missing throughput in case of Multi device

* stylefix
2022-02-17 16:32:19 +03:00
Ilya Lavrenov
306b7611d9 repair TF FE tests after build (#10432)
* repair TF FE tests after build

* Small improvements

* Fixed static build
2022-02-17 16:28:24 +03:00
Maxim Gordeev
3144c5fab8 Added processing of layout for speech sample (#10254)
* Added processing of layout for speech sample

* fixed notes

* some improvements

* Code style format

* changed NCC value for NullStatement

* improved batch processing

* added loading batch for imported model

* fixed notes

* fixed notes

* added layout parameter to azure tests
2022-02-17 16:11:57 +03:00
Irina Efode
ccd7104108 [IE TESTS][CONFORMANCE] Add support of dynamic shapes in SubgraphDumper (#10380)
* Initial commit. Need to remove debug code

* Remove extra flags. Fix comparison in the matchers

* Fix small issue with the default args

* Update eltwise.hpp

* Update ov_subgraph.cpp
2022-02-17 15:52:37 +03:00
Nikolay Tyukaev
fc1157cf68 add api folder if enable python (#10477) 2022-02-17 15:24:29 +03:00
Egor Shulman
8ae4bc95fd [CPU] Coverity fixes (#10392) 2022-02-17 15:11:18 +03:00
Anton Pankratov
0882f863d6 Any compilation time optimization (#10335)
* Optimized any compilation time

* Fixed Any compilation time

* any::addressof

* reverted

* Fixed read write

* format fix

* Fixed build

* format fix

* Moved any tests back

* removed inline

* fix format

* used static inline

* format fix

* removed inline static

* fixed merge conflicts
2022-02-17 14:55:37 +03:00
Anton Pankratov
7ce9801ec3 Added mkldnn ov properties test for compile_model (#10442)
* Added mkldnn ov properties test

* fixed macOS build
2022-02-17 13:54:02 +03:00
Anton Pankratov
d1378d94b8 Fixed default inference precision in benchmark app (#10443) 2022-02-17 13:53:50 +03:00
Vladislav Golubev
ff4e97ab09 [LPT] Security fixes (#10465) 2022-02-17 13:47:27 +03:00
Anton Chetverikov
e444715c8d [MO] Restore inputs order in IR Reader (#10403)
* Restore inputs order in IR Reader

* Add saving of outputs order
2022-02-17 13:07:34 +03:00
Tomasz Dołbniak
83a8ac800c ONNX model validator enhancements (#10456) 2022-02-17 11:01:47 +01:00
Anton Voronov
61f915b4f6 [CPU] changed checks with_cpu_x86...() to mayiuse() (#9911) 2022-02-17 12:56:55 +03:00
Pavel Esir
43784e2cec fix convert_nms_gather_path_to_unsigned: added opset8::Slice into patter_list (#10439) 2022-02-17 12:47:25 +03:00
Aleksandr Korolev
8abb949af9 [VPU] Coverity fixes (#10396)
Tickets:
-79244
-78866
2022-02-17 12:29:28 +03:00
Aleksandr Korolev
5ace7bb96f [MYX] Added missing supported properties in GetMetric method (#10440) 2022-02-17 12:23:41 +03:00
Anton Pankratov
a7b28953e2 Added Import export device capability into hetero plugin (#10455) 2022-02-17 12:15:45 +03:00
hyunback kim
8148921fa7 [GPU] Fix deconv b32 onednn regression in onednn (#10462)
After enabling deconv b32 onednn, colorization-siggraph f16 b32 has a regression;
fix it by adding a check for sum post-ops in the deconv onednn case.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-02-17 18:09:51 +09:00
Irina Efode
68f523010e [IE TESTS][CONFORMANCE] Support dynamic shapes in Operation Conformance (#10400)
* remove namespace unity

* [IE TESTS][IE CONFORMANCE] Support dynamic shapes in Operation Conformance runner

* Update CMakeLists.txt

* Fix dim generation
2022-02-17 11:27:45 +03:00
hyunback kim
ed323afc93 [GPU] Remove default bfyx quantize in get_preferred_format (#9654)
* [GPU] Remove default bfyx quantize in get_preferred_format

The default bfyx causes a redundant reorder in fsv-format networks.
Also remove the onednn concat limitation that a dependency input should be
an onednn impl.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-02-17 17:25:55 +09:00
Taylor Yeonbok Lee
d35335193a [GPU] Adjust build batch size to 9 from 10 due to the compiler limitation w.r.t the entire module size (#10450) 2022-02-17 11:01:31 +03:00
Anastasia Kuporosova
861d43e06d [Python API] Fix benchmark hanging (#10457) 2022-02-17 10:59:55 +03:00
Liubov Talamanova
be6a3c34f1 [POT] Throw exception for IRv10 (#10345)
* [POT] Throw exception for IRv10

* Update reference models

* Updated AC framework name from dldt to openvino
2022-02-17 10:54:08 +03:00
Vladimir Dudnik
29883a152a fix 79520 (#10449) 2022-02-17 10:52:30 +03:00
Egor Shulman
ff293f5560 [CPU] Disable display of constant layers in PerfMap (#10307) 2022-02-17 10:51:07 +03:00
Egor Duplensky
541627d319 [CPU] [SANITIZER] Avoid possible stack-use-after-scope (#10377) 2022-02-17 10:27:58 +03:00
Ivan Tikhonov
3597ae61f9 Fix increased build time and memory consumption caused by multiple ov::Any instantiation (#10452)
* Fix increased build time and memory consumption caused by multiple instantiations of ov::Any.

* delete unused method, correct exception message

* codestyle

* Resolve review comment

* fix exception: throw it in else branch
2022-02-17 09:08:55 +03:00
Gleb Kazantaev
926460e603 Fix Coverity issues (#10427) 2022-02-17 08:54:57 +03:00
Mateusz Tabaka
ab4a11b3bd Remove unnecessary AutoBroadcastSpec parameter in MatMulMultiplyFusion (#10005) 2022-02-17 08:51:32 +03:00
Julia Kamelina
1fc61299c8 update omz submodule (#10441) 2022-02-17 00:50:21 +03:00
Tomasz Dołbniak
90a100d5f6 Default opset bump in ONNX FE (#10437) 2022-02-17 00:47:07 +03:00
Fedor Zharinov
00abcbacc4 Fix for Layout and image_info related issues (#10258)
* bugfix78627

* stylefix

* fix
2022-02-17 00:42:51 +03:00
Maxim Vafin
5cadee20eb Fix issue with constants having inputs in TF FE (#10393) 2022-02-16 20:40:23 +03:00
Andrey Zaytsev
abeb910ce2 Removing the old Intel logo from docs (#10429)
* Added info on DockerHub CI Framework

* Feature/azaytsev/change layout (#3295)

* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>

* Updated openvino_docs.xml

* Updated the link to software license agreements

* Revert "Updated the link to software license agreements"

This reverts commit 706dac500e.

* Removed the Intel logo

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
2022-02-16 17:26:26 +03:00
Yuan Xu
4f000b780d update pypi installation (#10217)
* Add Overview page

* update pypi installation

* Revert "Add Overview page"

* integrate review comments

* update formatting

* Update docs/install_guides/installing-openvino-pip.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/install_guides/installing-openvino-pip.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/install_guides/installing-openvino-pip.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

Co-authored-by: Adrian Boguszewski <adekboguszewski@gmail.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-02-16 17:09:56 +03:00
Egor Shulman
5bf9631073 Fixed ProfilingInfo layer status (#10342) 2022-02-16 16:10:19 +03:00
Anton Grishin
05650551b7 [GNA] Fix static analyzer issues (#10379)
* fix incorrect braces

* move pointer check

* add pointer check to VerifyConcat

* Prevent iterator invalidation
2022-02-16 15:46:32 +03:00
Ilya Churaev
434d7bbecc Fixed 4458 warning for Windows (#10418) 2022-02-16 11:39:43 +00:00
Anton Pankratov
5b8b698f88 Fixed ICore GetSupportedProperties (#10394)
* Added ICore::get_property

* Added tests

* Format fix

* All properties
2022-02-16 14:36:01 +03:00
Andrey Noskov
7a24f53b57 [GNA] Moved am_intel_dnn tests (#10294)
* [GNA] am_intel_dnn tests moved from deprecated tests

* fixed code style

* [GNA]fixed copyright date
2022-02-16 14:21:12 +03:00
Andrey Noskov
e2948a807c [GNA] Moved cpp_wrapper test (#10297)
* [GNA] Moved cpp_wrapper test

* [GNA] fixed copyright date
2022-02-16 14:19:29 +03:00
Nadezhda Ageeva
fc5a416423 [SAMPLES][GNA] Update C++ speech sample with new config API (#10357)
* [SAMPLES][GNA] Update speech sample with new config API

* Review comments

* Some additional checks
2022-02-16 13:23:50 +03:00
Alexander Zhogov
2e71fccd82 Azure CI: Disable tests on Mac due to long building 2022-02-16 13:12:06 +03:00
Anton Dudchenko
483b3828ca [VPU] Enable CheckTensorPrecision tests (#10390)
Enable CheckTensorPrecision tests for the myriad plugin.
-75944
2022-02-16 13:06:13 +03:00
Artyom Anokhov
ba69bae055 [Scripts] Remove MacOS install dependencies (#10397)
* OpenVINO scripts: Removed legacy install_guide.html. Removed installation of scripts for macOS

* scripts/CMakeLists: optimized if case
2022-02-16 12:52:57 +03:00
Chen Xu
4d954d0c13 [CPU] Fix the unnecessary calculation of blk_stride for dynamic shape (#10385) 2022-02-16 12:20:01 +03:00
Andrew Kwangwoong Park
2a1d8d7e99 [GPU] Minor fix for dump layer (#10291)
- Replace find with compare func to avoid dumping all layers that contain layer name

Signed-off-by: Andrew Kwangwoong Park <andrew.kwangwoong.park@intel.com>
2022-02-16 12:02:28 +03:00
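The filter change in #10291 can be illustrated with a minimal, hypothetical sketch (not the actual GPU plugin code): a substring `find` matches every layer whose name merely contains the requested name, while an exact compare dumps only the one layer asked for.

```python
# Illustrative sketch of the dump-layer filter change; names are assumptions.

def should_dump(layer_name: str, requested: str) -> bool:
    # Fixed behavior: exact match only.
    # Old behavior was effectively: requested in layer_name
    return layer_name == requested

layers = ["conv1", "conv1_relu", "conv10"]
assert [l for l in layers if should_dump(l, "conv1")] == ["conv1"]
assert [l for l in layers if "conv1" in l] == layers  # old substring match hit all three
```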
Nikolay Tyukaev
0c4d50239a update requirements to fix tabs (#10409) 2022-02-16 11:47:11 +03:00
Gleb Kazantaev
709084888a Remove deprecated classes from openvino headers (#10371)
* Remove deprecated classes from openvino headers

* Fix tests
2022-02-16 11:41:16 +03:00
Ilya Churaev
0b27fb80b1 Fix for new coverity issues (#10378)
* Fix for new coverity issues

* Fixed cc coverity

* Fixed code style

* Revert some changes

* Fixed build
2022-02-16 11:12:24 +03:00
Nikita Malinin
c8ce93290e [POT] Sync mode only for gna_sample (#10355)
* Sync mode only for gna_sample

* Disable test
2022-02-16 11:00:13 +03:00
Vladimir Zinoviev
e22a2b3076 [CommonTransformations] Fix default output take from Split/VariadicSplit (#10395) 2022-02-16 10:59:11 +03:00
Mateusz Bencer
0a056857c5 fix handling stride_y (#10398) 2022-02-16 07:57:56 +00:00
Mingyu Kim
c0d54e48bb [GPU] Bugfix for onednn post op optimization (#10416)
When the post-op chain has a pattern like the one below, binary_mul was previously ignored.
1. binary_add
2. eltwise_linear
3. binary_mul
4. binary_add

It happens when prev_post_op_idx == 2 and cur_post_op_idx == 4:
prev_post_op_idx was supposed to advance to idx 3, but it did not.
2022-02-16 10:44:42 +03:00
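The index-advancing bug described above can be sketched in a few lines (a hypothetical analogue, not the actual GPU plugin code): when walking the post-op chain pairwise, the previous-op cursor must advance one op at a time, otherwise an op sitting between the two cursors is silently skipped.

```python
# Hypothetical sketch of the post-op traversal fix in #10416;
# op names mirror the pattern from the commit message.

def collect_fused_ops(post_ops):
    """Walk the post-op chain pairwise and record every op visited."""
    visited = []
    prev_idx, cur_idx = 0, 1
    while cur_idx < len(post_ops):
        visited.append(post_ops[prev_idx])
        # The buggy version could jump prev_idx from 2 straight to 4,
        # skipping binary_mul at idx 3; the fix advances step by step.
        prev_idx += 1
        cur_idx = prev_idx + 1
    visited.append(post_ops[prev_idx])
    return visited

chain = ["binary_add", "eltwise_linear", "binary_mul", "binary_add"]
assert collect_fused_ops(chain) == chain  # no op is skipped after the fix
```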
Vladislav Golubev
fa4246d531 [LPT] Security fixes (#10381) 2022-02-16 10:31:17 +03:00
Taylor Yeonbok Lee
cbb5dff9c1 Fix coverity errors (#10384) 2022-02-16 10:10:10 +03:00
Ivan Tikhonov
06eb74b77f Fix MakeStateful transformation: use tensor names instead of friendly names (#8997)
* Use tensor names instead of friendly names, handle one output tensor to several Result ops case

* fix python tests

* fix python test

* fix incorrect merge

* remove redundant files

* fix variable names generation, fix python test

* Apply review comments

* fix python test
2022-02-16 09:26:31 +03:00
Jan Iwaszkiewicz
e71f23fc7e [PYTHON] Add __repr__ to main objects (#10365) 2022-02-15 21:30:33 +00:00
Evgenya Stepyreva
d14f1e54a5 MatMul Shape Inference (#10348)
* Proper dynamic dimension broadcasting

* make shape infer race condition reproducer

* Use ngraph only

* MatMul shape inference

* Style

* Dynamic rank case covered

* Build fix

Co-authored-by: Maksim Kutakov <maksim.kutakov@intel.com>
2022-02-16 00:22:46 +03:00
Vladimir Dudnik
eda4cbf30e [OMZ] rest of public models with layout (#10293)
* update OMZ submodule, rest of public models with layout

* sync with PR-10150

* ac fixes for WB

* fix CVS-78616
2022-02-15 23:42:41 +03:00
Maxim Shevtsov
317b956d2e fixed possible situation when auto-batching returns zero requests (#10388) 2022-02-15 15:13:25 +00:00
Mikhail Nosov
d5e8e0fb88 Fix coverity findings - add nullptr check before dereferencing (#10375)
Even though it is not possible to hit this situation with existing plugins, there is a theoretical possibility that some plugin may return 'nullptr', as that is allowed.
So this check shall remain in the generic part, which should not rely on plugin-specific behavior.
2022-02-15 18:01:05 +03:00
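The defensive pattern from #10375 can be shown with a hedged Python analogue (names are illustrative; the real code guards a C++ pointer, not a dict): the generic layer validates the object a plugin hands back before using it, even though no current plugin returns a null one.

```python
# Hedged Python analogue of the nullptr-before-dereference guard.

def query_plugin_metric(plugin, key):
    if plugin is None:  # the check added by the fix
        raise RuntimeError("plugin returned a null object")
    return plugin.get(key)

assert query_plugin_metric({"FULL_DEVICE_NAME": "CPU"}, "FULL_DEVICE_NAME") == "CPU"
try:
    query_plugin_metric(None, "FULL_DEVICE_NAME")
except RuntimeError:
    pass  # guarded failure instead of a crash on dereference
```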
Maxim Andronov
dc905f972a [CPU] AdaptivePooling child edges number check fix (#10372) 2022-02-15 17:59:51 +03:00
Ivan Novoselov
fa6865d569 [CPU] Disable MatMul+FQ(I8 out) if MatMul cant execute in I8 (#10316) 2022-02-15 17:59:06 +03:00
Maxim Vafin
0793a56260 Fix Conv3D translator in TF FE (#10387) 2022-02-15 17:53:13 +03:00
Mikhail Letavin
f150e2ad09 [GPU] Remove debug suffix from OpenCL.dll on Windows (#10361) 2022-02-15 16:43:40 +03:00
Sergey Lyubimtsev
498d865ea6 Correction for install guides: (#10373)
- OpenVINO installer path for macOS
- Default install path on macOS
- Red Hat Enterprise Linux 8.x, 64-bit is not part of the IRC installer
2022-02-15 16:22:26 +03:00
Gleb Kazantaev
b837b7e32c Fix Coverity Issues (#10376) 2022-02-15 15:26:04 +03:00
Pavel Esir
121d59aa80 [MO] move importlib-metadata into setup.py (#10319)
* handle 'and' marker in requirements

* Revert "handle 'and' marker in requirements"

This reverts commit 952bb949ca.

* moved importlib-metadata from requirements.txt into setup.py
2022-02-15 15:01:27 +03:00
Indira Salyahova
f1557c06de [POT] Fix inference sample in fbc when get list prediction (#10159)
* fix: inference sample in fbc when get list prediction

* update reference metrics
2022-02-15 14:42:40 +03:00
Wilson Seok
e168c9b1c3 Add slt in template plugin/tensor iterator (#9692)
* Remove fp16 of Convert layer test from skip_tests.config.cpp as it works now

* update repo

* add initial op reference code of TensorIterator with LSTM body function

* add GRU/RNN case in setup

* add all other test cases

* add visitor api test

* remove unnecessary header files

* fix clang-format issue

* fix copyright year and remove ngraph_helper namespace

* rename ti.cpp to tensor_iterator.cpp in core unit test

* apply suggestions
2022-02-15 13:48:18 +03:00
Ivan Novoselov
68c390f679 [Snippets][CPU] MKLDNNSnippetNode adopts canBeInPlace logics from eltwise node (#10334) 2022-02-15 13:13:35 +03:00
Maksim Kutakov
788a5bb9f2 [CPU] Convolution plus sum fusing in the case of dynamic shapes (#10235) 2022-02-15 13:12:07 +03:00
Anastasia Kazantaeva
ccc38d22a8 Upgrade MO message for 2022.1 (#10315) 2022-02-15 13:10:45 +03:00
Alexander Zhogov
2b8e1ec49a Azure CI: no ARM triggers on docs/* (#10322)
* Azure CI: no triggers on docs/*

* Remove "PR:"
2022-02-15 13:04:44 +03:00
Taylor Yeonbok Lee
f5283300f0 Reduced available host VRAM & phys mem limitation (#10360) 2022-02-15 19:01:05 +09:00
Mateusz Tabaka
a875f6ed9c Add transformation that aligns elementwise input ranks (#10125)
* [CPU] Add transformation that aligns elementwise input ranks

* fix tests - check also aBcd16b format

* add support for fq

* add test for sqr diff

* move to moc transformations

* fix tests

* align only for numpy autobroadcast type

* fix fetching autob from fq

* [CPU] Eltwise tests corrected & callback for CPU removed

* remove transformation callback call

* revert changes to getMKLDNNOutputMemoryFormats

* remove comment

* use single wrap_type

Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
2022-02-15 12:47:54 +03:00
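The rank-alignment idea behind #10125 can be sketched under numpy broadcasting semantics (a minimal sketch, not the actual transformation code): the lower-rank elementwise input is padded with leading 1-dimensions until both inputs have equal rank.

```python
# Minimal sketch of numpy-style rank alignment for eltwise inputs.

def align_rank(shape, target_rank):
    """Prepend 1s so the shape reaches target_rank (numpy autobroadcast style)."""
    return [1] * (target_rank - len(shape)) + list(shape)

assert align_rank([3, 4], 4) == [1, 1, 3, 4]       # rank 2 aligned to rank 4
assert align_rank([2, 3, 4, 5], 4) == [2, 3, 4, 5]  # already aligned: unchanged
```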
Ilya Znamenskiy
523adff17a [GPU] Fully connected int8 optimizations, some fixes, better fused ops support (#10035) 2022-02-15 12:33:16 +03:00
Andrei Gorbachev
64812fd635 [GPU] disable options in batch compilation (#10311) 2022-02-15 08:50:58 +00:00
Ilya Znamenskiy
0099755434 [GPU] Gemm opt tile_n min size fix (#10369) 2022-02-15 11:48:02 +03:00
Artur Kulikowski
004daca1fa Clear outputs vector after run TestCase (#10279) 2022-02-15 09:41:01 +01:00
wood-ghost
ded2d00711 Add paddle logical and reduce ops support. (#10352) 2022-02-15 16:23:50 +08:00
Anton Pankratov
39c90e9d48 Streams number fix (#10336)
* Streams number fix

* fixed performance hint

* fixed format

* removed dbg

* simplified code

* reverted benchmark_app
2022-02-15 08:04:45 +00:00
Bartek Szmelczynski
2b03d5fe66 [MO args][ONNX FE]fix cutting graph with input, output or both (#9698)
* fix cutting graph with input, output or both

* fix collisions

* add regex

* revert changes to regex, fix decode_name_with_port function

* fix collisions

* optimize try_get_node function

* swap bool with enum

* revert accidental import

* optimize the code

* Update tools/mo/unit_tests/mo/moc_frontend/moc_extractor_test_actual.py

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

* Update tools/mo/unit_tests/mo/moc_frontend/moc_extractor_test_actual.py

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

* Update tools/mo/unit_tests/mo/moc_frontend/moc_extractor_test_actual.py

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

* Update tools/mo/unit_tests/mo/moc_frontend/moc_extractor_test_actual.py

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

* Update tools/mo/unit_tests/mo/moc_frontend/moc_extractor_test_actual.py

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

* remove redundant check

* fix wrong nodes returns

* fix decode_with_port_name implementation, add comments

* reduce code duplicates

* remove redundant imports

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>
2022-02-15 10:55:40 +03:00
Vladislav Golubev
d48dd1f26c [Transformations] BatchNormDecomposition fix (#10310)
* [Transformations] Changed BN decomposition

* matcher updated to cover dynamic shape in opset1 case

* BatchNormDecomposition: added positive test-cases

* removed WA
2022-02-15 10:48:30 +03:00
Alexey Lebedev
e85c473d59 [tools] fix bin processing in benchmark app (#10366)
* fix bin reading

* Remove unsupported type
2022-02-15 10:34:14 +03:00
Indira Salyahova
acf6185bf3 Update load image in sample (#10223) 2022-02-15 10:18:27 +03:00
Mingyu Kim
13c024b7a3 Remove unnecessary cout message (#10346) 2022-02-15 16:14:56 +09:00
Ilya Churaev
8020a7abcc Disabled LTO for frontend_common (#10362) 2022-02-15 06:03:20 +00:00
bell
f75e50cc88 limit gpu compiling threads (#10349)
* limit gpu compiling threads

Signed-off-by: fishbell <bell.song@intel.com>

* switch to 2.0

Signed-off-by: fishbell <bell.song@intel.com>

* clang format

Signed-off-by: fishbell <bell.song@intel.com>
2022-02-15 08:52:49 +03:00
Maxim Andronov
c3c52bae63 [CPU] Convolution caching support (#9954) 2022-02-15 08:47:03 +03:00
Anton Chetverikov
84ee38d89e [MO] Move redundant checks in ScatterUpdate operation shape infer (#10306)
* Add extender for ScatterUpdate operation

* Remove scatterupdate extender

* Remove redundant checks in Scatter shape inference function

* Move checks to ScatterElementsUpdate operations

* move checks to appropriate place
2022-02-15 04:55:38 +03:00
Jacek Skowron
a0ad849c19 [DOCS] add install guides minor changes (#10317)
* [DOCS] add minor changes to install guides

* [DOCS] add minor changes to install guides
2022-02-15 02:43:50 +03:00
Maxim Andronov
1ab9c07ccd [CPU] Skip dynamic tests which executed via legacy API (#10358) 2022-02-15 00:45:50 +03:00
Daniil Lyakhov
2f9c5df271 [Ngraph transformation][Pruning]Matmul ops pruning support (#10211)
* Linear pruning support

* Minor fix

* Fix types

* Fix: stop 1d multiply propagation
2022-02-14 22:00:29 +03:00
Mikhail Nosov
2f876e3b5b Fix ONNX's PriorBoxClustered accuracy (#10091)
* Fix ONNX's PriorBoxClustered accuracy
If step_widths == 0 and step_heights == 0, but 'step' is 16, then we should treat both as 16

* Removed workaround for ONNX frontend
2022-02-14 20:55:41 +03:00
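The step-defaulting rule described in #10091 can be sketched as follows (illustrative names, not the actual ONNX frontend code): when both per-axis steps are unset (0) but a scalar 'step' is provided, both axes fall back to it.

```python
# Sketch of the PriorBoxClustered step fallback; names are assumptions.

def resolve_steps(step_w, step_h, step):
    """Return the effective (step_w, step_h) pair."""
    if step_w == 0 and step_h == 0 and step != 0:
        return step, step  # the fix: both axes inherit the scalar step
    return step_w, step_h

assert resolve_steps(0, 0, 16) == (16, 16)  # the case from the commit message
assert resolve_steps(8, 4, 16) == (8, 4)    # explicit per-axis steps win
```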
Alexey Lebedev
d3712a148b [tools] cross check tool with api 2.0 (#10058)
* save work

* save work

* save work

* basic changes with api 2.0

* Support input file mapping and bin files

* Some impovements

* remove mapping support

* Add -ref_layers parameter

* Fix error handler

* Update Readme and remove old parameters

* Fix readme

* remove info about precision

* rename layer to op

* rename blob to tensor

* remove info about shape

* remove unused imports
2022-02-14 20:25:31 +03:00
Katarzyna Mitrus
0050643e9b Add BroadcastConstRangeReplacement transformation (#10318) 2022-02-14 19:42:51 +03:00
Dmitry Pigasin
3a5d821219 [IE Python Sample] Update docs (#9807)
* update hello_classification readme

* update classification_async readme

* update hello_query_device readme

* Fix hello_classification launch line

* Update hello_reshape_ssd readme

* update speech sample docs

* update ngraph sample docs

* fix launch command

* refactor py ngraph imports

* Replace `network` with `model`

* update example section with openvino-dev

* Update samples/python/classification_sample_async/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/classification_sample_async/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/hello_classification/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/hello_classification/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/hello_reshape_ssd/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/ngraph_function_creation_sample/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/ngraph_function_creation_sample/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/ngraph_function_creation_sample/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/ngraph_function_creation_sample/README.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Replace `Inference Engine` with `OpenVINO`

* fix ngraph ref

* Replace `Inference Engine` by `OpenVINO™ Runtime`

* Fix IR mentions

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-02-14 19:03:45 +03:00
Dmitry Pigasin
310eb81403 [IE Samples] Update docs for C++ samples (#9937)
* update hello classification readme

* update hello classification readme

* update classification async readme

* replace `network` with `model`

* update example section with openvino-dev

* update hello query device readme

* Update hello reshape readme

* Update ngraph func creation readme

* update speech sample readme

* update hello nv12 readme

* Apply suggestions from code review

review comments accepted

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Replace `Inference Engine` with `OpenVINO`

* fix model ref

* Replace `Inference Engine` by `OpenVINO™ Runtime`

* Fix IR mentions

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
2022-02-14 19:03:19 +03:00
Egor Duplensky
3fcff15166 [CPU] Fix performance hint property handling (#10351) 2022-02-14 18:42:57 +03:00
Ilya Lavrenov
2d3bd40c3d Removed dead code (#10331) 2022-02-14 17:57:27 +03:00
Katarzyna Mitrus
e1197065fe [Docs] Add Slice-8 op cpp constructors docs (#10320) 2022-02-14 17:46:45 +03:00
Xuejun Zhai
9b41aa707d Modify for CVS-69023: hint configuration (#10259)
Signed-off-by: xuejun <xuejun.zhai@intel.com>
2022-02-14 17:46:11 +03:00
Gleb Kazantaev
a3d5b6501d Fix get_constant_from_source (#10350) 2022-02-14 16:03:12 +03:00
Pavel Esir
d1477b8569 fixed setting 'out_ports_count' in ir_reader (#10265) 2022-02-14 16:01:22 +03:00
Mateusz Tabaka
08eb4766f2 [CPU] Don't change inputs child precision if it has Subgraph consumers (#10238) 2022-02-14 15:54:35 +03:00
Andrey Zaytsev
25bd2c8aee Feature/azaytsev/docs dlsteamer revision (#10155)
* Added info on DockerHub CI Framework

* Feature/azaytsev/change layout (#3295)

* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>

* Updated openvino_docs.xml

* Updated the link to software license agreements

* Revert "Updated the link to software license agreements"

This reverts commit 706dac500e.

* Revised dlstreamer documentation

* Minor edits

* Fixed link

* Fix

* Edits after review

* Shorten DL Streamer name in the TOC

* Update documentation.md

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
2022-02-14 12:43:52 +00:00
Maxim Andronov
9ac542c455 [CPU] Restore legacy SetBlob and GetBlob to API 1.0 version (#10094) 2022-02-14 15:35:13 +03:00
Daniil Lyakhov
56be1a5438 Change User Transformations applying order in MO (#10241)
* Fix user transformation order in mo

* Move user transformation behind FP16 compression

* Move user transformation call before fp16 compression
2022-02-14 15:06:09 +03:00
Tomasz Dołbniak
a9b6eaf5c0 Multiple ONNX opset imports handling (#10332) 2022-02-14 12:59:41 +01:00
Dmitry Pigasin
9b1e4b801b Add -layout option (#10272) 2022-02-14 14:47:10 +03:00
Nikolay Shchegolev
3cb7592607 [CPU] Gather node. Support case with batchDims == indicesRank. (#10170) 2022-02-14 14:44:10 +03:00
Gorokhov Dmitriy
be4464ca2b [CPU] Migrated legacy post ops mechanism on runtime data pointers (#9938) 2022-02-14 14:17:45 +03:00
Jan Iwaszkiewicz
9e89ee2478 [PYTHON] New Python docs and refactor/improvements (#10032) 2022-02-14 10:24:33 +01:00
Irina Efode
7ff5f5ea70 [IE TESTS][IE CONFORMANCE] Move Read_ir tests to Conformance (#10300) 2022-02-14 12:15:37 +03:00
hyunback kim
c5b26bc10c [GPU] Support deconv double blocked format for b=32 (#10164)
* [GPU] Support batch32 deconv onednn

onednn rls-v2.6-pc2 supports deconv batch32,
so remove the batch size limitation.

Signed-off-by: hyunback <hyunback.kim@intel.com>

* Merge duplicated checks of the onednn condition in deconv.

Signed-off-by: hyunback <hyunback.kim@intel.com>

* Update to use is_node_for_onednn func in get_preferred_impl_type

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-02-14 17:39:26 +09:00
Anastasia Kuporosova
931f4c077d [Python API] Update python test installation (#10283) 2022-02-14 11:24:29 +03:00
Anton Pankratov
be8e15c180 fix HETERO with branching without splits (#10325)
* Default value of streams in ba is AUTO

* Fixed hetero cases with branches

* Fixed format
2022-02-14 10:36:41 +03:00
Roman Lyamin
d13e04a693 [GPU] convolution_kernel_bfyx_1x1_opt fix (#10338) 2022-02-14 10:32:19 +03:00
Maksim Derbasov
bb0d82f724 Fix warnings (#10278) 2022-02-14 07:48:41 +03:00
Mikhail Nosov
d85715f991 Remove dynamism from API 1.0 (#10167)
* Refresh the PR

* Added check for dynamic inputs to LoadNetwork/QueryNetwork

* Fix review comment

* Added 'validation' callback to 'load network from file'

* Fix MockICore

* Added null-pointer check
2022-02-13 20:41:14 +03:00
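The guard added in #10167 can be illustrated with a hedged sketch (names and the -1 dynamic-dimension convention are assumptions, not the actual API 1.0 code): LoadNetwork/QueryNetwork reject a model as soon as any input shape contains a dynamic dimension.

```python
# Illustrative sketch of the dynamic-input check for API 1.0 entry points.

def check_no_dynamic_inputs(input_shapes):
    """Raise if any input shape has a dynamic (-1) dimension."""
    for name, shape in input_shapes.items():
        if any(d < 0 for d in shape):
            raise ValueError(f"Input '{name}' has a dynamic shape: {shape}")

check_no_dynamic_inputs({"data": [1, 3, 224, 224]})  # static shapes pass
try:
    check_no_dynamic_inputs({"data": [-1, 3, 224, 224]})
except ValueError:
    pass  # dynamic shapes are rejected under API 1.0
```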
Ilya Lavrenov
ba19551b13 Fixed typo (#10313) 2022-02-13 16:20:41 +03:00
Anastasia Popova
ac2e639ff8 Added telemetry to modules names list. (#10295) 2022-02-13 10:28:17 +03:00
Ilya Lavrenov
80a901e103 Add TF FE to OpenVINO package (#10314)
* Add TF FE to OpenVINO package

* Add double install for TF FE
2022-02-12 23:42:12 +03:00
Indira Salyahova
ea00eae922 [POT] Fix for measuring input shape when inference model with batch greater 1 in FBC (#10063)
* fix: different batch shape in prediction and target in ac

* add calculate metric in engine True

* resolve conflicts
2022-02-12 19:12:58 +03:00
Nikita Malinin
8e43987cd7 [POT] Update IEEngine for SW API support (#10304)
* Update IEEngine for SW API support

* Change Engine for GNA sample

* Change stacks into reshape
2022-02-12 18:57:35 +03:00
Indira Salyahova
976a20cedf [POT] Update input pattern (#10220)
* Update special_patterns.py

* Update IgnoredPatterns.md
2022-02-12 18:56:41 +03:00
Vladislav Volkov
78281fef74 [CPU] [Ngraph] Fix of memory leak in PassBase::get_name and leak in jit_avx2_1x1_convolution_with_dw_conv_fwd_t kernel (#10199) 2022-02-12 15:48:49 +03:00
Maksim Kutakov
451453c4ce [CPU] Fixes for CpuBlockedMemoryDesc constructor and reorder availability checks (#10299) 2022-02-12 15:29:55 +03:00
Alexander Zhogov
e49370c008 Azure CI: Enable tests on Mac again 2022-02-12 14:22:37 +03:00
Alexander Zhogov
74475e216d Azure CI: Add ccache on Mac (#10290)
* Azure CI: Add ccache on Mac

* Temp OFF

* disable tests
2022-02-12 11:52:07 +03:00
Ivan Tikhonov
9989db5ae0 Rename frontend extension files (#10257)
* Delete _extension suffix in file names; add extension.hpp header to include all extensions

* add extension.hpp file to include all extensions

* codestyle
2022-02-12 09:19:20 +03:00
Maxim Shevtsov
e3cc4833f4 Auto batch smart reshape strict testing (once we moved to dim tracking) (#10253)
* fixed perf-counters

* explicit auto-batching params that should guarantee the auto-batching is triggered (to avoid fallback to no-batching when the selected batch size is just 1)

* makeConvPoolReluNoReshapes and using that whenever applicable to guarantee the auto-batching is required (not important for things like plugin/executable-network config tests, but important for the inference requests)

* getDefaultNGraphFunctionForTheDevice moved to the ov_behavior_test_utils.hpp
2022-02-12 02:00:34 +03:00
Pavel Esir
653ed4a34c [MO] use revision hashes to compare IE & MO versions (#10230)
* fixed version comparison: extracted hashes are used for the comparison

* shortened 7 -> 11 to match the current version format from nightly

* corrected regex, added comparing by minimal hash len
2022-02-12 00:13:48 +03:00
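The hash-based comparison described in #10230 can be sketched as follows; the regex, version format, and hash lengths here are assumptions for illustration, not the exact ones used by MO. Two version strings are treated as the same build if their trailing commit hashes agree up to the length of the shorter abbreviation.

```python
# Hedged sketch of comparing builds by trailing commit hash.
import re

HASH_RE = re.compile(r"([0-9a-f]{7,40})$")  # assumed abbreviated-hash format

def same_build(ver_a, ver_b):
    a, b = HASH_RE.search(ver_a), HASH_RE.search(ver_b)
    if not (a and b):
        return False
    # Compare by the shorter hash length so a 7-char and an 11-char
    # abbreviation of the same commit still match.
    n = min(len(a.group(1)), len(b.group(1)))
    return a.group(1)[:n] == b.group(1)[:n]

assert same_build("2022.1.0-1abc123", "2022.1.0-1abc123def")
assert not same_build("2022.1.0-1abc123", "2022.1.0-9def456")
```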
Anton Pankratov
897e2acd91 Default value of streams in ba is AUTO (#10305) 2022-02-12 00:09:31 +03:00
Roman Lyamin
7b288d125a [GPU] Gather fusion tests fix (#10308) 2022-02-11 20:57:44 +03:00
Aleksandr Korolev
c2a9036482 [VPU] Fix performance hint (#10309) 2022-02-11 19:39:00 +03:00
guozhong wang
14c1e98e8c Guozhong/check format (#10184)
* remove formatTimeMilli from time_utils.cpp

* add traceCallStacks test case

* add traceCallStacks test case in format_test.cpp

* add param:"test" to function TraceCallStacks()

* catch the exception of checkFormat

* add space for try catch

* rollback time_utils.cpp time_utils.hpp and log_utils_format_test.cpp

* modify testcase for log.hpp

* modify testcase from format_s to format_s_d_ld_u_lu2
2022-02-11 19:10:13 +03:00
Yuan Hu
7abd61f867 [AUTOPLUGIN] OV config 2.0 support (#10191)
* add support for LOG_LEVEL and supported_properties

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix compile error

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* add test case for log_level and full_name

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update to ov 2.0

Signed-off-by: fishbell <bell.song@intel.com>

* fix benchmark_app failure for AUTO:GPU, GPU

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* add case

Signed-off-by: fishbell <bell.song@intel.com>

* refine logic

Signed-off-by: fishbell <bell.song@intel.com>

* add test cases

Signed-off-by: fishbell <bell.song@intel.com>

* add more cases

Signed-off-by: fishbell <bell.song@intel.com>

* fix redefinition

Signed-off-by: fishbell <bell.song@intel.com>

* cpu plugin only in cpu tests

Signed-off-by: fishbell <bell.song@intel.com>

* typo in parameter

Signed-off-by: fishbell <bell.song@intel.com>

* use _core directly

Signed-off-by: fishbell <bell.song@intel.com>

* fix multi case failure

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: fishbell <bell.song@intel.com>
2022-02-11 23:39:09 +08:00
Ivan Novoselov
cc602ac6fd [Snippets] Convert Load/Store to scalar versions if shape ends with 1 (#10292) 2022-02-11 17:27:32 +03:00
Liubov Talamanova
4d61600077 [POT] Fix cascade model names (#10112) 2022-02-11 15:54:41 +03:00
Victor Kuznetsov
bcd192e882 Revert changes with single-image-super-resolution-1032 - memcheck precommit (#10271) 2022-02-11 20:30:09 +08:00
Anton Pankratov
f36d3303d2 Added callback capturing notes (#10256)
* Added callback capturing notes

* fixed spelling
2022-02-11 14:51:32 +03:00
Sergey Shlyapnikov
03566b4e4d [GPU] Fix outputs blobs allocation for U16/I16 data types (#10180)
* [GPU] Fix outputs blobs allocation for U16/I16 data types

* [GPU] Add U32, U64, FP64 data types support; add information about legacy fused activations to .info file

* Update auto_batch.cpp

fixed u64 inputs for the auto-batching

Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
2022-02-11 14:08:36 +03:00
Nikita Demashov
20d2633af0 removed defaultPrecisions as global variable and added as field in Params class (#9185)
fix canConvolutionBeTransformed arguments

fix isAsymmetricOnWeights in GPU plugin

added defaultPrecisions in TestTransformationParams

set new default attribute precisions

try to set const default precisions in network_helper.cpp

apply precision_set

[LPT] Default precisions

rebase

remove extra const

used defaultPrecision in tests

fixed SimpleLowPrecisionTransformer default argument

fixed AttributeParameters default argument

added defaultPrecisions in functions

fix assign_and_read_value_transformation tests

fixed wrong defaultPrecisions definition

fixed ConcatWithNeighborsWithConvolutionTransformation tests

remove getDefaultPrecisions

rebase

remove getDefaultPrecisions from gpu plugin

remove getDefaultPrecisions from lpt_mkldnn_plugin.cpp

use predefined member

update mkldnn_plugin.cpp & lpt_mkldnn_plugin.cpp

resolved conversations

make all lambda captures by ref
2022-02-11 13:41:03 +03:00
tgubanova-lohika
04c1b9760c [GPU] Implement ExperimentalDetectronTopKROIs operation (#10208) 2022-02-11 13:32:49 +03:00
Vladimir Paramuzov
dc1e9aa9bd [GPU] 6d broadcast support (#10280) 2022-02-11 13:29:09 +03:00
Vladimir Paramuzov
013b5f5b5f [GPU] Added cl batched headers post-processing (#10093) 2022-02-11 13:26:22 +03:00
Nikita Malinin
d758a21d6e Update gna_sample with API 2.0 features (#10236) 2022-02-11 13:23:02 +03:00
Alexey Lebedev
31501a7992 Fix random (#10240) 2022-02-11 13:06:07 +03:00
Mateusz Tabaka
6e1bc49862 Update xfail reason for ssd mobilenet models (#10287) 2022-02-11 10:34:42 +01:00
Nikolay Tyukaev
f03590d245 fix edit on github for pot and ovsa (#10285) 2022-02-11 12:13:12 +03:00
Mikhail Nosov
5535fdefa9 Fix coverity scan issues (#10266)
* Fix coverity scan issues

For virtual 'noexcept' functions, everything that can throw an exception shall be handled inside the function

* Remove 'noexcept'
2022-02-11 10:44:57 +03:00
Tomasz Dołbniak
c186449735 Do not process null nodes in JSON analysis (#10269) 2022-02-11 08:42:25 +01:00
Mang Guo
8bbabf8720 [CPU] Get interpolate scales input during interpolate node init if the input is Constant. (#10229) 2022-02-11 10:27:50 +03:00
Maxim Andronov
cf805b17b9 [CPU] Support legacy dynamic batch via dynamic shapes (#9646) 2022-02-11 10:17:58 +03:00
Min, Byungil
281e38bd83 Use onednn reorder for newly added format (#10273)
+ Added new format to onednn optimized format list

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-02-11 15:58:11 +09:00
Anton Pankratov
1621a5a0b5 Used new config for streams and threads (#10150)
* Used new config for streams and threads

* Fixed review comments in ba

* format fix

* fixed hello_query_device

* Added STL string io

* fixed tests

* Fixed test

* Fixed build

* fixed format

* Fixed build

* try fix win

* other any io specialization

* Fixed after merge

* renamed streams

* build fixed

* fixed build

* fixed format

* fix for old mac build

* Fixed type of exception

* test fix
2022-02-11 09:22:45 +03:00
Nikolay Tyukaev
437bc3280d Feature/ntyukaev/add doxygen snippet sphinx (#10277)
* add doxygensnippet directive

* update MANIFEST.in
2022-02-11 09:19:46 +03:00
Jade Cho
dedcbeafa8 [GPU] Binary post-op support for full tensor. (#9856)
* [GPU] Binary post-op support for full tensor.

* Add unit tests

* Add a reorder if output dtype of conv layer is fp32.
2022-02-11 11:33:31 +09:00
Paul Youngsoo Ahn
fa69ee9596 [GPU] Update kernels to cache.json (#10260) 2022-02-11 10:51:37 +09:00
Irina Efode
fd79ca91a1 [IE TESTS] Rename Op_impl_check (#10275) 2022-02-10 21:39:54 +03:00
Maxim Shevtsov
e41e1f51a0 Auto batch smart reshape (relies on the dim tracking) (#9964) 2022-02-10 20:43:06 +03:00
Andrey Somsikov
510e5fb746 Do not publish coverity submission to azure (#10274) 2022-02-10 19:09:36 +03:00
Vladimir Gavrilov
7b1b6f22e5 Added i64 and i32 as admissible element types of input port 0 into op::v4::Interpolate::validate_and_infer_types(). (#10263) 2022-02-10 18:59:38 +03:00
Anton Pankratov
7c455c7f23 Removed ov::Any rvalue cast (#10267) 2022-02-10 18:53:21 +03:00
hyunback kim
efbfd957ff [GPU] Enable disabled network for oneDNNv2.6-pc2 (#10226)
Some networks newly use wtags in oneDNN.
Add g_os_is_yx_osa2_isa8_osv8_isv2

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-02-10 18:13:33 +03:00
Anton Chetverikov
50dffb80bb Add missed DeformableConvolution to back transformations (#10255) 2022-02-10 17:20:11 +03:00
Anastasiya Ageeva
87f8ff5918 Reviewed header files for new APIs (#9873)
* Reviewed header files for new APIs

* Update compiled_model.hpp

* Resolved conflicts

* Implemented review comments

* Fixed code style issues

* Fixed code style issues

* Fixed code style issues

Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2022-02-10 17:12:18 +03:00
Anton Chetverikov
9af8d9339c [MO] Avoid maskedconstant to array conversion (#10233)
* Avoid maskedconstant to array conversion

* remove redundant input

* Add link to github issue
2022-02-10 16:24:05 +03:00
Sergey Shlyapnikov
bc21e52912 [GPU] Fix FC 3D input size propagation and bias fusion (#10249) 2022-02-10 16:16:12 +03:00
Anton Grishin
d94cff59a3 [GNA] Add PReLu and LeakyReLu activations in tests (#10194)
original commit #9414
2022-02-10 16:15:47 +03:00
Andrey Zaytsev
54f56be077 Feature/azaytsev/docs update openvino readme (#10270)
* Added info on DockerHub CI Framework

* Feature/azaytsev/change layout (#3295)

* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>

* Updated openvino_docs.xml

* Updated the link to software license agreements

* Revert "Updated the link to software license agreements"

This reverts commit 706dac500e.

* Added POT, replaced IE with OV Runtime

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
2022-02-10 15:44:35 +03:00
Aleksandr Korolev
8b7aeb7f52 [VPU] Coverity fixes (#10090) 2022-02-10 15:31:00 +03:00
Evgenya Stepyreva
9ad09f2120 Shape inference adoption for dimension tracking (#10016)
* Shape inference adoption for dimension tracking

* Style

* test adj

* tests fixed
2022-02-10 15:30:18 +03:00
Anton Voronov
d5c837cc1b [CPU] added some legacy parallel methods to fix perf issues (#9758) 2022-02-10 15:13:05 +03:00
Andrei Molotkov
80be557605 [GPU] Fix Backpropagation issue with BS >= 16 (#10228) 2022-02-10 14:53:27 +03:00
Ilya Lavrenov
ea26ec32b3 Removed runtime namespace (#10231) 2022-02-10 14:53:13 +03:00
Alexey Lebedev
d484411f39 [tools] Fix image_info detection in benchmark app (#10192)
* Fix image_info detection

* exception instead warning in case input data is not compatible with input
2022-02-10 14:32:56 +03:00
Andrew Kwangwoong Park
51c89dff26 [GPU] Fix detection output stage-0 kernel (#10262)
- Change the constant value to the maximum work group size
- Add CLK_GLOBAL_MEM_FENCE barrier to synchronize storing result in intermediate buffer
- Add condition to prevent access local array out of range

Signed-off-by: Andrew Kwangwoong Park <andrew.kwangwoong.park@intel.com>
2022-02-10 19:43:16 +09:00
Evgenya Stepyreva
89c3a18f83 Fix TensorIterator dynamic rank output (#10247)
* Fix TensorIterator dynamic rank output

* style
2022-02-10 13:03:16 +03:00
Ivan Tikhonov
3f0e532dce Fix the issue in values_from_const method in OVTF integration with TF FE (#10225)
* Fix the issue in values_from_const method in OVTF integration with TF FE

* fix comment
2022-02-10 11:33:42 +03:00
Min, Byungil
334e9e994e Revert WA for onednn first conv (#9783)
+ Reverted WA for fsv32 format first conv
+ Applied blocked input format bsv8fsv4 & bsv8fsv2 for onednn first conv
+ Implemented onednn usage for first conv of feature size 1
+ Added new weight format ABcd16a4b
+ Bugfix in fetch_weight
+ Updated thirdparty onednn_gpu
+ Known issue: AcdB16a4b is not supported

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-02-10 12:12:09 +09:00
Bartek Szmelczynski
36de4e8e28 [Model Enablement] fix default onnx domain (#10106) 2022-02-10 03:16:24 +03:00
Gleb Kazantaev
87c6e09cae Fix Add/MulFQFusion transformations (#10252) 2022-02-10 01:22:16 +03:00
Maxim Andronov
36afedd93d [CPU] Increase executor cache capacity (#10232) 2022-02-09 21:49:37 +03:00
Alexandra Sidorova
fce49e6d80 [Transformations] Added interchangeable reshape elimination (#9691)
* [Transformations] Added interchangeable reshape elimination

* Applied comments #2

* returned Reshape in condition

* applied comments #3

* applied comments #4

* added comment in plugin with reason about transformation
2022-02-09 21:11:49 +03:00
Mikhail Ryzhov
a002b26294 Fixed import for the new api 2.0 (#10175) 2022-02-09 20:51:49 +03:00
Vladislav Golubev
d28f8b7857 [LPT] Security fixes (#10243) 2022-02-09 20:46:39 +03:00
Tomasz Dołbniak
aedd902cd8 Use double quotes in JSON analysis (#10237) 2022-02-09 20:41:49 +03:00
Egor Shulman
840d2fb80d [CPU] Coverity fixes (#10207) 2022-02-09 20:39:50 +03:00
Gorokhov Dmitriy
6ea20340d1 [CPU] Fixed out of bounds read in JIT planar convolution (#7025) 2022-02-09 20:26:57 +03:00
Irina Efode
a37195492c [IE TESTS] Add exception to comparison to provide correct conformance results (#10197)
* [IE TESTS] Add exception to comparison to provide correct conformance results

* Apply comments
2022-02-09 19:15:00 +03:00
Nikolay Tyukaev
e81ca9f975 DOCS: change doc tests (#10213)
* change doc tests

* fixes

2022-02-09 18:28:54 +03:00
Maxim Shevtsov
c0a375f844 adding I64/U64 support to the auto-batching (#10234)
* adding I64/U64/etc support

* inputs precisions tests instantiations for the GPU and BATCH:GPU
2022-02-09 18:28:13 +03:00
Mikhail Nosov
f56c640550 SmartReshape: support Param->Convert->Reshape->Proposal pattern (#10204)
Current SmartReshape finds matched to Param->Reshape->Proposal patterns

    For FP16 models, an additional 'Convert' is inserted after 'Parameter'.

    This causes the transformation not to be applied, and 'ov::set_batch' or CNNNetwork::set_batch will throw

    Proposal1Scales and Proposal4Scales transformations were updated to handle these conditions
2022-02-09 17:44:54 +03:00
Tomasz Dołbniak
a60c110b96 Use i64 for ONNX Split attribute (#10203) 2022-02-09 17:30:00 +03:00
Vitaliy Urusovskij
c186069025 Fix several coverity issues (#10205)
* Update def value for GetParamAsBool() in legacy parseParams()

* Remove extra check from legacy convertFunctionToICNNNetwork()
2022-02-09 15:43:04 +03:00
Chen Xu
c93c9ec3d5 [CPU] Fix bug in topk_bubble_BLK_on_channel_horiz method (#10218) 2022-02-09 14:40:46 +03:00
Victor Kuznetsov
21601398d6 Remove dynamism from time_tests (API 1.0) (#10193) 2022-02-09 19:15:16 +08:00
Vladislav Golubev
051724f0d5 [LPT][Dynamic shapes] MoveFakeQuantize transformation fix (#10178)
* [LPT] MoveFQ fix

* [LPT] MoveFQ: added check on dynamic channel in case of per-channel fq

* [LPT] MoveFQ: tests extending
2022-02-09 13:55:50 +03:00
Taylor Yeonbok Lee
603ea50277 Fix max batch size to respect available virtual memory in Linux environment. (#10201) 2022-02-09 19:40:29 +09:00
Ilya Churaev
79fceddd7e Fixed some coverity issues (#10165) 2022-02-09 12:37:19 +03:00
Gleb Kazantaev
60011b6eb2 Fix EltwiseBroadcastFusion pass (#10214) 2022-02-09 12:35:38 +03:00
Pavel Esir
654b025a26 [MO] set explicitly argument dtype to int for np.split (#9988)
* forced split argument dtype to int

* added unit-test

* fixed typo in split_test.py

* set explicitly np.int64 instead of np.int

* use split_length's dtype
2022-02-09 12:16:33 +03:00
Anton Chetverikov
25ca17e789 [MO IR Reader] Update *Sequence backend_attrs (#10041)
* Update LSTMSequence backend_attrs

* Add missed attribute clip

* Update backend_attrs for all *sequence operations

* Add extender for GRUSequence

* Add GRUSequence to custom ops list

* use has_and_set instead of direct access to attributes
2022-02-09 12:13:23 +03:00
Gleb Kazantaev
4fdf71cdc1 Preserve RTInfo in output ports (#10114)
* Automation for preserving rt info in output ports; Update FunctionComparator to compare rt info correctly

* Update LPT tests to use real rt_info attributes, so they can be compared

* Fix tests
2022-02-09 12:09:23 +03:00
Daniil Lyakhov
0168bda833 [Offline Transformations] Reshape Layer Pruning Transformation Support (#9350)
* Reshape op pruning support

* Minor reshape fix

* GroupConv reshape extended support

* Comment ir test

* Fix: reshape can only work with constants

* Apply comments

* Fix output shape computing for reshape op

* Fix comment
2022-02-09 12:03:56 +03:00
Maxim Shevtsov
320c64de24 disabling auto-batching when batch<4 (as batch1 kernels are heavily optimized) (#10188) 2022-02-09 12:02:30 +03:00
Anastasia Kuporosova
04194b292d [Python API] Add if for yocto cross-compilation (#10216) 2022-02-09 11:56:42 +03:00
Maxim Vafin
52374a4b8b Write runtime version and how IR was generated (legacy path or not) (#10196) 2022-02-09 11:41:49 +03:00
Daria Mityagina
6dd6fb6c12 [VPU][XLink] Printf over XLink fails on OpenVINO 2021.4.2 - fix (#10099)
XLinkReadDataWithTimeout() was used with an incorrect value for the timeout parameter.
2022-02-09 11:30:29 +03:00
Edward Shogulin
c4e54d882b [LPT] StridedSlice extending (#10148)
* [LPT] StridedSlice extending

* [LPT] tests
2022-02-09 11:23:18 +03:00
Ilya Churaev
9d40c5184f Removed legacy names and environment variables from the code (#10195)
* Removed legacy names and environment variables from the code

* Support documented legacy variables

* Fixed core unit tests

* Fixed some test
2022-02-09 11:04:25 +03:00
Vitaliy Urusovskij
532a1da548 Fix "Error handling issue" (#10119)
* Fix coverity 'error handling issue' in ~CacheGuard()

* Fix coverity 'error handling issue' in reshape()
2022-02-09 11:04:02 +03:00
Sergey Lyubimtsev
acf8cacfbc requirements markers clean up (#10179)
* requirements markers clean up

* formatting & comments

* typos
2022-02-09 10:18:24 +03:00
Roman Lyamin
0d64afc2c8 [GPU] program_helpers::are_layouts_identical fix (#10109) 2022-02-09 09:27:15 +03:00
Sergey Shlyapnikov
8f0e974ee6 [GPU] Add new properties and fix benchmark_app (#10149) 2022-02-09 09:18:54 +03:00
Maxim Vafin
1970baeb1c Apply RIC for dynamic dimension in legacy MO (#10130)
* Apply RIC for dynamic dimension in legacy MO and fail if RIC wasn't applied to any input

* Fix moc tests
2022-02-08 22:17:19 +03:00
Smirnov Grigorii
d951433b12 fix bug in Serialize (#74447) (#9840)
* fix bug in Serialize (#74447)

add simple serialization test to check pads changes

clang fix

add check and change pads in conv

refactor ov::clone_model

fix

check in test

* fix FrameworkNode and add test

* fix assert in identity.cpp

* fix clone_nodes

* remove for node and constructor for node_input.cpp

add spaces

add space
2022-02-08 22:00:20 +03:00
Evgenya Stepyreva
a18069926e Partial Value propagation from partial to static value (#10162)
* Partial Value propagation from partial to static value

* Style

* Tests adjustment
2022-02-08 21:55:17 +03:00
Yury Gaydaychuk
0dfdadb531 [CPU] Clamp reduces float boundaries in the case of integer data (#6668) 2022-02-08 19:50:45 +03:00
Ivan Novoselov
b47b8ad4bf [CPU] Snippets throughput mode fixes (#9488) 2022-02-08 17:58:42 +03:00
Jacek Skowron
dfc738b493 [docs] update macos installation guide 2 (#9636)
* update macos installation guide

2022-02-08 16:44:57 +03:00
Nikita Malinin
0c855ee8b2 [POT] Renaming NXModel (#10168)
* NXModel -> CompressedModel renaming

* Update references & remove Dicts

* Pylint fixes
2022-02-08 14:07:12 +03:00
Indira Salyahova
f17c26506f Update utils.py (#10186) 2022-02-08 13:51:29 +03:00
Alexey Lebedev
24c4ccc621 [PYTHON API] add __hash__ for Type (#10059)
* define hash operator for type

* Fix code style
2022-02-08 13:28:25 +03:00
Evgenya Stepyreva
47b8c77a59 Q-DQ pairs folding where applicable (#10181) 2022-02-08 13:18:26 +03:00
Maxim Andronov
42a0ce0514 [CPU] Fixed dummy shape creation for Pooling (#10147) 2022-02-08 12:54:00 +03:00
Maksim Kutakov
7406b1ffc3 [CPU] Memory manager was introduced in MKLDNNMemory (#7925) 2022-02-08 12:34:17 +03:00
Anton Chetverikov
f9eaaa9ff6 [MO] Sqrt operation implementation (#9950)
* Add sqrt extender

* Update check to not use default infer if infer was set before

* Update comment

* Fix comment

* Remove Sqrt extender

* Remove unnecessary changes

* Add MO implementation of SQRT operation
2022-02-08 11:41:13 +03:00
Maxim Shevtsov
863c74471f Auto batch fix default val +test (#10169)
* default config value for the AUTO_BATCH_TIMEOUT

* test for default config value for the AUTO_BATCH_TIMEOUT

* default val for timeout var
2022-02-08 10:14:15 +03:00
Vladimir Dudnik
0a316216f3 update open_model_zoo submodule (#10182) 2022-02-08 09:31:22 +03:00
Vladislav Golubev
c4c46beb6b [CPU] Optimize*SequenceTransposes: Gather7->Gather8 (#10122) 2022-02-08 08:56:38 +03:00
Mingyu Kim
67e2bdfc28 [GPU] Update onednn to rls-v2.6-pc2 (#10156)
It is expected to have functional improvements
2022-02-08 09:47:33 +09:00
Nadezhda Ageeva
2215440188 [GNA] Set performance mode to undefined (#10174) 2022-02-07 23:04:29 +03:00
Jacek Skowron
65701e12ef [docs] update raspbianos installation guide (#9728)
* update raspbianos installation guide

2022-02-07 20:01:28 +00:00
Yegor Kruglov
9d3028a9f7 [MO] Pip installation message for unsatisfied dependencies (#9952)
* changed message for unsatisfied package

* changed warning message
2022-02-07 22:19:02 +03:00
Jacek Skowron
2d9a248912 [docs] update uninstall guide (#9725)
* CVS-71850 update uninstall guide

2022-02-07 18:19:09 +00:00
Edward Shogulin
c6c9a06d41 [LPT] getDataPrecision extending (#10071)
* [LPT] getDataPrecision extending

* [LPT] getDataPrecision unit tests addition
2022-02-07 19:49:01 +03:00
Anton Pankratov
e34ff009e0 Fix for mac caching test (#10151)
* Fix for mac

* Fixed rtti comparison

* used defined
2022-02-07 19:22:21 +03:00
Andrey Zaytsev
d62d185ac5 Feature/azaytsev/docs omz revision (#10176)
* Added info on DockerHub CI Framework

* Feature/azaytsev/change layout (#3295)

* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>

* Updated openvino_docs.xml

* Updated the link to software license agreements

* Revert "Updated the link to software license agreements"

This reverts commit 706dac500e.

* Added a link to the omz repo

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
2022-02-07 16:12:28 +00:00
Yegor Kruglov
bde1d5edb0 added condition for optional outputs (#10097) 2022-02-07 18:24:28 +03:00
Victor Kuznetsov
857c0bd9dd [Time tests] Update reshape pipeline (use default inputs before reshape for data generation) (#10129) 2022-02-07 22:50:12 +08:00
Maxim Shevtsov
14fcd196a3 updated the mem_statistics (since "current" is removed) and TOTAL_MEM, as it is now typed through Any (and hence needs the as<>()) (#10135) 2022-02-07 17:31:12 +03:00
Ivan Tikhonov
707a1b9377 FrontEnd OpExtension (#10153)
* Squash commits: OpExtension, pybindings, unit tests

* fix incorrect merge

* fix builds

* fix macro on Windows

* Update OPENVINO_FRAMEWORK_MAP to support any cnt of attributes, fix pybinding, resolve review comments, add unit tests

* Fix PEP8, fix unit tests build

* Remove exports from template classes

* fix MacOS build, fix copyrights, clean up

* investigate issue with reshape py tests: temporary delete OpExtension python tests

* Revert "investigate issue with reshape py tests: temporary delete OpExtension python tests"

This reverts commit 2ea2bc9e2e.

* fix model name for onnx tests

* fix python unit tests

* add new lines in the end of files

* fix unicode support on Win OS

* fix codestyle

* Update ends_with function implementation

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* update copyrights

* resolve review comments

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
2022-02-07 16:21:18 +03:00
Alexey Lebedev
89f071f5fa [PYTHON API] Forbid building python api with debug postfix (#10158)
* Forbid building python api with library postfix

* Fix condition
2022-02-07 13:58:53 +03:00
Alexandra Sidorova
57b08583cc [Benchmark] Align comments with command argument 'data_shape' (#9897) 2022-02-07 13:31:38 +03:00
Pavel Esir
3d6e90b8f9 concat['override_output_shape'] = True in StridedSliceNormalizer.py (#10045) 2022-02-07 13:24:56 +03:00
Mikhail Nosov
abda6eb4af Remove 'evaluate' from I420toRGB/BGR operations (#10128) 2022-02-07 13:05:51 +03:00
Nikita Demashov
74fa60cf86 [LPT] Fixed an incorrect condition & added test to MoveFakeQuantize transformation (#10009)
* fixed an incorrect condition & added test

* fixed an incorrect condition & added test
2022-02-07 12:32:49 +03:00
Anton Grishin
b365e67561 [GNA] Add support for non-functional subgraphs (#9732)
* [GNA] Add support for non-functional subgraphs

Details:
* Insert copy before the last layer to allow nonfunc subgraphs

Tickets:
57363

* Traverse graph in upstream order

* Add param-reshape-result tests

* Fix insert condition

* review comments
2022-02-07 12:21:23 +03:00
Anastasia Kuporosova
3c13cea02b [Python API] Fix import/export of model + update speech sample (#10103)
* Fix import/export of model

* update speech sample

* fix code-style

Co-authored-by: jiwaszki <jan.iwaszkiewicz@intel.com>
2022-02-07 12:12:06 +03:00
Fedor Zharinov
38f470c184 set U8 precision for image-like inputs even in case of random filling (#10140) 2022-02-07 12:09:16 +03:00
Ilya Znamenskiy
ac28063b19 [GPU] Gemm onednn implementation (#9984)
* [GPU] Gemm onednn implementation

* [GPU] Added implementation choice logic
2022-02-07 11:48:42 +03:00
Mikhail Nosov
9f9df184c4 Added compatibility check of layout with partial shape (#10144)
* Added compatibility check of layout with partial shape

E.g. layout "NC" is not compatible with PartialShape{1,3,224,224}

Check is added:
- For parameter set_layout
- For parameter set_partial_shape
- For result set_layout
- Checked also compatibility for all results after 'validate_and_infer_types'

* Fix incorrect tests

* Fix of more incorrect tests

* Removed a couple of obsolete error-handling tests - these are now caught at earlier stages

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-02-07 11:17:28 +03:00
Vladimir Paramuzov
ae4c727b31 [GPU] StridedSlice shape propagation fix (#10095) 2022-02-07 10:08:06 +03:00
Vladimir Paramuzov
12746efbe5 [GPU] Fixed safe index func for per-channel case (#10136)
Co-authored-by: Ilya Znamenskiy <ilya.znamenskiy@intel.com>
2022-02-07 09:59:52 +03:00
Ilya Churaev
a2ca1d4499 Merge IE & nGraph DG (#10055)
* Changed folder for documentation

* Fixed links

* Merged nGraph DG to OpenVINO Runtime UG

* Fixed errors

* Fixed some issues

* Fixed tree

* Fixed typo

* Update docs/documentation.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update README.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Fixed name

* Fixed snippets

* Small fixes

* Update docs/HOWTO/Custom_Layers_Guide.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/model_representation.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Fixed comments

* Try to fix doc

* Try to fix doc issue

* Update docs/OV_Runtime_UG/Integrate_with_customer_application_new_API.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-02-07 06:57:35 +03:00
Ilya Churaev
788fb5c010 Improvement for AtomicGuard (#10120) 2022-02-06 15:18:54 +03:00
Ilya Churaev
eff6084ec9 Fixed coverity issues for core and frontends (#10123)
* Fixed coverity issues for core and frontends

* Fixed code style

* Fixed comments

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-02-05 14:52:55 +03:00
Anastasia Kuporosova
7d1ad47611 [Python API] Install only one openvino/__init__.py (#10145)
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2022-02-05 14:48:11 +03:00
Vladislav Volkov
a365ee768b Fix for leaked ExecutorManager (#10070)
* Fix for leaked ExecutorManager

* Code style fix

* Fixed plugin pointer access from ExecutableNetwork
2022-02-05 14:03:50 +03:00
Oleg Pipikin
502c89e4a7 [HETERO] Fix segfault in supported/unsupported layers check (#10104) 2022-02-05 13:35:25 +03:00
Anton Pankratov
ced90de0a5 PERF_COUNT replaced with ov::enable_profiling (#10118)
* String conversions in any

* Fixed caching tests

* Fixed tests

* fixed build

* PERF_COUNT replaced with ov::enable_profiling

* fixed format

* fixed format

* fixed optimal config

* merge fix

* fix build

* format fix

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-02-05 13:27:46 +03:00
Anton Pankratov
213e02f3b0 Import Export using capabilities (#10081)
* String conversions in any

* Fixed caching tests

* Fixed tests

* fixed build

* Fixed gpu
2022-02-05 11:16:55 +03:00
Anton Pankratov
5f5bea2c5a Fix for android type info comparison (#10142)
* Any value can be created from inner string

* Fixed review comment

* strict str to value conversion

* fix format

* [VPU] update config header (#9857)

* [VPU] update config header

* Review fixes

* Performance hint config update

* Removal deprecated vpu config stuff

* Review changes

* Rename myriad properties from camelCase to snake_case

* Review changes

* Review fixes

* Removal intel_myriad::common namespace

* OV throughput stream option

* Test fix

* Reverted disable_convert & disable_reorder

* Bugfixes

* Change default value for PerformanceHintNumRequestsOption

* fixed excessive outputs copying (in case when the fallback happened) and updated the test for that (#10110)

* fixed excessive outputs copying (in case when the fallback happened) and updated the test for that

* enum eExecutionFlavor to cover initial state

* Transformations: eltwise and FQ fusings fixes (#10078)

* FQ fusings fixes

* FQ Fusings: added negative test-cases for non-broadcasted constant

* Disable single-image-super-resolution-1032  from MemCheck precommit (#10133)

* add performance hint to time infer

* disable model from memcheck

* Fixed input cut for case when port is not specified. (#10134)

* Fix for android type info comparison

Co-authored-by: Aleksandr Korolev <aleksandr.korolev@intel.com>
Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
Co-authored-by: Victor Kuznetsov <victor.kuznetsov@intel.com>
Co-authored-by: Anastasia Popova <anastasia.popova@intel.com>
2022-02-05 11:15:16 +03:00
Alexander Zhogov
18fd46a447 Revert "FrontEnd OpExtension (#9917)" (#10146)
This reverts commit 768f353300.
2022-02-05 10:42:17 +03:00
Alexander Zhogov
e0114fd22d Azure CI: increase timeout on Windows 2022-02-05 08:48:42 +03:00
Alexander Zhogov
6ac54df960 Azure CI: remove IB from running test (#10141) 2022-02-05 08:37:29 +03:00
Ivan Tikhonov
768f353300 FrontEnd OpExtension (#9917)
* Squash commits: OpExtension, pybindings, unit tests

* fix incorrect merge

* fix builds

* fix macro on Windows

* Update OPENVINO_FRAMEWORK_MAP to support any count of attributes, fix pybinding, resolve review comments, add unit tests

* Fix PEP8, fix unit tests build

* Remove exports from template classes

* fix MacOS build, fix copyrights, clean up

* investigate issue with reshape py tests: temporary delete OpExtension python tests

* Revert "investigate issue with reshape py tests: temporary delete OpExtension python tests"

This reverts commit 2ea2bc9e2e.

* fix model name for onnx tests

* fix python unit tests

* add new lines in the end of files

* fix unicode support on Win OS

* fix codestyle

* Update ends_with function implementation

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* update copyrights

* resolve review comments

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
2022-02-04 22:28:13 +03:00
Egor Duplensky
c83d265416 [CPU] Add support for OV2.0 configuration API (#9997) 2022-02-04 22:26:42 +03:00
Maxim Andronov
a8c520878d [CPU] Dummy shape creation fix for Deconvolution (#10079) 2022-02-04 21:43:25 +03:00
Anton Pankratov
69b118ed7b ov::Any can get value from stored string (#10131)
* Any value can be created from inner string

* Fixed review comment

* strict str to value conversion

* fix format
2022-02-04 20:41:37 +03:00
Nikolay Tyukaev
1abc6e2a16 edit log parse regex (#10117) 2022-02-04 20:15:26 +03:00
Anastasia Popova
12a310636d Fixed input cut for case when port is not specified. (#10134) 2022-02-04 19:03:12 +03:00
Victor Kuznetsov
b3a990b0a7 Disable single-image-super-resolution-1032 from MemCheck precommit (#10133)
* add performance hint to time infer

* disable model from memcheck
2022-02-04 18:00:00 +03:00
Vladislav Golubev
265ab03314 Transformations: eltwise and FQ fusings fixes (#10078)
* FQ fusings fixes

* FQ Fusings: added negative test-cases for non-broadcasted constant
2022-02-04 17:57:13 +03:00
Maxim Shevtsov
8a85bfa312 fixed excessive outputs copying (in case when the fallback happened) and updated the test for that (#10110)
* fixed excessive outputs copying (in case when the fallback happened) and updated the test for that

* enum eExecutionFlavor to cover initial state
2022-02-04 16:58:37 +03:00
Aleksandr Korolev
9743784f91 [VPU] update config header (#9857)
* [VPU] update config header

* Review fixes

* Performance hint config update

* Removal deprecated vpu config stuff

* Review changes

* Rename myriad properties from camelCase to snake_case

* Review changes

* Review fixes

* Removal intel_myriad::common namespace

* OV throughput stream option

* Test fix

* Reverted disable_convert & disable_reorder

* Bugfixes

* Change default value for PerformanceHintNumRequestsOption
2022-02-04 16:32:00 +03:00
Mateusz Tabaka
72216a9b95 [ONNX] Replace subgraph's inputs from parent with Parameter before node is created (#10113)
This patch fixes the case when the If operator has a subgraph containing just an Identity op,
whose input comes from the parent graph. Since Identity is eliminated,
its input is incorrectly pulled into this subgraph's body.
For example:
this ONNX subgraph:
```
               +-----------+
               |AveragePool|
               +-+---+-----+
                 |   |
            +----+   v
            |      .....
            |        |
            |        v
    +-------|--------------------------+
    |       |       If                 |
    |   then|branch      else branch   |
    +-------|--------+-----------------+
    |       |        |                 |
    |       v        |                 |
    |  +-----------+ |                 |
    |  | Identity  | |    .........    |
    |  +-----------+ |                 |
    |                |                 |
    |                |                 |
    +----------------+-----------------+
```
was converted to the following (incorrect) nGraph representation:
```
              +-------------+
              | AveragePool |
              +--+---+------+
                 |   |
            +----+   v
            |      .....
            |        |
            |        v
    +-------|---------------------------+
    |       |        If                 |
    |   then|branch       else branch   |
    +-------|---------+-----------------+
    |       v         |                 |
    |  +-----------+  |                 |
    |  | Parameter |  |                 |
    |  +-----------+  |                 |
    |       |         |                 |
    |       v         |                 |
    | +-------------+ |                 |
    | | AveragePool | |    .........    |
    | +-------------+ |                 |
    |       |         |                 |
    |       v         |                 |
    |   +--------+    |                 |
    |   | Result |    |                 |
    |   +--------+    |                 |
    |                 |                 |
    +-----------------+-----------------+
```

With this change, the subgraph's inputs from the parent scope are replaced with a
Parameter before the nGraph node is created. In that case, Identity's input
is a Parameter (not AveragePool), and therefore the 'then' branch looks like:
```
     +-----------+
     | Parameter |
     +-----------+
           |
           v
     +-----------+
     |  Result   |
     +-----------+

```

Ticket: 73895.
2022-02-04 12:23:27 +01:00
Ivan Novoselov
b7c62fcfbc [CPU] Improve weights sharing sync on multiple outputs (#10060) 2022-02-04 12:26:57 +03:00
Tomasz Dołbniak
797b2221be ONNX pooling - extended auto_pad attribute support (#10092) 2022-02-04 10:23:31 +01:00
Alexey Lebedev
7478915ef3 [PYTHON API] Fix InferQueue.is_ready() call (#10096)
* Fix is_ready and add tests

* remove wrong comment

* refactor test

* Fix code style
2022-02-04 11:57:56 +03:00
Indira Salyahova
da02951d67 [POT] Fix get layout from model (#10018)
* fix: layout pot

* layout

* fix: layout

* pylint

* add logger

* Update image_loader.py

* pylint

* repeat layout in data free

* resolve conflicts

* sample

* resolve comments
2022-02-04 11:46:54 +03:00
Victor Kuznetsov
ed6bb8ab2d Update models folder for TimeTests (#10107)
* add performance hint to time infer

* upd time models
2022-02-04 11:33:15 +03:00
Ilya Lavrenov
70ca4b6e40 Fix template plugin tests (#10124)
* Fix template plugin tests

* Fix template plugin tests
2022-02-04 11:25:46 +03:00
Ilya Churaev
7b5a4e8c5e Remove WA from ImportNetwork (#10111) 2022-02-04 07:16:57 +03:00
Taylor Yeonbok Lee
54678f47cf [GPU] Adjust preferred format of resample operation (#9919)
* Adjust preferred format of resample operation

* Applied review comment

* Do not fix resample layout when there is a permute user, unless the permute order is rotating
2022-02-04 09:57:57 +09:00
Vladimir Dudnik
f9b88c385c upd OMZ submodule. first part public models with layout as MO param (#10108) 2022-02-04 02:57:06 +03:00
Edward Shogulin
e8b88b9021 [LPT] foldFakeQuantize extending to support empty shapes (#10116) 2022-02-03 23:01:27 +03:00
Roman Kazantsev
64aabc74d1 Check the selected frontend to correspond use_new/legacy_frontend options (#10084)
* Check the selected frontend to correspond use_new/legacy_frontend options

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix a default case when no frontend is found

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-02-03 20:34:07 +03:00
Ilya Lavrenov
f2f281e60b Renamed ov_runtime => openvino, ov_ => openvino_ prefix (#10069)
* Renamed ov_runtime => openvino, ov_ => openvino_ prefix

* Coverage fix

* More fixes

* Fixed MO tests with custom FE
2022-02-03 20:03:41 +03:00
Anastasia Popova
86faa25724 Fix of output tensor names for mask-rcnn* models (#10042)
* Added op names to tensor names for MaskRCNN replacement transformation. Fixed output layout for MaskRCNN.

* Applied comments left from PR with tensor names fix.

* Added tests for remove_tensor_names().

* Added checks in emitter.

* Removed debug output.

* Small fix.

* Small fix.
2022-02-03 19:44:47 +03:00
Evgeny Kotov
d30365f3d5 fix (#9868) 2022-02-03 19:14:57 +03:00
Anton Pankratov
8993c4c18a Deprecated ov::Any implicit cast to any types (#9409)
* Deprecated Any implicit cast

* Fixed test

* fixed gna build

* Fixed warnings in benchmark_app

* Fixed test build

* ncc exception for PrintTo

* Error message in test

* Error message in test

* fixed build
2022-02-03 19:10:52 +03:00
Krzysztof Bruniecki
6677079821 Set proper precision for added output (#9496) 2022-02-03 18:34:55 +03:00
Anton Pankratov
5c9b6915dc Added undefined performance hint value (#10082)
* Added undefined performance hint value

* Added tests

* Fixed tests

* fixed format
2022-02-03 18:03:45 +03:00
Ilya Lavrenov
168bfe58c4 Fix NCC (#10105) 2022-02-03 16:51:26 +03:00
Ilya Lavrenov
3c35cf73c2 Build only static libraries on Linux Azure (#10062) 2022-02-03 16:26:21 +03:00
Anastasia Popova
ca45bf430a Fixed tensor names set in InputCut and AutomlEfficientDet transformation. (#9998)
* Fixed tensor name setting in InputCut, fixed tensor name loss in AutomlEfficientDet.

* Changed op name adding to tensor names in InputCut for output port case only.
2022-02-03 15:55:16 +03:00
Artyom Anokhov
f57be8fdd8 configs: Updated path to licensing (#10102) 2022-02-03 15:24:40 +03:00
Anton Dudchenko
711d6de33b [VPU] Fix precisions for execGraph (#9767)
ExecGraph didn't contain the parameter node and precisions
65013
2022-02-03 13:20:59 +03:00
Sergey Shlyapnikov
ccf4f4e420 [GPU] Update config api 2.0 (#9649) 2022-02-03 13:04:36 +03:00
Nikolay Shchegolev
b34cb55081 [CPU] Gather JIT implementation + Gather8 support. (#10083) 2022-02-03 12:32:23 +03:00
Ilya Churaev
0b75589e27 Fix cc build (#10073)
* Try to fix cc build

* Fixed build
2022-02-03 11:43:51 +03:00
Wilson Seok
3d9da2901e Template slt bug fix/mish partial dynamic (#9976)
* Remove fp16 of Convert layer test from skip_tests.config.cpp as it works now

* update repo

* fix demension dynamic support bug in mish op reference test
2022-02-03 11:32:39 +03:00
Wilson Seok
8d27103f06 Add slt in template plugin/rnn sequence (#9526)
* Remove fp16 of Convert layer test from skip_tests.config.cpp as it works now

* update repo

* add initial op reference test of rnn_sequence

* add op reference test of GRUSequence

* replace input and refOut data to hard coded value

* update copyright year and namespace of Tensor

* rename S_t to sequence_lengths
2022-02-03 11:32:08 +03:00
Jan Iwaszkiewicz
db334efbbd Fix vector casting for Constants with float16 type (#10088) 2022-02-03 09:15:28 +01:00
Vladislav Golubev
38ed0de9cf Test enabled (#9341) 2022-02-03 10:58:03 +03:00
Liubov Talamanova
b4206fe0a1 Supported Simplified mode without provided config (#10049)
* Support Simplified mode without provided config

* Change data-source default location
2022-02-03 10:56:25 +03:00
Eugeny Volosenkov
e7d8284e4d fix pot (#9980) 2022-02-03 10:47:31 +03:00
Maxim Gordeev
cf69c97765 Added new correct gna frequency result for Alder Lake (#10047)
* Added new correct gna frequency result for Alder Lake

* Update samples/cpp/speech_sample/utils.hpp

Co-authored-by: Krzysztof Bruniecki <krzysztof.bruniecki@intel.com>

Co-authored-by: Krzysztof Bruniecki <krzysztof.bruniecki@intel.com>
2022-02-03 10:38:25 +03:00
Ilya Churaev
03c38ca3fd Changed code which checks newAPI flag from Core (#10080)
* Changed code which checks newAPI flag from Core

* Fixed typo
2022-02-03 10:36:23 +03:00
Fedor Zharinov
9219242dbd Benchmark_app: JSON writer for statistics (#9887)
* Refactored statistics output with JSON support

* Detailed/average reports are added

* stylefix

* Update samples/cpp/benchmark_app/statistics_report.hpp

Co-authored-by: Ivan Vikhrev <ivan.vikhrev@intel.com>

* Linux Fixes

* stylefixes

* data_shape field format is changed

* stylefix

Co-authored-by: Ivan Vikhrev <ivan.vikhrev@intel.com>
2022-02-03 01:47:46 +03:00
Alina Kladieva
552454a3f0 Revert "[CPU] Gather jit implementation. (#6601)" (#10077)
This reverts commit fbe8aa94a4.
2022-02-02 20:12:24 +03:00
Ilya Churaev
5406839e3f Removed layouts config (#10067) 2022-02-02 15:56:26 +03:00
Nikolay Shchegolev
fbe8aa94a4 [CPU] Gather jit implementation. (#6601) 2022-02-02 15:02:49 +03:00
Nikolay Tyukaev
7a88daa8f7 enable doc html artifact (#10065) 2022-02-02 14:43:20 +03:00
Tomasz Dołbniak
8a05ef2514 Softmax tests fixed (#10051) 2022-02-02 12:28:39 +01:00
Tomasz Dołbniak
0700ba781b ONNX ConvInteger - handling of scalar zero points (#10057) 2022-02-02 12:16:08 +01:00
dependabot[bot]
53af687a0c Bump jinja2 (#9966)
Bumps [jinja2](https://github.com/pallets/jinja) from 2.11.2 to 2.11.3.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/2.11.2...2.11.3)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-02 14:09:45 +03:00
Nikita Malinin
04f5b233f2 [POT] Introduce saturation_fix option (#9940)
* Introduce saturation_fix option

* Pylint fix

* Update namings and pipeline

* Change node_input target
2022-02-02 13:46:20 +03:00
Andrey Somsikov
9dd4476c58 Reduce noise from security tests (#9774)
* Mute noisy undefined behavior checks

* Fix GCC build error with unsupported option

* Fix misprint
2022-02-02 12:48:21 +03:00
Andrey Somsikov
176bc2d83d Set -DENABLE_FASTER_BUILD=OFF for coverity (#10044) 2022-02-02 12:47:32 +03:00
Victor Kuznetsov
0dd8d895a0 [Time tests] Add API 2.0 support (#9878)
* add performance hint to time infer

* init commit - add api 2 support

* change imInfo filling

* change copyright dates

* check hw positions to default

* add debug info

* fix mistake

* add check layout funcs for api2 time infer

* reformat code (2 -> 4)

* upd with reshape api2

* upd with master

* --

* fix fillTensors - set as template

* fix common_utils.cpp after merge master
2022-02-02 12:33:02 +03:00
Anastasia Kuporosova
70f65bdb74 [Python API] Rename configuration API + update tests/tools (#9927)
* [Python API] Rename configuration API + update tests/tools

* keep old api for compatibility

* add deprecation warnings

* apply comments to query sample

* remove convert to pyobject

* use Any instead of string

* update tests

* update set_property

* fix sample

* update test + try-except for pot

* add docstrings

* fix codestyle for pot
2022-02-02 11:28:41 +03:00
Anton Grishin
336fc37b94 Define static variable (#10053) 2022-02-02 11:26:26 +03:00
Smirnov Grigorii
83b1a247ec move convert_broadcast3_test.cpp to op_conversions (#10043)
* move

* remove convolution_ie header
2022-02-02 10:59:11 +03:00
Mateusz Tabaka
bf908e9bdf Enable argmin/argmax test cases (#10056)
Ticket: 35473
2022-02-02 08:39:58 +03:00
Anton Pankratov
4cbcf4b4e3 Added get property additional arguments (#9993)
* Added get property additional arguments

* Fixed build

* Fixed error

* Added api wiht property and map

* Fixed gna build

* reverted available_devices
2022-02-01 23:56:52 +03:00
Tatiana Troilova
c715fde8f0 Update third party files (#9992)
* Update third party files

* Update third party files (OMZ added)
2022-02-01 21:06:06 +03:00
Nikolay Tyukaev
172cbe7340 DOCS: Fix js and add ipython (#9995)
* js and ipython

* add to suppress warnings

* fixes

* fixes

* fixes

* fixes
2022-02-01 20:39:17 +03:00
Evgenya Stepyreva
ff8c217e03 Not tracking fixed, tracking restored (#10040) 2022-02-01 19:58:29 +03:00
Maxim Andronov
ba736e2bcd [CPU] Fix dynamic RNNSeq with native order (#9932) 2022-02-01 18:52:57 +03:00
Vitaliy Urusovskij
89fe26e3db Copy RandomUniform m_state during clone_with_new_inputs() (#10031)
* Copy RandomUniform m_state during clone_with_new_inputs()

* Add `get_state()` for RandomUniform op

* Add copy.random_uniform test
2022-02-01 18:46:09 +03:00
Jacek Skowron
56759d9cdc [docs] update linux/win installation guide (#9720)
* CVS-71745 update linux installation guide

* lfs

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
2022-02-01 18:33:36 +03:00
Svetlana Dolinina
5e8f997262 Fix bug in AddReshapeTransposeAroundConvPool for Kaldi LSTM networks (#9885)
* change order of transformations to work correctly with Convolutions in Kaldi LSTM networks

* removed unneeded changes and add unit tests

* remove comment

* remove changes from memory_offset_adjustment, move all fixes inside add_reshape_transpose_around_conv_pool to avoid new bugs

* removed test for deleted changes

* replace -1 by None
2022-02-01 17:06:49 +03:00
Fedor Zharinov
c848e55f5e Benchmark_app: Command line args processing is modified to use both tensor and corresponding node names (#9968)
* Node/name conversions

* stylefix
2022-02-01 16:05:00 +03:00
Fedor Zharinov
6845392aa6 Benchmark_app: incorrect indexing during precision set is fixed (#10033)
* Precision problem fix. Behavior of auto precision conversion to U8 (in case of image) is changed

* stylefix
2022-02-01 15:58:48 +03:00
Liubov Talamanova
ca09ddd123 [POT] Implement DataFreeEngine (#9484)
* [POT] Implement DataFreeEngine

* Add CLI

* Updated CLI

* Moved logic to SynteticImageLoader

* Fix bug with draw modes

* Fix bug in DataFreeEngine

* Fix multiprocessing

* Fix pylint

* Add DataFreeEngine test

* Download models

* Fill background

* Fix test

* Fix args

* Support config option for DataFree mode

* Minor fixes

* Add data_free config

* Add more test cases

* Enable RCNN models quantization
2022-02-01 15:15:20 +03:00
Ekaterina Aidova
09f53b56e6 [OMZ]: update submodule (#10036) 2022-02-01 15:03:17 +03:00
Katarzyna Mitrus
52d53d187d Enable reshape sequence fusion transformation based target shape bounds (#9886)
* Calculate value bounds in ReshapeSequenceFusion

* Reashape fusion upper bounds check

* Revert last return to false

* Add transformation unit tests

* Use output node as check param

* Use evaluate helper and remove deprecation macro

* Header update

* Checks refactor and comments

* Update unit tests

* Get element type from node_out
2022-02-01 14:51:47 +03:00
Lidia Toropova
2ce7becc6b Moved memory tests to OV API 2.0 (#9924)
* Moved memory tests to OV API 2.0

* Added configs for OV api 2, updated configs for api 1

* Commented several models in configs (no such models on omz)

* Updated fillTensors

* Fix to get network inputs

* Updated fillTensors and configs
2022-02-01 14:36:05 +03:00
Yuan Hu
8892b7b327 add Debug statistic log for devices infer nums (#9825)
* add statics log

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* change LOG_DEBUG to LOG_INFO

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix type

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: fishbell <bell.song@intel.com>
2022-02-01 14:18:29 +03:00
Alina Kladieva
f25c450534 Exclude gpu registerPluginsXMLUnicodePath test due to 76197 (#10029) 2022-02-01 13:43:38 +03:00
Pavel Esir
9bb7697b2f [MO] fix simplified MO import for PyCharm Debug (#9866)
* fix simplified MO import for PyCharm Debug

* package_BOM update
2022-02-01 13:14:48 +03:00
Eugeny Volosenkov
e0af970d62 Fix yolov3 documentation (#9901) 2022-02-01 13:12:12 +03:00
Anton Pankratov
b8a4b0742b Streams executor configured using OV2.0 configuration API (#9587)
* Streams executor config OV2.0

* Fixed error

* Reverted CPU tests
2022-02-01 13:08:32 +03:00
Anton Pankratov
8ca6aeae83 New configuration API in set get property (#10012)
* New configuration API in set|get property

* removed supported metrics and keys

* Fixed build

* Fixed build

* Fixed samples build

* Fixed samples build

* Fixed build

* Removed old properties in plugin

* Fixed build
2022-02-01 13:05:14 +03:00
Maxim Andronov
6866ced978 [Transformations] ConvertBroadcast3 for boolean fix (#10001) 2022-02-01 12:53:05 +03:00
Alexey Suhov
e1e467f23f [CMake] Add debug postfix on mac (#10027) 2022-02-01 12:41:26 +03:00
Daria Mityagina
a3f2a4ef99 [VPU] - I64 issue with ONNX models - fix (#9978)
Either i32 or i64 is used for index_element_type, so it is more convenient to drop the condition and keep only the i32 option.
Tickets:
75748
75747
75029
2022-02-01 11:42:55 +03:00
Roman Kazantsev
298cced3b3 [MO, TF frontend] Correct loaders for StridedSlice and Pack operations (#10034)
* Correct Loaders for TensorFlow StridedSlice and Pack operations

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Suppress INFO and WARNING messages from TensorFlow

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-02-01 11:02:28 +03:00
Ilya Lavrenov
4717e7639c Param/Const => Result tests (#9294)
* Tests for param => result

* Added const => result, param => result tests

* Disabled tests on CPU

* Added more tests

* Enabled import / export for template

* clang-format

* Reverted scatter tests

* Rename back

* Fixed typo

* Fixed compilation for GNA

* Fixed comments

* Fixed collisions

* Revert renaming back

* Added skip filters for GNA / MYRIAD
2022-02-01 11:01:12 +03:00
Anton Grishin
75abee2500 [GNA] Refactor and install libGNAConfig.cmake (#9793)
* Install libGNAconfig.cmake

* Refactor gnaConfig to correctly find from OV package

* remove ENABLE_INTEL_GNA option from CI

* Apply comments and fix CI

* re-trigger CI (demos issue)

* Enable GNA/samples smoke tests

* rename GNA to GNA_EXT_DIR

* re-trigger CI (mxnet cpu test issue)

* Pick azhogov changes to check CI

* try win wa

* fix win build

* re-trigger onnx

* tests

* disable win samples tests

Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2022-02-01 10:51:07 +03:00
Pavel Finashov
ab3207a81b POT: Fixed command line to convert models for Windows platform. (#10024)
* For testing purpose

* Fixed command line for windows: removed re-writing PYTHOPATH

* Changed command line for re-writing PYTHONPATH
2022-02-01 10:31:16 +03:00
Vladislav Volkov
ff784ed6ab [CPU] I420toRGB and I420toBGR operations for CPU plugin (#9118) 2022-02-01 09:26:14 +03:00
Alexandra Sidorova
44362c97be [CPU] Fixed TensorIterator/Loop dynamism leftovers (#9722) 2022-02-01 09:17:03 +03:00
Edward Shogulin
cc19ff74f1 [LPT] [GPU] Multiply to group convolution (#9971)
* [LPT] MultiplyToGroupConvolution optimization for GPU

* [LPT] MatMul in FP32 in GPU workarround support

* [LPT] GPU plugin tests
2022-02-01 08:10:27 +03:00
Ilya Lavrenov
8c7e0d9479 Update include of legacy in tests (#10030) 2022-02-01 06:43:39 +03:00
Mikhail Ryzhov
bcdf7b0cad [GNA] Fixed output convert transformation (#9808)
* Fixed output convert transformation

* Update src/plugins/intel_gna/transformations/remove_converts.cpp

* Reverted wa
2022-01-31 20:08:24 +00:00
Pavel Esir
73e9eb4c61 [MO] add reinterp_shape for StridedSlice (#9622)
* added reinterp_shape for StridedSlice

* package_BOM update

* corrected unit-tests

* returned removed tests
2022-01-31 22:17:15 +03:00
1640 changed files with 53301 additions and 26497 deletions

View File

@@ -1,3 +1,12 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
resources:
repositories:
- repository: openvino_contrib

View File

@@ -23,9 +23,9 @@ jobs:
- job: Lin
strategy:
matrix:
Dynamic:
CMAKE_BUILD_SHARED_LIBS: 'ON'
PYTHON_STATIC_ARGS:
# Dynamic:
# CMAKE_BUILD_SHARED_LIBS: 'ON'
# PYTHON_STATIC_ARGS:
Static:
CMAKE_BUILD_SHARED_LIBS: 'OFF'
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
@@ -147,7 +147,6 @@ jobs:
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_PYTHON=ON
-DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS)
-DENABLE_INTEL_GNA=$(CMAKE_BUILD_SHARED_LIBS)
-DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS)
-DPYTHON_EXECUTABLE=/usr/bin/python3.8
-DENABLE_WHEEL=ON
@@ -237,8 +236,16 @@ jobs:
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph $(PYTHON_STATIC_ARGS) --junitxml=TEST-Pyngraph.xml --ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_utils/test_utils.py --ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py --ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_backend.py
displayName: 'nGraph Python Bindings Tests'
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph $(PYTHON_STATIC_ARGS) --junitxml=TEST-Pyngraph.xml --ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py --ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_backend.py
displayName: 'nGraph and IE Python Bindings Tests'
continueOnError: false
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest -s $(INSTALL_TEST_DIR)/pyopenvino $(PYTHON_STATIC_ARGS) --junitxml=TEST-Pyngraph.xml --ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_utils/test_utils.py --ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_zoo_models.py --ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_backend.py
displayName: 'Python API 2.0 Tests'
continueOnError: false
- script: |
@@ -246,7 +253,6 @@ jobs:
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest -s $(INSTALL_DIR)/tests/mo/unit_tests --junitxml=TEST-ModelOptimizer.xml
displayName: 'Model Optimizer UT'
continueOnError: false
enabled: true
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml
workingDirectory: $(INSTALL_TEST_DIR)
@@ -277,7 +283,6 @@ jobs:
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/gnaUnitTests --gtest_output=xml:TEST-gnaUnitTests.xml
displayName: 'GNA UT'
continueOnError: false
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'ON')
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/vpuUnitTests --gtest_output=xml:TEST-vpuUnitTests.xml
displayName: 'VPU UT'
@@ -338,16 +343,6 @@ jobs:
workingDirectory: $(INSTALL_DIR)/samples_bin
displayName: 'Samples Smoke Tests'
continueOnError: false
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'ON')
enabled: true
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
cd $(REPO_DIR)/src/bindings/python/tests_compatibility/test_inference_engine
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest --junitxml=TEST-PythonAPI.xml $(PYTHON_STATIC_ARGS)
displayName: 'Python API Tests'
continueOnError: false
- script: |
. $(SETUPVARS)
@@ -358,7 +353,6 @@ jobs:
workingDirectory: $(LAYER_TESTS_DIR)
displayName: 'Layer Tests'
continueOnError: false
enabled: true
- task: PublishTestResults@2
condition: always()

View File

@@ -1,3 +1,12 @@
trigger:
branches:
include:
- master
- releases/*
paths:
exclude:
- docs/*
resources:
repositories:
- repository: openvino_contrib

View File

@@ -79,11 +79,12 @@ jobs:
- task: CMake@1
inputs:
# Coverity has too many PARSE_ERROR errors with ENABLE_FASTER_BUILD=ON. Disabling FASTER_BUILD.
cmakeArgs: >
-GNinja
-DVERBOSE_BUILD=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_FASTER_BUILD=ON
-DENABLE_FASTER_BUILD=OFF
-DENABLE_STRICT_DEPENDENCIES=OFF
-DENABLE_REQUIREMENTS_INSTALL=OFF
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
@@ -112,11 +113,6 @@ jobs:
workingDirectory: $(BUILD_DIR)
displayName: 'Pack cov-int folder for submission'
- publish: $(BUILD_DIR)/openvino.tgz
artifact: openvino.tgz
continueOnError: true
displayName: 'Publish submission'
- script: |
curl --form token=$(COVERITY_TOKEN) \
--form email=$(COVERITY_USER) \


@@ -69,9 +69,9 @@ jobs:
- script: >
env -C ~/work
./buildreleasenolto.sh
libinference_engine_preproc.so
ov_intel_cpu_plugin
ov_intel_gpu_plugin
libopenvino_gapi_preproc.so
openvino_intel_cpu_plugin
openvino_intel_gpu_plugin
clDNN_unit_tests64
gpuFuncTests
displayName: Build Lin


@@ -40,6 +40,8 @@ jobs:
INSTALL_DIR: $(WORK_DIR)/install_pkg
INSTALL_TEST_DIR: $(INSTALL_DIR)/tests
SETUPVARS: $(INSTALL_DIR)/setupvars.sh
TMP_DIR: /tmp
CCACHE_DIR: $(WORK_DIR)/ccache/mac
steps:
- script: |
@@ -87,6 +89,7 @@ jobs:
python3 -m pip install -r $(REPO_DIR)/src/core/tests/requirements_test_onnx.txt
# Speed up build
brew install ninja
brew install ccache
# Speed up tests
git clone https://github.com/google/gtest-parallel.git
workingDirectory: $(WORK_DIR)
@@ -96,17 +99,36 @@ jobs:
export PATH="/usr/local/opt/cython/bin:$PATH"
export CC=gcc
export CXX=g++
cmake -GNinja -DVERBOSE_BUILD=ON -DENABLE_REQUIREMENTS_INSTALL=OFF -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules $(REPO_DIR)
cmake -GNinja -DVERBOSE_BUILD=ON -DENABLE_REQUIREMENTS_INSTALL=OFF -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=OFF -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
- script: ls -alR $(REPO_DIR)/temp/
displayName: 'List temp SDKs'
- script: ninja
- task: Cache@2
inputs:
key: 'ccache | "$(Agent.OS)"'
path: $(CCACHE_DIR)
restoreKeys: |
ccache | "$(Agent.OS)"
displayName: Cache
- script: ccache --zero-stats --max-size=10G --show-config
displayName: 'Clean ccache stats'
- script: |
export CCACHE_DIR=$(CCACHE_DIR)
export CCACHE_TEMPDIR=$(TMP_DIR)/ccache
export CCACHE_BASEDIR=$(Pipeline.Workspace)
export CCACHE_MAXSIZE=10G
ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build Mac'
- script: ccache --show-stats
displayName: 'Show ccache stats'
- script: ls -alR $(REPO_DIR)/bin/
displayName: 'List bin files'
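The ccache wiring added in this hunk (compiler launchers on the CMake line, a `Cache@2` pipeline task, and `CCACHE_*` environment variables) can be sketched locally; the cache directory and size below are illustrative values, not ones the pipeline mandates:

```shell
# Local sketch of the ccache setup introduced above.
# Assumes ccache is installed; paths are illustrative.
export CCACHE_DIR="$PWD/.ccache"   # mirrors CCACHE_DIR: $(WORK_DIR)/ccache/mac
export CCACHE_MAXSIZE=10G          # mirrors the pipeline's --max-size=10G
# Route both compilers through ccache the same way the CMake line does:
CMAKE_LAUNCHER_FLAGS="-DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache"
echo "$CMAKE_LAUNCHER_FLAGS"
```

Passing these flags to `cmake -GNinja ...` makes every subsequent `ninja` run consult the cache; `ccache --show-stats` (as in the pipeline step) then reports the hit rate after a rebuild.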
@@ -132,34 +154,42 @@ jobs:
workingDirectory: $(INSTALL_TEST_DIR)
displayName: 'OV Core UT'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/InferenceEngineUnitTests --gtest_print_time=1 --gtest_filter=-MKLDNNGraphStructureTests.TestNoRedundantReordersBeforeDWConvolution:TestConvolution/MKLDNNGraphConvolutionTests.TestsConvolution/0:TestConvolutionDefaultPrimitivesPriority/MKLDNNGraphConvolutionTests.TestsConvolution/0 --gtest_output=xml:TEST-InferenceEngineUnitTests.xml
displayName: 'IE UT old'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ieUnitTests --gtest_output=xml:TEST-ieUnitTests.xml
displayName: 'IE UT'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/cpuUnitTests --gtest_output=xml:TEST-cpuUnitTests.xml
displayName: 'CPU UT'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/vpuUnitTests --gtest_output=xml:TEST-vpuUnitTests.xml
displayName: 'VPU UT'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/onnxImporterUnitTests --gtest_output=xml:TEST-onnxImporterUnitTests.xml
displayName: 'ONNX Importer UT'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ieMultiPluginUnitTests --gtest_output=xml:TEST-ieMultiPluginUnitTests.xml
displayName: 'MULTI UT'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ieFuncTests --gtest_output=xml:TEST-ieFuncTests.xml
displayName: 'IE FuncTests'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/cpuFuncTests --gtest_filter=*smoke*:-smoke_LPT/ReduceMinTransformation.CompareWithRefImpl/f32_Shape* --gtest_print_time=1 --gtest_output=xml:TEST-cpuFuncTests.xml
displayName: 'CPU FuncTests'
@@ -172,6 +202,7 @@ jobs:
. $(SETUPVARS) && $(INSTALL_TEST_DIR)/InferenceEngineCAPITests --gtest_output=xml:TEST-InferenceEngineCAPITests.xml
displayName: 'IE CAPITests'
continueOnError: false
enabled: false
- task: PublishTestResults@2
condition: always()


@@ -30,7 +30,7 @@ jobs:
maxParallel: 2
# About 150% of total time
timeoutInMinutes: 120
timeoutInMinutes: 150
pool:
name: WIN_VMSS_VENV_D8S_WU2
@@ -133,7 +133,7 @@ jobs:
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_INTEL_GNA=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_INTEL_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_GAPI_PREPROCESSING=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_GAPI_PREPROCESSING=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
@@ -198,8 +198,8 @@ jobs:
python -m pytest $(INSTALL_DIR)\tests\smoke_tests\ --env_conf $(INSTALL_DIR)\tests\smoke_tests\env_config.yml -s --junitxml=TEST-SamplesSmokeTests.xml
workingDirectory: $(INSTALL_DIR)
displayName: 'Samples Smoke Tests'
continueOnError: false
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'ON')
continueOnError: false
- script: rd /Q /S $(BUILD_DIR)
displayName: 'Clean build dir'
@@ -218,10 +218,10 @@ jobs:
displayName: 'Tensorflow Frontend UT'
continueOnError: false
- script: |
set PATH=$(IB_DIR);%PATH%
call $(SETUPVARS) && "$(IB_TESTCONSOLE)" $(INSTALL_TEST_DIR)\InferenceEngineUnitTests.exe --gtest_output=xml:TEST-InferenceEngineUnitTests-IB.xml
displayName: 'IE UT old - IB'
# set PATH=$(IB_DIR);%PATH%
# call $(SETUPVARS) && "$(IB_TESTCONSOLE)" $(INSTALL_TEST_DIR)\InferenceEngineUnitTests.exe --gtest_output=xml:TEST-InferenceEngineUnitTests-IB.xml
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\InferenceEngineUnitTests --gtest_output=xml:TEST-InferenceEngineUnitTests.xml
displayName: 'IE UT old'
continueOnError: false
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ieUnitTests --gtest_output=xml:TEST-ieUnitTests.xml
@@ -235,7 +235,6 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\gnaUnitTests --gtest_output=xml:TEST-gnaUnitTests.xml
displayName: 'GNA UT'
continueOnError: false
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'ON')
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\vpuUnitTests --gtest_output=xml:TEST-vpuUnitTests.xml
displayName: 'VPU UT'
@@ -257,11 +256,10 @@ jobs:
displayName: 'TEMPLATE FuncTests'
continueOnError: false
# call $(SETUPVARS) && $(INSTALL_TEST_DIR)\cpuFuncTests.exe --gtest_filter=*smoke* --gtest_output=xml:TEST-cpuFuncTests.xml
- script: |
set PATH=$(IB_DIR);%PATH%
call $(SETUPVARS) && "$(IB_TESTCONSOLE)" $(INSTALL_TEST_DIR)\cpuFuncTests.exe --gtest_filter=*smoke*:-*CompareWithRefs/base_size=16_pre_nms_topn=100_post_nms_topn=100_nms_thresh=0.7_feat_stride=1_min_size=1_ratio*:*smoke_GRUSequenceCommonZeroClip/GRUSequenceTest.CompareWithRefs/mode=CONVERT_TO_TI_MAX_SEQ_LEN_CONST_seq_lengths* --gtest_output=xml:TEST-cpuFuncTests-IB.xml /testlevel=24
displayName: 'CPU FuncTests - IB'
# set PATH=$(IB_DIR);%PATH%
# call $(SETUPVARS) && "$(IB_TESTCONSOLE)" $(INSTALL_TEST_DIR)\cpuFuncTests.exe --gtest_filter=*smoke*:-*CompareWithRefs/base_size=16_pre_nms_topn=100_post_nms_topn=100_nms_thresh=0.7_feat_stride=1_min_size=1_ratio*:*smoke_GRUSequenceCommonZeroClip/GRUSequenceTest.CompareWithRefs/mode=CONVERT_TO_TI_MAX_SEQ_LEN_CONST_seq_lengths* --gtest_output=xml:TEST-cpuFuncTests-IB.xml /testlevel=24
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\cpuFuncTests --gtest_filter=*smoke* --gtest_output=xml:TEST-cpuFuncTests.xml
displayName: 'CPU FuncTests'
continueOnError: false
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF')


@@ -90,7 +90,7 @@ jobs:
path: build/docs/sphinx.log
- name: 'Upload html'
if: github.event_name == 'push'
if: always()
uses: actions/upload-artifact@v2
with:
name: openvino_html


@@ -82,6 +82,7 @@ jobs:
- name: Install Clang dependency
run: |
sudo apt update
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11
sudo apt --assume-yes install libclang-12-dev
- name: Install Python-based dependencies


@@ -6,30 +6,29 @@
[![PyPI Downloads](https://pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
This toolkit allows developers to deploy pre-trained deep learning models
through a high-level C++ Inference Engine API integrated with application logic.
through a high-level OpenVINO™ Runtime C++ and Python APIs integrated with application logic.
This open source version includes several components: namely [Model Optimizer], [nGraph] and
[Inference Engine], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
This open source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from the [Open Model Zoo], along with 100+ open
source and public models in popular formats such as Caffe\*, TensorFlow\*,
MXNet\* and ONNX\*.
source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, Kaldi.
## Repository components:
* [Inference Engine]
* [nGraph]
## Repository components
* [OpenVINO™ Runtime]
* [Model Optimizer]
* [Post-Training Optimization Tool]
## License
Deep Learning Deployment Toolkit is licensed under [Apache License Version 2.0](LICENSE).
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.
## Resources:
## Resources
* Docs: https://docs.openvino.ai/
* Wiki: https://github.com/openvinotoolkit/openvino/wiki
* Issue tracking: https://github.com/openvinotoolkit/openvino/issues
* Storage: https://storage.openvinotoolkit.org/
* Additional OpenVINO™ modules: https://github.com/openvinotoolkit/openvino_contrib
* Additional OpenVINO™ toolkit modules: https://github.com/openvinotoolkit/openvino_contrib
* [Intel® Distribution of OpenVINO™ toolkit Product Page](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
* [Intel® Distribution of OpenVINO™ toolkit Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
@@ -44,8 +43,8 @@ Please report questions, issues and suggestions using:
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[Inference Engine]:https://software.intel.com/en-us/articles/OpenVINO-InferEngine
[Model Optimizer]:https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
[nGraph]:https://docs.openvino.ai/latest/openvino_docs_nGraph_DG_DevGuide.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html
[Model Optimizer]:https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_README.html
[tag on StackOverflow]:https://stackoverflow.com/search?q=%23openvino


@@ -23,14 +23,14 @@ ie_coverage_extract(INPUT "openvino" OUTPUT "legacy"
ie_coverage_genhtml(INFO_FILE "legacy"
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
ie_coverage_extract(INPUT "openvino" OUTPUT "ov_hetero_plugin"
ie_coverage_extract(INPUT "openvino" OUTPUT "hetero_plugin"
PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/plugins/hetero/*")
ie_coverage_genhtml(INFO_FILE "ov_hetero_plugin"
ie_coverage_genhtml(INFO_FILE "hetero_plugin"
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
ie_coverage_extract(INPUT "openvino" OUTPUT "ov_auto_plugin"
ie_coverage_extract(INPUT "openvino" OUTPUT "auto_plugin"
PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/plugins/auto/*")
ie_coverage_genhtml(INFO_FILE "ov_auto_plugin"
ie_coverage_genhtml(INFO_FILE "auto_plugin"
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
ie_coverage_extract(INPUT "openvino" OUTPUT "preprocessing"
@@ -73,9 +73,9 @@ if (ENABLE_INTEL_GPU)
endif()
if(ENABLE_INTEL_GNA)
ie_coverage_extract(INPUT "openvino" OUTPUT "ov_intel_gna_plugin"
ie_coverage_extract(INPUT "openvino" OUTPUT "intel_gna_plugin"
PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/plugins/intel_gna/*")
ie_coverage_genhtml(INFO_FILE "ov_intel_gna_plugin"
ie_coverage_genhtml(INFO_FILE "intel_gna_plugin"
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
endif()


@@ -269,7 +269,7 @@ include(${OpenVINO_SOURCE_DIR}/src/cmake/ie_parallel.cmake)
if(ENABLE_INTEL_GNA)
reset_deps_cache(
GNA
GNA_EXT_DIR
GNA_PLATFORM_DIR
GNA_KERNEL_LIB_NAME
GNA_LIBS_LIST
@@ -286,12 +286,26 @@ if(ENABLE_INTEL_GNA)
LIST(APPEND FILES_TO_EXTRACT_LIST gna_${GNA_VERSION}/linux)
endif()
RESOLVE_DEPENDENCY(GNA
RESOLVE_DEPENDENCY(GNA_EXT_DIR
ARCHIVE_UNIFIED "GNA/GNA_${GNA_VERSION}.zip"
TARGET_PATH "${TEMP}/gna_${GNA_VERSION}"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+.[0-9]+).*"
FILES_TO_EXTRACT FILES_TO_EXTRACT_LIST
SHA256 ${GNA_HASH})
update_deps_cache(GNA "${GNA}" "Path to GNA root folder")
debug_message(STATUS "gna=" ${GNA})
update_deps_cache(GNA_EXT_DIR "${GNA_EXT_DIR}" "Path to GNA root folder")
debug_message(STATUS "gna=" ${GNA_EXT_DIR})
if (WIN32)
set(GNA_PLATFORM_DIR win64 CACHE STRING "" FORCE)
elseif (UNIX)
set(GNA_PLATFORM_DIR linux CACHE STRING "" FORCE)
else ()
message(FATAL_ERROR "GNA not supported on this platform, only linux, and windows")
endif ()
set(GNA_LIB_DIR x64 CACHE STRING "" FORCE)
set(GNA_PATH ${GNA_EXT_DIR}/${GNA_PLATFORM_DIR}/${GNA_LIB_DIR} CACHE STRING "" FORCE)
if(NOT BUILD_SHARED_LIBS)
list(APPEND PATH_VARS "GNA_PATH")
endif()
endif()
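The directory layout those new cache variables encode can be traced with a small shell sketch; the extracted-archive path and version below are hypothetical (the real one is `${TEMP}/gna_${GNA_VERSION}`):

```shell
# Reconstructs the GNA_PATH value composed by the CMake above.
# GNA_EXT_DIR is a hypothetical stand-in for the resolved dependency dir.
GNA_EXT_DIR="/tmp/gna_03.00.00.1377"
GNA_PLATFORM_DIR="linux"   # "win64" on Windows, per the branch above
GNA_LIB_DIR="x64"
GNA_PATH="$GNA_EXT_DIR/$GNA_PLATFORM_DIR/$GNA_LIB_DIR"
echo "$GNA_PATH"
```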


@@ -129,7 +129,7 @@ set(IE_DEBUG_POSTFIX_WIN "d")
set(IE_RELEASE_POSTFIX_WIN "")
set(IE_DEBUG_POSTFIX_LIN "")
set(IE_RELEASE_POSTFIX_LIN "")
set(IE_DEBUG_POSTFIX_MAC "")
set(IE_DEBUG_POSTFIX_MAC "d")
set(IE_RELEASE_POSTFIX_MAC "")
if(WIN32)


@@ -28,9 +28,26 @@ if (ENABLE_UB_SANITIZER)
if (WIN32)
message(FATAL_ERROR "UndefinedBehavior sanitizer is not supported in Windows")
endif()
# TODO: Remove -fno-sanitize=null as thirdparty/ocl/clhpp_headers UBSAN compatibility resolved:
# https://github.com/KhronosGroup/OpenCL-CLHPP/issues/17
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=undefined -fno-sanitize=null")
# Mute -fsanitize=function Indirect call of a function through a function pointer of the wrong type.
# Sample cases:
# call to function GetAPIVersion through pointer to incorrect function type 'void *(*)()'
# Mute -fsanitize=alignment Use of a misaligned pointer or creation of a misaligned reference. Also sanitizes assume_aligned-like attributes.
# Sample cases:
# VPU_FixedMaxHeapTest.DefaultConstructor test case load of misaligned address 0x62000000187f for type 'const DataType', which requires 4 byte alignment
# Mute -fsanitize=bool Load of a bool value which is neither true nor false.
# Samples cases:
# ie_c_api_version.apiVersion test case load of value 32, which is not a valid value for type 'bool'
# Mute -fsanitize=enum Load of a value of an enumerated type which is not in the range of representable values for that enumerated type.
# Samples cases:
# load of value 4294967295, which is not a valid value for type 'const (anonymous namespace)::onnx::Field'
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=undefined -fno-sanitize=null -fno-sanitize=alignment -fno-sanitize=bool -fno-sanitize=enum")
if(OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fno-sanitize=function")
endif()
if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
# TODO: Remove -Wno-maybe-uninitialized after CVS-61143 fix
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -Wno-maybe-uninitialized")
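The flag composition above can be reproduced in shell to see the final sanitizer option string; the boolean below is a stand-in for the `OV_COMPILER_IS_CLANG` CMake variable:

```shell
# Assembles the UBSan flag string the CMake above builds up.
FLAGS="-fsanitize=undefined -fno-sanitize=null -fno-sanitize=alignment -fno-sanitize=bool -fno-sanitize=enum"
COMPILER_IS_CLANG=1   # hypothetical stand-in for OV_COMPILER_IS_CLANG
if [ "$COMPILER_IS_CLANG" = "1" ]; then
  # -fsanitize=function is a clang-only check, hence the guarded suppression
  FLAGS="$FLAGS -fno-sanitize=function"
fi
echo "$FLAGS"
```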


@@ -3,7 +3,7 @@
#
set(FRONTEND_INSTALL_INCLUDE "runtime/include/")
set(FRONTEND_NAME_PREFIX "ov_")
set(FRONTEND_NAME_PREFIX "openvino_")
set(FRONTEND_NAME_SUFFIX "_frontend")
set(FRONTEND_NAMES "" CACHE INTERNAL "")
@@ -35,7 +35,7 @@ function(ov_generate_frontends_hpp)
endif()
# add frontends to libraries including ov_frontends.hpp
ov_target_link_frontends(ov_runtime)
ov_target_link_frontends(openvino)
set(ov_frontends_hpp "${CMAKE_BINARY_DIR}/src/frontends/common/src/ov_frontends.hpp")
set(frontends_hpp_in "${IEDevScripts_DIR}/frontends/ov_frontends.hpp.in")


@@ -2,7 +2,7 @@
CppMethod: '^(operator\W+|[a-z_\d]+|signaling_NaN|quiet_NaN)$'
ClassName: '^([A-Z][\w]+|b?float16|numeric_limits|ngraph_error|stopwatch|unsupported_op)$'
StructName: '^([A-Z][\w]+|element_type_traits|hash|oi_pair)$'
FunctionName: '^(operator\W+|[a-z_\d]+)$'
FunctionName: '^(operator\W+|[a-z_\d]+)|PrintTo$'
Namespace: '^([a-z\d_]+|InferenceEngine)$'
NamespaceAlias: '^([a-z\d_]+|InferenceEngine)$'
UnionName: '[A-Z][\w]+$'
@@ -99,7 +99,7 @@ CxxCatchStatement: '^.*$'
CxxTryStatement: '^.*$'
CxxForRangeStatement: '^.*$'
MsAsmStatement: 'XXXX'
NullStatement: 'XXXX'
NullStatement: '^.*$'
DeclarationStatement: '^.*$'
TranslationUnit: 'XXXX'
UnexposedAttribute: '^.*$'
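The widened `FunctionName` pattern above (now admitting gtest's `PrintTo`) can be spot-checked with grep; note this is only an approximation of the naming checker's engine, and GNU grep's `\W` extension is assumed:

```shell
# Spot-check of the updated FunctionName naming rule from the diff above.
PATTERN='^(operator\W+|[a-z_\d]+)|PrintTo$'
echo "PrintTo"     | grep -Eq "$PATTERN" && echo "PrintTo accepted"
echo "my_function" | grep -Eq "$PATTERN" && echo "my_function accepted"
```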


@@ -102,32 +102,33 @@ function(ie_add_plugin)
endif()
add_dependencies(ie_plugins ${IE_PLUGIN_NAME})
if(TARGET inference_engine_preproc)
if(TARGET openvino_gapi_preproc)
if(BUILD_SHARED_LIBS)
add_dependencies(${IE_PLUGIN_NAME} inference_engine_preproc)
add_dependencies(${IE_PLUGIN_NAME} openvino_gapi_preproc)
else()
target_link_libraries(${IE_PLUGIN_NAME} PRIVATE inference_engine_preproc)
target_link_libraries(${IE_PLUGIN_NAME} PRIVATE openvino_gapi_preproc)
endif()
endif()
# fake dependencies to build in the following order:
# IE -> IE readers -> IE inference plugins -> IE-based apps
if(BUILD_SHARED_LIBS)
if(TARGET ov_ir_frontend)
add_dependencies(${IE_PLUGIN_NAME} ov_ir_frontend)
if(TARGET openvino_ir_frontend)
add_dependencies(${IE_PLUGIN_NAME} openvino_ir_frontend)
endif()
if(TARGET openvino_onnx_frontend)
add_dependencies(${IE_PLUGIN_NAME} openvino_onnx_frontend)
endif()
if(TARGET openvino_paddle_frontend)
add_dependencies(${IE_PLUGIN_NAME} openvino_paddle_frontend)
endif()
if(TARGET openvino_tensorflow_frontend)
add_dependencies(${IE_PLUGIN_NAME} openvino_tensorflow_frontend)
endif()
# TODO: remove with legacy CNNNLayer API / IR v7
if(TARGET inference_engine_ir_v7_reader)
add_dependencies(${IE_PLUGIN_NAME} inference_engine_ir_v7_reader)
endif()
if(TARGET ov_onnx_frontend)
add_dependencies(${IE_PLUGIN_NAME} ov_onnx_frontend)
endif()
if(TARGET ov_paddle_frontend)
add_dependencies(${IE_PLUGIN_NAME} ov_paddle_frontend)
endif()
if(TARGET ov_tensorflow_frontend)
add_dependencies(${IE_PLUGIN_NAME} ov_tensorflow_frontend)
endif()
endif()
# install rules
@@ -319,7 +320,7 @@ function(ie_generate_plugins_hpp)
endforeach()
# add plugins to libraries including ie_plugins.hpp
ie_target_link_plugins(ov_runtime)
ie_target_link_plugins(openvino)
if(TARGET inference_engine_s)
ie_target_link_plugins(inference_engine_s)
endif()


@@ -82,8 +82,8 @@ function(register_extra_modules)
endif()
endforeach()
if ("${NS}" STREQUAL "openvino")
file(APPEND "${devconfig_file}" "add_library(${NS}::runtime ALIAS ov_runtime)\n")
file(APPEND "${devconfig_file}" "add_library(${NS}::runtime::dev ALIAS ov_runtime_dev)\n")
file(APPEND "${devconfig_file}" "add_library(${NS}::runtime ALIAS openvino)\n")
file(APPEND "${devconfig_file}" "add_library(${NS}::runtime::dev ALIAS openvino_dev)\n")
endif()
endfunction()


@@ -168,7 +168,19 @@ endif()
_ov_find_dependency(Threads)
if(NOT TARGET ov_runtime)
set(ENABLE_INTEL_GNA "@ENABLE_INTEL_GNA@")
set(ENABLE_INTEL_GNA_SHARED "@BUILD_SHARED_LIBS@")
if(ENABLE_INTEL_GNA AND NOT ENABLE_INTEL_GNA_SHARED AND NOT libGNA_FOUND)
set_and_check(GNA_PATH "@PACKAGE_GNA_PATH@")
_ov_find_dependency(libGNA
COMPONENTS KERNEL
CONFIG
PATHS ${CMAKE_CURRENT_LIST_DIR}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
endif()
if(NOT TARGET openvino)
set(_ov_as_external_package ON)
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
@@ -224,6 +236,7 @@ if(_need_package_name_reset)
unset(_need_package_name_reset)
endif()
unset(${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND)


@@ -26,11 +26,16 @@
#
# Frontends:
#
# ngraph_ov_onnx_frontend_FOUND - True if the system has ov_onnx_frontend library
# ngraph::ov_onnx_frontend - ONNX FrontEnd target (optional)
# ngraph_onnx_frontend_FOUND - True if the system has ngraph::onnx_frontend library
# ngraph::onnx_frontend - ONNX FrontEnd target (optional)
#
# ngraph_paddle_frontend_FOUND - True if the system has Paddle frontend
# ngraph::ov_paddle_frontend - nGraph Paddle frontend (optional)
# ngraph_paddle_frontend_FOUND - True if the system has Paddle frontend
# ngraph::paddle_frontend - nGraph Paddle frontend (optional)
#
# ngraph_ir_frontend_FOUND - True if the system has OpenVINO IR frontend
#
# ngraph_tensorflow_frontend_FOUND - True if the system has TensorFlow frontend
# ngraph::tensorflow_frontend - nGraph TensorFlow frontend (optional)
#
@PACKAGE_INIT@
@@ -50,43 +55,46 @@ if(TARGET openvino::runtime AND NOT TARGET ngraph::ngraph)
INTERFACE_LINK_LIBRARIES openvino::runtime)
endif()
if(TARGET openvino::frontend::onnx AND NOT TARGET ngraph::ov_onnx_frontend)
add_library(ngraph::ov_onnx_frontend INTERFACE IMPORTED)
set_target_properties(ngraph::ov_onnx_frontend PROPERTIES
if(TARGET openvino::frontend::onnx AND NOT TARGET ngraph::onnx_frontend)
add_library(ngraph::onnx_frontend INTERFACE IMPORTED)
set_target_properties(ngraph::onnx_frontend PROPERTIES
INTERFACE_LINK_LIBRARIES openvino::frontend::onnx)
endif()
if(TARGET openvino::frontend::paddle AND NOT TARGET ngraph::ov_paddle_frontend)
add_library(ngraph::ov_paddle_frontend INTERFACE IMPORTED)
set_target_properties(ngraph::ov_paddle_frontend PROPERTIES
if(TARGET openvino::frontend::paddle AND NOT TARGET ngraph::paddle_frontend)
add_library(ngraph::paddle_frontend INTERFACE IMPORTED)
set_target_properties(ngraph::paddle_frontend PROPERTIES
INTERFACE_LINK_LIBRARIES openvino::frontend::paddle)
endif()
if(TARGET openvino::frontend::tensorflow AND NOT TARGET ngraph::ov_tensorflow_frontend)
add_library(ngraph::ov_tensorflow_frontend INTERFACE IMPORTED)
set_target_properties(ngraph::ov_tensorflow_frontend PROPERTIES
if(TARGET openvino::frontend::tensorflow AND NOT TARGET ngraph::tensorflow_frontend)
add_library(ngraph::tensorflow_frontend INTERFACE IMPORTED)
set_target_properties(ngraph::tensorflow_frontend PROPERTIES
INTERFACE_LINK_LIBRARIES openvino::frontend::tensorflow)
endif()
set(ngraph_ngraph_FOUND ON)
set(NGRAPH_LIBRARIES ngraph::ngraph)
set(ngraph_ov_onnx_frontend_FOUND ${OpenVINO_Frontend_ONNX_FOUND})
set(ngraph_onnx_frontend_FOUND ${OpenVINO_Frontend_ONNX_FOUND})
set(ngraph_tensorflow_frontend_FOUND ${OpenVINO_Frontend_TensorFlow_FOUND})
set(ngraph_paddle_frontend_FOUND ${OpenVINO_Frontend_Paddle_FOUND})
set(ngraph_onnx_importer_FOUND ${OpenVINO_Frontend_ONNX_FOUND})
if(ngraph_onnx_importer_FOUND)
set(ONNX_IMPORTER_LIBRARIES ngraph::ov_onnx_frontend)
set(ONNX_IMPORTER_LIBRARIES ngraph::onnx_frontend)
# ngraph::onnx_importer target and variables are deprecated
# but need to create a dummy target for BW compatibility
if(NOT TARGET ngraph::onnx_importer)
add_library(ngraph::onnx_importer INTERFACE IMPORTED)
set_target_properties(ngraph::onnx_importer PROPERTIES
INTERFACE_LINK_LIBRARIES ngraph::ov_onnx_frontend)
INTERFACE_LINK_LIBRARIES ngraph::onnx_frontend)
endif()
endif()
set(ngraph_paddle_frontend_FOUND ${OpenVINO_Frontend_Paddle_FOUND})
set(ngraph_tensorflow_frontend_FOUND ${OpenVINO_Frontend_TensorFlow_FOUND})
set(ngraph_onnx_frontend_FOUND ${OpenVINO_Frontend_ONNX_FOUND})
set(ngraph_ir_frontend_FOUND ${OpenVINO_Frontend_IR_FOUND})
check_required_components(ngraph)


@@ -24,7 +24,7 @@ if(NOT ENABLE_DOCKER)
set(all_docs_targets
ie_docs_snippets ov_template_func_tests
template_extension ov_template_extension ov_template_plugin)
template_extension openvino_template_extension openvino_template_plugin)
foreach(target_name IN LISTS all_docs_targets)
if(TARGET ${target_name})
set_target_properties(${target_name} PROPERTIES FOLDER docs)
@@ -36,7 +36,7 @@ if(NOT ENABLE_DOCKER)
# install
foreach(target ov_template_plugin template_extension ov_template_extension)
foreach(target openvino_template_plugin template_extension openvino_template_extension)
if(TARGET ${target})
install(TARGETS ${target}
LIBRARY DESTINATION ${IE_CPACK_RUNTIME_PATH}
@@ -51,7 +51,6 @@ set(ENABLE_OPENVINO_NOTEBOOKS OFF CACHE BOOL "Build with openvino notebooks")
set(OMZ_DOCS_DIR "" CACHE PATH "Path to open_model_zoo documentation dir.")
set(WORKBENCH_DOCS_DIR "" CACHE PATH "Path to workbench documentation dir.")
set(OVMS_DOCS_DIR "" CACHE PATH "Path to model server documentation dir.")
set(GST_DOCS_DIR "" CACHE PATH "Path to gst-video-analytics documentation dir.")
set(GRAPH_CSV_DIR "" CACHE PATH "Path to the folder containing csv data for rendering graphs.")
function(build_docs)
@@ -89,6 +88,8 @@ function(build_docs)
# Sphinx folders, doxyrest templates and config
set(SPHINX_CONF_IN "${DOCS_SOURCE_DIR}/conf.py")
set(SPHINX_TEMPLATES_IN "${DOCS_SOURCE_DIR}/_templates")
set(SPHINX_TEMPLATES_OUT "${RST_OUTPUT}/_templates")
set(SPHINX_CONF_OUT "${RST_OUTPUT}/conf.py")
set(SPHINX_STATIC_IN "${DOCS_SOURCE_DIR}/_static")
set(SPHINX_STATIC_OUT "${RST_OUTPUT}/_static")
@@ -132,6 +133,16 @@ function(build_docs)
)
endif()
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy ${API_DOCS_IN}/api_reference.rst ${API_DOCS_OUT}/api_reference.rst
)
if(ENABLE_PYTHON)
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy_directory ${API_DOCS_IN}/ie_python_api ${API_DOCS_OUT}/ie_python_api
)
endif()
# omz doc files
if(EXISTS "${OMZ_DOCS_DIR}")
get_filename_component(OMZ_DOCS_DIR "${OMZ_DOCS_DIR}" ABSOLUTE)
@@ -160,14 +171,6 @@ function(build_docs)
--output_dir=${DOCS_BUILD_DIR}/ovms)
endif()
# gst doc files
if(EXISTS "${GST_DOCS_DIR}")
get_filename_component(GST_DOCS_DIR "${GST_DOCS_DIR}" ABSOLUTE)
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${GST_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/gst)
endif()
add_custom_target(preprocess_docs
COMMENT "Preprocess documentation"
VERBATIM)
@@ -197,7 +200,7 @@ function(build_docs)
COMMAND ${PYTHON_EXECUTABLE} ${COPY_IMAGES_SCRIPT} ${XML_OUTPUT} ${RST_OUTPUT}
COMMAND ${PYTHON_EXECUTABLE} ${DOXYGEN_MAPPING_SCRIPT} ${XML_OUTPUT} ${DOCS_BUILD_DIR} ${OpenVINO_SOURCE_DIR}/../
COMMAND ${CMAKE_COMMAND} -E copy ${SPHINX_INDEX_IN} ${SPHINX_INDEX_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${API_DOCS_IN} ${API_DOCS_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${SPHINX_TEMPLATES_IN} ${SPHINX_TEMPLATES_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${DOXYREST_IN} ${DOXYREST_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${DOXYREST_SPHINX_IN} ${DOXYREST_SPHINX_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${SPHINX_STATIC_IN} ${SPHINX_STATIC_OUT}


@@ -31,15 +31,15 @@ There are three steps to support inference of a model with custom operation(s):
1. Add support for a custom operation in the [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) so
the Model Optimizer can generate the IR with the operation.
2. Create an operation set and implement a custom nGraph operation in it as described in the
[Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md).
3. Implement a customer operation in one of the [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
[Custom nGraph Operation](../OV_Runtime_UG/Extensibility_DG/AddingNGraphOps.md).
3. Implement a customer operation in one of the [Inference Engine](../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md)
plugins to support inference of this operation using a particular target hardware (CPU, GPU or VPU).
To see the operations that are supported by each device plugin for the Inference Engine, refer to the
[Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md).
[Supported Devices](../OV_Runtime_UG/supported_plugins/Supported_Devices.md).
> **NOTE**: If a device doesn't support a particular operation, an alternative to creating a new operation is to target
> an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be
> an additional device using the HETERO plugin. The [Heterogeneous Plugin](../OV_Runtime_UG/supported_plugins/HETERO.md) may be
> used to run an inference model on multiple devices allowing the unsupported operations on one device to "fallback" to
> run on another device (e.g., CPU) that does support those operations.
@@ -61,20 +61,20 @@ operation. Refer to the "Operation Extractor" section of
## Custom Operations Extensions for the Inference Engine
Inference Engine provides an extension mechanism to support new operations. This mechanism is described in [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
Inference Engine provides an extension mechanism to support new operations. This mechanism is described in [Inference Engine Extensibility Mechanism](../OV_Runtime_UG/Extensibility_DG/Intro.md).
Each device plugin includes a library of optimized implementations to execute known operations which must be extended to execute a custom operation. The custom operation extension is implemented according to the target device:
- Custom Operation CPU Extension
- A compiled shared library (`.so` or `.dll`) needed by the CPU Plugin for executing the custom operation
on a CPU. Refer to the [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more
on a CPU. Refer to the [How to Implement Custom CPU Operations](../OV_Runtime_UG/Extensibility_DG/CPU_Kernel.md) for more
details.
- Custom Operation GPU Extension
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the GPU along with an operation description file (.xml) needed by the GPU Plugin for the custom operation kernel. Refer to the [How to Implement Custom GPU Operations](../IE_DG/Extensibility_DG/GPU_Kernel.md) for more details.
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the GPU along with an operation description file (.xml) needed by the GPU Plugin for the custom operation kernel. Refer to the [How to Implement Custom GPU Operations](../OV_Runtime_UG/Extensibility_DG/GPU_Kernel.md) for more details.
- Custom Operation VPU Extension
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the VPU along with an operation description file (.xml) needed by the VPU Plugin for the custom operation kernel. Refer to [How to Implement Custom Operations for VPU](../IE_DG/Extensibility_DG/VPU_Kernel.md) for more details.
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the VPU along with an operation description file (.xml) needed by the VPU Plugin for the custom operation kernel. Refer to [How to Implement Custom Operations for VPU](../OV_Runtime_UG/Extensibility_DG/VPU_Kernel.md) for more details.
Also, it is necessary to implement an nGraph custom operation according to [Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) so the Inference Engine can read an IR with this
Also, it is necessary to implement an nGraph custom operation according to [Custom nGraph Operation](../OV_Runtime_UG/Extensibility_DG/AddingNGraphOps.md) so the Inference Engine can read an IR with this
operation and correctly infer output tensor shape and type.
## Enabling Magnetic Resonance Image Reconstruction Model
@@ -125,7 +125,7 @@ Firstly, open the model in TensorBoard or other TensorFlow* model visualization
batch dimension because the value for the batch dimension is not hardcoded in the model. Model Optimizer needs to set all
dynamic dimensions to specific values to create the IR, therefore specify the command line parameter `-b 1` to set
the batch dimension equal to 1. The actual batch size dimension can be changed at runtime using the Inference Engine API
described in the [Using Shape Inference](../IE_DG/ShapeInference.md). Also refer to the General Conversion Parameters section in [Converting a Model to Intermediate Representation (IR)](../MO_DG/prepare_model/convert_model/Converting_Model.md) and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
described in the [Using Shape Inference](../OV_Runtime_UG/ShapeInference.md). Also refer to the General Conversion Parameters section in [Converting a Model to Intermediate Representation (IR)](../MO_DG/prepare_model/convert_model/Converting_Model.md) and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
for more details and command line parameters used for the model conversion.
```sh
@@ -263,7 +263,7 @@ The sub-graph corresponding to the originally non-supported one is depicted in t
### Inference Engine Extension Implementation
Now it is necessary to implement the extension for the CPU plugin with operation "FFT" introduced previously. The code
below is based on the template extension described in [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
below is based on the template extension described in [Inference Engine Extensibility Mechanism](../OV_Runtime_UG/Extensibility_DG/Intro.md).
#### CMake Build File
The first step is to create a CMake configuration file which builds the extension. The content of the "CMakeLists.txt"
@@ -284,7 +284,7 @@ in the `fft_op.cpp` file with the following content:
@snippet template_extension/old/fft_op.cpp fft_op:implementation
Refer to the [Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) for more details.
Refer to the [Custom nGraph Operation](../OV_Runtime_UG/Extensibility_DG/AddingNGraphOps.md) for more details.
#### CPU FFT Kernel Implementation
The operation implementation for CPU plugin uses OpenCV to perform the FFT. The header file "fft_kernel.hpp" has the
@@ -296,11 +296,11 @@ The "fft_kernel.cpp" with the implementation of the CPU has the following conten
@snippet template_extension/old/fft_kernel.cpp fft_kernel:implementation
Refer to the [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more details.
Refer to the [How to Implement Custom CPU Operations](../OV_Runtime_UG/Extensibility_DG/CPU_Kernel.md) for more details.
#### Extension Library Implementation
The last step is to create an extension library "extension.cpp" and "extension.hpp" which will include the FFT
operation for the CPU plugin. The code of the library is described in the [Extension Library](../IE_DG/Extensibility_DG/Extension.md).
operation for the CPU plugin. The code of the library is described in the [Extension Library](../OV_Runtime_UG/Extensibility_DG/Extension.md).
### Building and Running the Custom Extension
To build the extension, run the following:<br>
@@ -335,8 +335,8 @@ python3 mri_reconstruction_demo.py \
- OpenVINO™ toolkit online documentation: [https://docs.openvino.ai](https://docs.openvino.ai)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
- [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
- [Inference Engine Extensibility Mechanism](../OV_Runtime_UG/Extensibility_DG/Intro.md)
- [OpenVINO™ Toolkit Samples Overview](../OV_Runtime_UG/Samples_Overview.md)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2d147adf801535e95d8b627a8a1d23f7b89dea1eabe06218235e756b0a9866fe
size 1636

View File

@@ -21,7 +21,7 @@ Once the commands above are executed, the Inference Engine Developer Package is
* `IE::ngraph` - shared nGraph library
* `IE::inference_engine` - shared Inference Engine library
* `IE::inference_engine_transformations` - shared library with Inference Engine ngraph-based Transformations
* `IE::inference_engine_preproc` - shared library with Inference Engine preprocessing plugin
* `IE::openvino_gapi_preproc` - shared library with Inference Engine preprocessing plugin
* `IE::inference_engine_plugin_api` - interface library with Inference Engine Plugin API headers
* `IE::inference_engine_lp_transformations` - shared library with low-precision transformations
* `IE::pugixml` - static Pugixml library

View File

@@ -2,10 +2,13 @@
@sphinxdirective
.. _deep learning model optimizer:
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_MO_DG_IR_and_opsets
openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
openvino_docs_MO_DG_Additional_Optimization_Use_Cases
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
@@ -19,7 +22,7 @@
Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
The Model Optimizer process assumes you have a network model trained using a supported deep learning framework (Caffe*, TensorFlow*, Kaldi*, MXNet*) or converted to the ONNX* format. Model Optimizer produces an Intermediate Representation (IR) of the network, which can be inferred with the [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).
The Model Optimizer process assumes you have a network model trained using a supported deep learning framework (Caffe*, TensorFlow*, Kaldi*, MXNet*) or converted to the ONNX* format. Model Optimizer produces an Intermediate Representation (IR) of the network, which can be inferred with the [Inference Engine](../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md).
> **NOTE**: Model Optimizer does not infer models. Model Optimizer is an offline tool that runs before the inference takes place.

View File

@@ -9,7 +9,7 @@ Model Optimizer performs preprocessing to a model. It is possible to optimize th
If, for example, your network assumes RGB inputs, the Model Optimizer can swap the channels in the first convolution using the `--reverse_input_channels` command line option, so you do not need to convert your inputs to RGB every time you get a BGR image, for example, from OpenCV*.
- **Larger batch size**<br>
Notice that devices like GPU perform better with a larger batch size, while it is also possible to set the batch size at runtime using the Inference Engine [ShapeInference feature](../../IE_DG/ShapeInference.md).
Notice that devices like GPU perform better with a larger batch size, while it is also possible to set the batch size at runtime using the Inference Engine [ShapeInference feature](../../OV_Runtime_UG/ShapeInference.md).
- **Resulting IR precision**<br>
The resulting IR precision, for instance, `FP16` or `FP32`, directly affects performance. As CPU now supports `FP16` (while internally upscaling to `FP32` anyway) and because this is the best precision for a GPU target, you may want to always convert models to `FP16`. Notice that this is the only precision that Intel&reg; Movidius&trade; Myriad&trade; 2 and Intel&reg; Myriad&trade; X VPUs support.
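A hedged sketch combining the two options above (the model file name is a placeholder, not taken from this guide):

```sh
# Hypothetical Model Optimizer invocation: swap BGR->RGB inside the first
# convolution and emit an FP16 IR. "my_model.pb" is a placeholder path.
mo --input_model my_model.pb --reverse_input_channels --data_type FP16
```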

View File

@@ -18,7 +18,7 @@ You need to build your performance conclusions on reproducible data. Do the perf
- If the warm-up run does not help or execution time still varies, you can try running a large number of iterations and then average the results.
- For time values that vary widely, use the geometric mean (geomean).
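For instance, the geometric mean of per-iteration latencies can be computed with the Python standard library (a generic sketch, not part of any sample):

```python
import statistics

# Example latencies in milliseconds from repeated inference runs.
latencies_ms = [10.2, 11.0, 10.8, 35.5, 10.5]  # one outlier iteration

mean = statistics.fmean(latencies_ms)
geomean = statistics.geometric_mean(latencies_ms)

# The geometric mean is less sensitive to the single outlier.
print(f"mean={mean:.2f} ms, geomean={geomean:.2f} ms")
```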
Refer to the [Inference Engine Samples](../../IE_DG/Samples_Overview.md) for code examples for the performance measurements. Almost every sample, except interactive demos, has a `-ni` option to specify the number of iterations.
Refer to the [Inference Engine Samples](../../OV_Runtime_UG/Samples_Overview.md) for code examples for the performance measurements. Almost every sample, except interactive demos, has a `-ni` option to specify the number of iterations.
## Getting performance numbers using OpenVINO tool
@@ -39,7 +39,7 @@ to execute on the CPU instead.
For example, for the CPU throughput mode from the previous section, you can play with the number of streams (`-nstreams` command-line param).
Try different values of the `-nstreams` argument from `1` to the number of CPU cores and find one that provides the best performance. For example, on an 8-core CPU, compare `-nstreams 1` (which is a latency-oriented scenario) to the `2`, `4` and `8` streams. Notice that `benchmark_app` automatically queries/creates/runs the number of requests required to saturate the given number of streams.
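Such a sweep could be sketched as follows (an illustrative example only; the model path is a placeholder):

```sh
# Hypothetical sweep over stream counts on an 8-core CPU; compare the
# throughput reported by each run. "model.xml" is a placeholder IR path.
for n in 1 2 4 8; do
    benchmark_app -m model.xml -d CPU -nstreams "$n"
done
```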
Finally, notice that when you don't specify the number of streams with `-nstreams`, the "AUTO" value for the streams is used, e.g. for the CPU this is [CPU_THROUGHPUT_AUTO](../../IE_DG/supported_plugins/CPU.md). You can spot the actual value behind "AUTO" for your machine in the application output.
Finally, notice that when you don't specify the number of streams with `-nstreams`, the "AUTO" value for the streams is used, e.g. for the CPU this is [CPU_THROUGHPUT_AUTO](../../OV_Runtime_UG/supported_plugins/CPU.md). You can spot the actual value behind "AUTO" for your machine in the application output.
Notice that the "AUTO" number is not necessarily optimal, so it is generally recommended to experiment either with the benchmark_app's `-nstreams` as described above, or via the [new Workbench tool](@ref workbench_docs_Workbench_DG_Introduction). Using streams also allows you to simplify the application logic, as you don't need to combine multiple inputs into a batch to achieve good CPU performance.
Instead, it is possible to keep a separate infer request per camera or another source of input and process the requests in parallel using Async API.
@@ -47,7 +47,7 @@ Instead, it is possible to keep a separate infer request per camera or another s
When comparing the Inference Engine performance with the framework or another reference code, make sure that both versions are as similar as possible:
- Wrap exactly the inference execution (refer to the [Inference Engine Samples](../../IE_DG/Samples_Overview.md) for examples).
- Wrap exactly the inference execution (refer to the [Inference Engine Samples](../../OV_Runtime_UG/Samples_Overview.md) for examples).
- Do not include model loading time.
- Ensure the inputs are identical for the Inference Engine and the framework. For example, Caffe\* allows auto-populating the input with random values. Notice that it might give different performance than on real images.
- Similarly, for correct performance comparison, make sure the access pattern, for example, input layouts, is optimal for Inference Engine (currently, it is NCHW).
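The measurement rules above can be sketched with a generic timing harness (plain Python, with `infer()` standing in for any inference call; nothing here is OpenVINO-specific):

```python
import statistics
import time

def measure(infer, n_warmup=10, n_iters=100):
    """Time only the inference call: warm up first, then report the median."""
    for _ in range(n_warmup):          # warm-up runs are not measured
        infer()
    samples = []
    for _ in range(n_iters):
        start = time.perf_counter()
        infer()                        # wrap exactly the inference execution
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)  # median is robust to stray outliers

# Usage with a stand-in workload; model loading would stay outside measure().
latency_s = measure(lambda: sum(i * i for i in range(10_000)))
print(f"median latency: {latency_s * 1e3:.3f} ms")
```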
@@ -64,7 +64,7 @@ Alternatively, you can gather the raw profiling data that samples report, the se
### Internal Inference Performance Counters <a name="performance-counters"></a>
Almost every sample (inspect command-line options for a specific sample with `-h`) supports a `-pc` command that outputs internal execution breakdown. Refer to the [samples code](../../IE_DG/Samples_Overview.md) for the actual Inference Engine API behind that.
Almost every sample (inspect command-line options for a specific sample with `-h`) supports a `-pc` command that outputs internal execution breakdown. Refer to the [samples code](../../OV_Runtime_UG/Samples_Overview.md) for the actual Inference Engine API behind that.
Below is an example of CPU plugin output for a network (since the device is CPU, the layers wall clock `realTime` and the `cpu` time are the same):

View File

@@ -214,7 +214,7 @@ One of the layers in the specified topology might not have inputs or values. Ple
#### 24. What does the message "Part of the nodes was not translated to IE. Stopped" mean? <a name="question-24"></a>
Some of the layers are not supported by the Inference Engine and cannot be translated to an Intermediate Representation. You can extend the Model Optimizer by allowing generation of new types of layers and implement these layers in the dedicated Inference Engine plugins. For more information, refer to the [Custom Layers Guide](../../HOWTO/Custom_Layers_Guide.md) and [Inference Engine Extensibility Mechanism](../../IE_DG/Extensibility_DG/Intro.md)
Some of the layers are not supported by the Inference Engine and cannot be translated to an Intermediate Representation. You can extend the Model Optimizer by allowing generation of new types of layers and implement these layers in the dedicated Inference Engine plugins. For more information, refer to the [Custom Layers Guide](../../HOWTO/Custom_Layers_Guide.md) and [Inference Engine Extensibility Mechanism](../../OV_Runtime_UG/Extensibility_DG/Intro.md)
#### 25. What does the message "While creating an edge from .. to .. : node name is undefined in the graph. Check correctness of the input model" mean? <a name="question-25"></a>
@@ -638,4 +638,4 @@ Starting from the 2022.1 version, the default IR conversion path for ONNX models
Certain features, such as `--extensions` and `--transformations_config`, are not yet fully supported on the new frontends.
For `--extensions`, the new frontends support only paths to shared libraries (.dll and .so). For `--transformations_config`, they support JSON configurations with defined library fields.
Inputs freezing (enabled by `--freeze_placeholder_with_value` or `--input` arguments) is not supported on the new frontends.
The IR conversion falls back to the old path if a user does not explicitly select a conversion path (by the `--use_new_frontend` or `--use_legacy_frontend` MO arguments) and an unsupported pre-defined scenario is detected on the new frontend path.
The IR conversion falls back to the old path if a user does not explicitly select a conversion path (by the `--use_new_frontend` or `--use_legacy_frontend` MO arguments) and an unsupported pre-defined scenario is detected on the new frontend path.

View File

@@ -1,11 +1,17 @@
# Converting a Caffe* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe}
@sphinxdirective
.. _convert model caffe:
@endsphinxdirective
A summary of the steps for optimizing and deploying a model that was trained with Caffe\*:
1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for Caffe\*.
2. [Convert a Caffe\* Model](#Convert_From_Caffe) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../IE_DG/Samples_Overview.md)
4. [Integrate](../../../IE_DG/Samples_Overview.md) the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../OV_Runtime_UG/Samples_Overview.md)
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment
## Supported Topologies

View File

@@ -2,6 +2,8 @@
@sphinxdirective
.. _convert model kaldi:
.. toctree::
:maxdepth: 1
:hidden:
@@ -14,8 +16,8 @@ A summary of the steps for optimizing and deploying a model that was trained wit
1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for Kaldi\*.
2. [Convert a Kaldi\* Model](#Convert_From_Kaldi) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../IE_DG/Samples_Overview.md).
4. [Integrate](../../../IE_DG/Samples_Overview.md) the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment.
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment.
> **NOTE**: The Model Optimizer supports the [nnet1](http://kaldi-asr.org/doc/dnn1.html) and [nnet2](http://kaldi-asr.org/doc/dnn2.html) formats of Kaldi models. Support of the [nnet3](http://kaldi-asr.org/doc/dnn3.html) format is limited.

View File

@@ -2,6 +2,8 @@
@sphinxdirective
.. _convert model mxnet:
.. toctree::
:maxdepth: 1
:hidden:
@@ -15,8 +17,8 @@ A summary of the steps for optimizing and deploying a model that was trained wit
1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for MXNet* (MXNet was used to train your model)
2. [Convert a MXNet model](#ConvertMxNet) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../IE_DG/Samples_Overview.md)
4. [Integrate](../../../IE_DG/Samples_Overview.md) the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../OV_Runtime_UG/Samples_Overview.md)
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment
## Supported Topologies

View File

@@ -2,6 +2,8 @@
@sphinxdirective
.. _convert model onnx:
.. toctree::
:maxdepth: 1
:hidden:

View File

@@ -4,8 +4,8 @@ A summary of the steps for optimizing and deploying a model trained with Paddle\
1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for Paddle\*.
2. [Convert a Paddle\* Model](#Convert_From_Paddle) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases.
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../IE_DG/Samples_Overview.md).
4. [Integrate](../../../IE_DG/Samples_Overview.md) the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment.
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided Inference Engine [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
4. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in your application to deploy the model in the target environment.
## Supported Topologies

View File

@@ -48,8 +48,8 @@ PyTorch* framework is supported through export to ONNX\* format. A summary of th
1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for ONNX\*.
2. [Export PyTorch model to ONNX\*](#export-to-onnx).
3. [Convert an ONNX\* model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided [sample applications](../../../IE_DG/Samples_Overview.md).
5. [Integrate](../../../IE_DG/Samples_Overview.md) the Inference Engine in your application to deploy the model in the target environment.
4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
5. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the Inference Engine in your application to deploy the model in the target environment.
## Export PyTorch\* Model to ONNX\* Format <a name="export-to-onnx"></a>

View File

@@ -2,6 +2,8 @@
@sphinxdirective
.. _convert model tf:
.. toctree::
:maxdepth: 1
:hidden:
@@ -29,8 +31,8 @@ A summary of the steps for optimizing and deploying a model that was trained wit
1. [Configure the Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md) for TensorFlow\* (TensorFlow was used to train your model).
2. [Freeze the TensorFlow model](#freeze-the-tensorflow-model) if your model is not already frozen, or skip this step and use the [instruction](#loading-nonfrozen-models) to convert a non-frozen model.
3. [Convert a TensorFlow\* model](#Convert_From_TF) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided [sample applications](../../../IE_DG/Samples_Overview.md).
5. [Integrate](../../../IE_DG/Samples_Overview.md) the Inference Engine in your application to deploy the model in the target environment.
4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../OV_Runtime_UG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via provided [sample applications](../../../OV_Runtime_UG/Samples_Overview.md).
5. [Integrate](../../../OV_Runtime_UG/Samples_Overview.md) the Inference Engine in your application to deploy the model in the target environment.
## Supported Topologies

View File

@@ -194,6 +194,8 @@ Framework-agnostic parameters:
--transformations_config TRANSFORMATIONS_CONFIG
Use the configuration file with transformations
description.
--use_new_frontend Force the usage of new frontend API for model processing.
--use_legacy_frontend Force the usage of legacy API for model processing.
```
The sections below provide details on using particular parameters and examples of CLI commands.

View File

@@ -3,7 +3,7 @@
## Introduction
Inference Engine CPU and GPU plugins can infer models in low precision.
For details, refer to [Low Precision Inference on the CPU](../../../IE_DG/Int8Inference.md).
For details, refer to [Low Precision Inference on the CPU](../../../OV_Runtime_UG/Int8Inference.md).
Intermediate Representation (IR) should be specifically formed to be suitable for low precision inference.
Such an IR is called a Low Precision IR and you can generate it in two ways:

View File

@@ -104,5 +104,5 @@ mo --input_model rnnt_prediction.onnx --input "symbol[1 1],hidden_in_1[2 1 320],
mo --input_model rnnt_joint.onnx --input "0[1 1 1024],1[1 1 320]"
```
Please note that the hardcoded value for the sequence length (157) was taken from MLCommons, but conversion to IR preserves
network [reshapeability](../../../../IE_DG/ShapeInference.md), which means you can change input shapes manually to any value either during conversion or
network [reshapeability](../../../../OV_Runtime_UG/ShapeInference.md), which means you can change input shapes manually to any value either during conversion or
inference.

View File

@@ -116,4 +116,4 @@ Run the Model Optimizer with the following command line parameters to generate r
```
For other applicable parameters, refer to [Convert Model from TensorFlow](../Convert_Model_From_TensorFlow.md).
For more information about reshape abilities, refer to [Using Shape Inference](../../../../IE_DG/ShapeInference.md).
For more information about reshape abilities, refer to [Using Shape Inference](../../../../OV_Runtime_UG/ShapeInference.md).

View File

@@ -47,7 +47,7 @@ There are two specificities with the supported part of the model.
The first is that the model contains a sequence-length input, so the model is converted with
a fixed input length shape and thus is not reshapeable.
Refer to the [Using Shape Inference](../../../../IE_DG/ShapeInference.md).
Refer to the [Using Shape Inference](../../../../OV_Runtime_UG/ShapeInference.md).
The second is that the frozen model still has two variables: `previous_state_c` and `previous_state_h`; the figure
with the frozen `*.pb` model is shown below. This means that the model keeps updating these variables at each inference.

View File

@@ -2,7 +2,7 @@
> **NOTES**:
> * Starting with the 2022.1 release, the Model Optimizer can convert the TensorFlow\* Object Detection API Faster and Mask RCNNs topologies differently. By default, the Model Optimizer adds the operation "Proposal" to the generated IR. This operation needs an additional model input named "image_info", which should be fed with several values describing the pre-processing applied to the input image (refer to the [Proposal](../../../../ops/detection/Proposal_4.md) operation specification for more information). However, this input is redundant for models trained and inferred with equal-size images. Model Optimizer can generate IR for such models and insert the operation [DetectionOutput](../../../../ops/detection/DetectionOutput_1.md) instead of `Proposal`. The `DetectionOutput` operation does not require the additional model input "image_info"; moreover, for some models the produced inference results are closer to the original TensorFlow\* model. To trigger the new behavior, set the attribute "operation_to_add" in the corresponding JSON transformation configuration file to "DetectionOutput" instead of the default "Proposal".
> * Starting with the 2021.1 release, the Model Optimizer converts the TensorFlow\* Object Detection API SSDs, Faster and Mask RCNNs topologies keeping shape-calculating sub-graphs by default, so topologies can be re-shaped in the Inference Engine using the dedicated reshape API. Refer to [Using Shape Inference](../../../../IE_DG/ShapeInference.md) for more information on how to use this feature. It is possible to change both the spatial dimensions of the input image and the batch size.
> * Starting with the 2021.1 release, the Model Optimizer converts the TensorFlow\* Object Detection API SSDs, Faster and Mask RCNNs topologies keeping shape-calculating sub-graphs by default, so topologies can be re-shaped in the Inference Engine using the dedicated reshape API. Refer to [Using Shape Inference](../../../../OV_Runtime_UG/ShapeInference.md) for more information on how to use this feature. It is possible to change both the spatial dimensions of the input image and the batch size.
> * To generate IRs for TF 1 SSD topologies, the Model Optimizer creates a number of `PriorBoxClustered` operations instead of a constant node with prior boxes calculated for the particular input image size. This change allows you to reshape the topology in the Inference Engine using the dedicated Inference Engine API. The reshaping is supported for all SSD topologies except FPNs, which contain hardcoded shapes for some operations that prevent changing the topology input shape.
## How to Convert a Model
@@ -63,7 +63,7 @@ based on deep learning in various tasks, including Image Classifiacton, Visual O
Speech Recognition, Natural Language Processing and others. Refer to the links below for more details.
* [Inference Engine Samples](../../../../IE_DG/Samples_Overview.md)
* [Inference Engine Samples](../../../../OV_Runtime_UG/Samples_Overview.md)
* [Open Model Zoo Demos](@ref omz_demos)
## Important Notes About Feeding Input Images to the Samples

View File

@@ -65,13 +65,14 @@ cd tensorflow-yolo-v3
```sh
git checkout ed60b90
```
3. Download [coco.names](https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names) file from the DarkNet website **OR** use labels that fit your task.
3. Download [coco.names](https://github.com/AlexeyAB/darknet/blob/master/data/coco.names) file from the DarkNet website **OR** use labels that fit your task.
4. Download the [yolov3.weights](https://pjreddie.com/media/files/yolov3.weights) (for the YOLOv3 model) or [yolov3-tiny.weights](https://pjreddie.com/media/files/yolov3-tiny.weights) (for the YOLOv3-tiny model) file **OR** use your pre-trained weights with the same structure.
5. Install Pillow (the maintained fork of PIL), which is used by the conversion script in the repo:
```sh
pip install PIL
pip install pillow
```
6. Run a converter:
> **NOTE**: This converter works with TensorFlow 1.x and numpy 1.19 or lower.
- For YOLO-v3:
```sh
python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights
@@ -116,7 +117,7 @@ where:
- `custom_attributes` is a parameter that stores all the YOLOv3 specific attributes:
- `classes`, `coords`, `num`, and `masks` are attributes that you should copy from the configuration
file that was used for model training. If you used DarkNet officially shared weights,
you can use `yolov3.cfg` or `yolov3-tiny.cfg` configuration file from https://github.com/pjreddie/darknet/tree/master/cfg. Replace the default values in `custom_attributes` with the parameters that
you can use `yolov3.cfg` or `yolov3-tiny.cfg` configuration file from https://github.com/david8862/keras-YOLOv3-model-set/tree/master/cfg. Replace the default values in `custom_attributes` with the parameters that
follow the `[yolo]` titles in the configuration file.
- `anchors` is an optional parameter that is not used during inference of the model, but is used in a demo to parse the `Region` layer output
- `entry_points` is a list of node names used to cut off the model and append the Region layer with the custom attributes specified above.
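For reference, such a transformation config looks like the sketch below. This is a hedged example modeled on the `yolo_v3.json` file shipped with the Model Optimizer: the attribute values are the DarkNet defaults for the officially shared YOLOv3 weights, and the `entry_points` names assume the graph produced by the converter above; adjust all of them to match your own training configuration.

```json
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "classes": 80,
      "anchors": [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326],
      "coords": 4,
      "num": 9,
      "masks": [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
      "entry_points": ["detector/yolo-v3/Reshape", "detector/yolo-v3/Reshape_4", "detector/yolo-v3/Reshape_8"]
    }
  }
]
```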


@@ -157,7 +157,7 @@ the following (for the case when `axis` is not equal to 0 and 1):
4. Use the concatenated value as the second input to the `Reshape` operation.
It is highly recommended that you write shape-agnostic transformations to avoid model reshape-ability issues. Refer to
[Using Shape Inference](../../../IE_DG/ShapeInference.md) for more information related to the reshaping of a model.
[Using Shape Inference](../../../OV_Runtime_UG/ShapeInference.md) for more information related to the reshaping of a model.
More information on how to develop front phase transformations and dedicated API description is provided in the
[Front Phase Transformations](#front-phase-transformations).
@@ -171,7 +171,7 @@ defined as a mathematical expression using the [ShapeOf](../../../ops/shape/Shap
> **NOTE**: Model Optimizer does not fold sub-graphs starting from the [ShapeOf](../../../ops/shape/ShapeOf_3.md)
> operation by default because this leads to a model non-reshape-ability (the command line parameter `--static_shape`
> can override this behavior). Refer to [Using Shape Inference](../../../IE_DG/ShapeInference.md) for more information
> can override this behavior). Refer to [Using Shape Inference](../../../OV_Runtime_UG/ShapeInference.md) for more information
> related to reshaping of a model.
Model Optimizer calculates output shapes for all operations in a model to write them to Intermediate Representation
@@ -507,7 +507,7 @@ There are a number of common attributes used in the operations. Here is the list
Model Optimizer operations this attribute should be set to `None`. The model conversion fails if an operation with
`type` equal to `None` comes to the IR emitting phase. **Mandatory**.
* `version` — the operation set (opset) name the operation belongs to. If not specified, the Model Optimizer sets it
equal to `experimental`. Refer to [nGraph Basic Concepts](@ref openvino_docs_nGraph_DG_basic_concepts) for more
equal to `experimental`. Refer to [OpenVINO Model Representation](@ref openvino_docs_OV_Runtime_UG_Model_Representation) for more
information about operation sets. **Mandatory**.
* `op` — Model Optimizer type of the operation. In many cases, the value of `type` is equal to the value of `op`. But
when the Model Optimizer cannot instantiate the opset operation during model loading, it creates an instance of an internal
@@ -1259,6 +1259,6 @@ Refer to the `extensions/back/GatherNormalizer.py` for the example of a such typ
## See Also <a name="see-also"></a>
* [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../../IR_and_opsets.md)
* [Converting a Model to Intermediate Representation (IR)](../convert_model/Converting_Model.md)
* [nGraph Basic Concepts](../../../nGraph_DG/nGraph_basic_concepts.md)
* [Inference Engine Extensibility Mechanism](../../../IE_DG/Extensibility_DG/Intro.md)
* [OpenVINO Model Representation](../../../OV_Runtime_UG/model_representation.md)
* [Inference Engine Extensibility Mechanism](../../../OV_Runtime_UG/Extensibility_DG/Intro.md)
* [Extending the Model Optimizer with Caffe* Python Layers](Extending_Model_Optimizer_with_Caffe_Python_Layers.md)


@@ -1,13 +1,17 @@
# Inference Engine Developer Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide}
# OpenVINO™ Runtime User Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide}
@sphinxdirective
.. _deep learning inference engine:
.. toctree::
:maxdepth: 1
:hidden:
openvino_2_0_transition_guide
openvino_docs_IE_DG_Integrate_with_customer_application_new_API
openvino_docs_OV_Runtime_UG_Model_Representation
ngraph_transformation
openvino_docs_deployment_optimization_guide_dldt_optimization_guide
openvino_docs_IE_DG_Device_Plugins
Direct ONNX Format Support <openvino_docs_IE_DG_ONNX_Support>
@@ -35,8 +39,6 @@ The scheme below illustrates the typical workflow for deploying a trained deep l
![](img/BASIC_FLOW_IE_C.svg)
\\* _nGraph_ is the internal graph representation in the OpenVINO™ toolkit. Use it to [build a model from source code](https://docs.openvino.ai/latest/openvino_docs_nGraph_DG_build_function.html).
## Video


@@ -70,7 +70,7 @@ The example below demonstrates how to unregister an operator from the destructor
## Requirements for Building with CMake
A program that uses the `register_operator` functionality requires `openvino::core` and `openvino::frontend::onnx` libraries in addition to the OpenVINO Inference Runtime.
The `ov_onnx_frontend` is a component of the `OpenVINO` package, so `find_package(OpenVINO REQUIRED COMPONENTS ONNX)` can find both.
The `openvino::frontend::onnx` is a component of the `OpenVINO` package, so `find_package(OpenVINO REQUIRED COMPONENTS ONNX)` can find both.
Those libraries need to be passed to the `target_link_libraries` command in the CMakeLists.txt file.
See CMakeLists.txt below for reference:


@@ -34,15 +34,13 @@ Read the sections below to learn about each item.
```
2. **Include Inference Engine, nGraph and OpenCV libraries** in `project/CMakeLists.txt`
[OpenCV](https://docs.opencv.org/master/db/df5/tutorial_linux_gcc_cmake.html) integration is needed mostly for pre-processing input data and nGraph for more complex applications using [nGraph API](../nGraph_DG/nGraph_dg.md).
[OpenCV](https://docs.opencv.org/master/db/df5/tutorial_linux_gcc_cmake.html) integration is needed mostly for pre-processing input data and model representation in OpenVINO™ Runtime for more complex applications using [OpenVINO Model API](../OV_Runtime_UG/model_representation.md).
``` cmake
cmake_minimum_required(VERSION 3.0.0)
project(project_name)
find_package(ngraph REQUIRED)
find_package(InferenceEngine REQUIRED)
find_package(OpenCV REQUIRED)
find_package(OpenVINO REQUIRED)
add_executable(${PROJECT_NAME} src/main.cpp)
target_link_libraries(${PROJECT_NAME} PRIVATE ${InferenceEngine_LIBRARIES} ${OpenCV_LIBS} ${NGRAPH_LIBRARIES})
target_link_libraries(${PROJECT_NAME} PRIVATE openvino::runtime)
```
### Use Inference Engine API to Implement Inference Pipeline
@@ -457,7 +455,7 @@ Load the model to the device using `load_network()`:
@endsphinxdirective
This example is designed for the CPU device; refer to the [Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md) page to read about more devices.
This example is designed for the CPU device; refer to the [Supported Devices](../OV_Runtime_UG/supported_plugins/Supported_Devices.md) page to read about more devices.
#### Step 4. Prepare input
```py
@@ -491,4 +489,4 @@ Congratulations, you have made your first Python application with OpenVINO™ to
[ie_api_flow_cpp]: img/BASIC_IE_API_workflow_Cpp.svg
[ie_api_use_cpp]: img/IMPLEMENT_PIPELINE_with_API_C.svg
[ie_api_flow_python]: img/BASIC_IE_API_workflow_Python.svg
[ie_api_use_python]: img/IMPLEMENT_PIPELINE_with_API_Python.svg
[ie_api_use_python]: img/IMPLEMENT_PIPELINE_with_API_Python.svg


@@ -2,6 +2,8 @@
@sphinxdirective
.. _code samples:
.. toctree::
:maxdepth: 1
:hidden:


@@ -43,8 +43,8 @@ If a model has a hard-coded batch dimension, use `InferenceEngine::CNNNetwork::s
Inference Engine takes three kinds of a model description as an input, which are converted into an `InferenceEngine::CNNNetwork` object:
1. [Intermediate Representation (IR)](../MO_DG/IR_and_opsets.md) through `InferenceEngine::Core::ReadNetwork`
2. [ONNX model](../IE_DG/ONNX_Support.md) through `InferenceEngine::Core::ReadNetwork`
3. [nGraph function](../nGraph_DG/nGraph_dg.md) through the constructor of `InferenceEngine::CNNNetwork`
2. [ONNX model](../OV_Runtime_UG/ONNX_Support.md) through `InferenceEngine::Core::ReadNetwork`
3. [OpenVINO Model](../OV_Runtime_UG/model_representation.md) through the constructor of `InferenceEngine::CNNNetwork`
`InferenceEngine::CNNNetwork` keeps an `ngraph::Function` object with the model description internally.
The object should have fully-defined input shapes to be successfully loaded to Inference Engine plugins.
@@ -66,7 +66,7 @@ To feed input data of a shape that is different from the model input shape, resh
Once the input shape of `InferenceEngine::CNNNetwork` is set, call the `InferenceEngine::Core::LoadNetwork` method to get an `InferenceEngine::ExecutableNetwork` object for inference with updated shapes.
There are other approaches to reshape the model during the stage of <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#when_to_specify_input_shapes">IR generation</a> or [nGraph::Function creation](../nGraph_DG/build_function.md).
There are other approaches to reshape the model during the stage of <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model.html#when_to_specify_input_shapes">IR generation</a> or [ov::Model creation](../OV_Runtime_UG/model_representation.md).
Practically, some models are not ready to be reshaped. In this case, a new input shape cannot be set with the Model Optimizer or the `InferenceEngine::CNNNetwork::reshape` method.
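The reshape flow described above can be sketched in Python. The `batched_shapes` helper below is hypothetical (not part of the OpenVINO API) and assumes the batch is the first dimension; the commented-out portion assumes the 2021.x `openvino.inference_engine` Python API with placeholder model paths.

```python
def batched_shapes(input_shapes, batch_size):
    """Return a copy of {input_name: shape} with the batch (dim 0) replaced.

    Assumes the batch is the first dimension, which holds for most
    image models but is not guaranteed in general.
    """
    return {name: [batch_size] + list(shape)[1:]
            for name, shape in input_shapes.items()}


# With OpenVINO installed, the helper feeds CNNNetwork.reshape before the
# network is loaded to a device (sketch only; "model.xml"/"model.bin" are
# placeholder paths):
#
#   from openvino.inference_engine import IECore
#   ie = IECore()
#   net = ie.read_network(model="model.xml", weights="model.bin")
#   shapes = {name: info.input_data.shape for name, info in net.input_info.items()}
#   net.reshape(batched_shapes(shapes, batch_size=4))
#   exec_net = ie.load_network(network=net, device_name="CPU")

print(batched_shapes({"data": [1, 3, 224, 224]}, 4))  # → {'data': [4, 3, 224, 224]}
```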
@@ -223,4 +223,4 @@ The Inference Engine provides a special mechanism that allows adding support of
### See Also:
[Hello Reshape Python Sample](../../inference_engine/ie_bridges/python/sample/hello_reshape_ssd/README.html)
[Hello Reshape Python Sample](../../samples/python/hello_reshape_ssd/README.html)
