Compare commits


360 Commits

Author SHA1 Message Date
Andrey Zaytsev
e9969112af Removed confusing ONNX RT EP deprecation note (#2400) 2020-09-23 20:16:45 +03:00
Anton Romanov
d32a2d63de Added Conda CentOS documentation 2020.4 (#1367)
* Added Conda CentOS documentation 2020.4

* Added OS
2020-09-01 13:17:00 +03:00
Mikhail Letavin
c8d07caf67 [IE CLDNN] Move iGPU to first position in GPU device map (#1829) 2020-08-20 13:46:23 +03:00
Mikhail Letavin
f855af885d [IE CLDNN] dp4a query that should work with new driver (#1768) 2020-08-14 15:38:24 +03:00
Denis Orlov
c880ecb78a Merge 2020.4.0.1 (#1764)
* [GNA] Update GNA lib + propagate QoS timeout to the calling app (#1188)

* [GNA] Remove empty PWL (#1459)

* [GNA] Support timeout value set in Wait (#1499)

* [GNA] Bump GNA2 version to 1010 (#1510)

* [GNA] stored request id for completed sync infer request in order to get status later using wait() (#1458)

* stored request id for completed async infer request in order to get its status later

* preserved the 'not started' status for multiple sequential calls to wait()

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

* [GNA] Fix callbacks (#1607)

* [GNA] Bump GNA2 version to 1047 (#1614)

* merge documentation updates from 2020/4 branch (#1671)

* update system requirements (#1321)

* update release version in readme

* Doc Migration from Gitlab (#1289)

* Update FakeQuantize_1.md

* Update performance_benchmarks.md

* Updates graphs for FPGA

* Update performance_benchmarks.md

* Change DL Workbench structure (#1)

* Changed DL Workbench structure

* Update performance_benchmarks_faq.md

* Fixes in DL Workbench layout

* Fixes for CVS-31290

* [DL Workbench] Minor correction

* Fix for CVS-30955

* Added nGraph deprecation notice as requested by Zoe

* fix broken links in api doxy layouts

* Fixed POT TOC

* Update PAC_Configure.md

PAC DCP 1.2.1 install guide.

* Update inference_engine_intro.md

* Update opset.md

* Update VisionAcceleratorFPGA_Configure.md (#1378)

Updated from 2020.3 to 2020.4

Co-authored-by: domi2000 <domi2000@users.noreply.github.com>

* Updated documentation for 2020.4 (#1434)

* Updated documentation for 2020.4

* Updated Core::ReadNetwork documentation (#1178)

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: domi2000 <domi2000@users.noreply.github.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

* Documentation updates for 2020.4 (#1672) (#1729)

* Doc updates

* 2020.4 doc updates

* Removed </br> tag

* Minor fix

* Minor fixes

* Updated documentation for 2020.4 (#1434)

* Updated documentation for 2020.4

* Updated Core::ReadNetwork documentation (#1178)

* Fixed docs

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

Co-authored-by: Pavel Rodionov <pavel.rodionov@intel.com>
Co-authored-by: Eugene Smirnov <eugene.smirnov@intel.com>
Co-authored-by: Alexey Suhov <alexey.suhov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: domi2000 <domi2000@users.noreply.github.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2020-08-14 12:24:36 +03:00
Andrey Zaytsev
4fa0e8aad7 Fixes links for DL Streamer samples (#1767) 2020-08-13 18:46:43 +03:00
Andrey Zaytsev
cf8450e56f Update Model_Optimizer_FAQ.md (#1753) 2020-08-13 13:16:31 +03:00
Andrey Zaytsev
648c86ee9a Documentation updates for 2020.4 (#1672)
* Doc updates

* 2020.4 doc updates

* Removed </br> tag

* Minor fix

* Minor fixes

* Updated documentation for 2020.4 (#1434)

* Updated documentation for 2020.4

* Updated Core::ReadNetwork documentation (#1178)

* Fixed docs

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2020-08-07 15:29:18 +03:00
Ilya Lavrenov
b75ce14c21 Removed legacy include from plugin api (#1651) 2020-08-06 11:34:24 +03:00
Ilya Lavrenov
a9fe3c44d1 Minimized ie_parallel.hpp include in plugin api (#1650) 2020-08-06 11:25:12 +03:00
Ilya Lavrenov
8efed7cdca Updated documentation for 2020.4 (#1434)
* Updated documentation for 2020.4

* Updated Core::ReadNetwork documentation (#1178)

* Fixed docs

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2020-07-23 14:17:15 +03:00
Nikolay Tyukaev
a9c6e7269f Update VisionAcceleratorFPGA_Configure.md (#1378)
Updated from 2020.3 to 2020.4

Co-authored-by: domi2000 <domi2000@users.noreply.github.com>
2020-07-18 12:51:10 +03:00
Nikolay Tyukaev
2f1283687b Doc Migration from Gitlab (#1289)
* doc migration

* fix

* Update FakeQuantize_1.md

* Update performance_benchmarks.md

* Updates graphs for FPGA

* Update performance_benchmarks.md

* Change DL Workbench structure (#1)

* Changed DL Workbench structure

* Fixed tags

* fixes

* Update ie_docs.xml

* Update performance_benchmarks_faq.md

* Fixes in DL Workbench layout

* Fixes for CVS-31290

* [DL Workbench] Minor correction

* Fix for CVS-30955

* Added nGraph deprecation notice as requested by Zoe

* fix broken links in api doxy layouts

* CVS-31131 fixes

* Additional fixes

* Fixed POT TOC

* Update PAC_Configure.md

PAC DCP 1.2.1 install guide.

* Update inference_engine_intro.md

* fix broken link

* Update opset.md
2020-07-16 15:24:27 +03:00
Alexey Suhov
023e7c2c3f update system requirements (#1321)
* update system requirements

* update release version in readme
2020-07-14 20:25:39 +03:00
Alexey Suhov
34ddb70f7d fix build target name in demos for Windows (#1248) 2020-07-07 18:26:50 +03:00
Andrew Bakalin
21e092122f [VPU] WA for static shape allocation (#1106) 2020-06-24 16:28:59 +03:00
Roman Kazantsev
92c1333653 Correct removing nodes from graph and add test for ConstToResult transform (#1083)
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2020-06-24 15:39:08 +03:00
Roman Kazantsev
c26ec8b312 [IE] Preserve output data name after merging and update output data map (#1092)
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2020-06-24 12:30:25 +03:00
Andrew Bakalin
32054ff180 [VPU] Support for originalLayersNames attribute in exec graph (#1073) 2020-06-23 12:19:15 +03:00
Ilya Churaev
7cff005ada Disable ref implementations (#951)
* Add NGRAPH_EVALUATE_ENABLE flag and disable all reference implementations

* Enable some evaluate methods

* Added dynamic library with reference implementations

* Fixed tests

* Enabled unsqueeze  CF

* Removed nGraph test library

* Disable all nGraph tests to check

* Enable some reference implementations

* Added debug message

* EVALUATE true

* Revert "Disable all nGraph tests to check"

This reverts commit 38bca3ed3dfed029e892fe609ea7e48c5cfadb67.

* Enable some implementations

* Removed some TYPE_CASE reference implementations

* Fixed reshape

* Revert types for Broadcast and Add

* Disabled failing gpu_engine.user_context test

* Disabled failed nGraph tests

* Add u8 for non_zero

* Revert "Added debug message"

This reverts commit 4b9f4894f5ae9963426830ac5e5eb833af8847aa.

* Revert "Enable some reference implementations"

This reverts commit d2001a636df7504e0ad5abe5c98725ef0be07379.

Revert "Enabled unsqueeze  CF"

This reverts commit 814a8e52cb2b673446d24e54ed11af1dd3d80fad.

Revert "Enable some evaluate methods"

This reverts commit 73767b8942d857bf60317f29120c98c528344a04.

* Revert "Add NGRAPH_EVALUATE_ENABLE flag and disable all reference implementations"

This reverts commit cfaa7d7e7bf34b617f53a556d24fea2189372592.
2020-06-23 12:17:40 +03:00
Ivan Tikhonov
06707cc53f Fix for Kaldi models with Memory layers and a batch size greater than 1 (#1025)
* fix kaldi models with memory (batch > 1)

* apply review comments

* Added test for the case using the SetBatchSize function when ReadValue op is in the network

* Check status code instead of message

* Use new ngraph api
2020-06-23 11:47:18 +03:00
Konrad Dobros
fff93d8f05 [IE CLDNN] Add work-around for 1d input to Gather (#1069) 2020-06-23 11:44:20 +03:00
Gladilov, Gleb
637ddd5dfb [IE][VPU]: Fixes klocwork issues (#1075) 2020-06-23 09:58:12 +03:00
Ivan Tikhonov
fa4c5e8e38 Fix ARM build: explicit type conversion (#1061)
* fix arm build: explicit type conversion

* Use explicit conversion in prior_box_ie.cpp
2020-06-22 23:37:54 +03:00
Maxim Vafin
c9fc6f0531 Fix OneHot transformation for Bert Squad opset 10 (#954)
* Add transformation for squeezing depth input for ONNX OneHot operation because in some TF models it has shape [1] instead of []
2020-06-22 18:58:07 +03:00
Denis Orlov
c9eb6ae62b [GNA] Initialize a local variable (#1066) 2020-06-22 18:49:22 +03:00
Alexander Chaiko
eef56ca80c [IE CLDNN] WA to 1d input for concat (#1040) 2020-06-22 15:25:17 +03:00
Gorokhov Dmitriy
36f1c00e02 [CPU] Fixed issue with unsupported reorder case for grouped convolutions (#893) 2020-06-22 14:06:53 +03:00
Konrad Dobros
5c43765011 [IE CLDNN] Fix activation implementation for fsv16 format (#1038)
For the b_fs_yx_fsv16 format, the reference kernel rounds the number of features dispatched up to a multiple of 16. This change adds the correct check in the kernel so that work-items inside this dispatch padding return early.
Previously those work-items could corrupt memory expected to be filled with 0s, and for parametrized activation, due to bounds checking with the modulo operator, they could have been corrupting the actual layer output.

Issue: CVS-27672
2020-06-22 09:17:00 +03:00
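A minimal sketch of the early-exit guard described above (hypothetical names, not the actual clDNN kernel code), assuming the dispatch size is rounded up to a multiple of 16:

    // Work-items in the padded tail of the dispatch must return without writing anything.
    void activation_ref_workitem(int feature_id, int feature_count, float* output) {
        if (feature_id >= feature_count)
            return;  // this work-item lies in the dispatch padding
        // ... compute the activation for this feature and write output[feature_id] ...
    }
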
Ilya Lavrenov
bbfc9bbc14 Deprecated IGNORE_IR_STATISTIC VPU option (#1028) 2020-06-20 10:38:47 +03:00
Pavel Rodionov
9c607528ef [GNA] Support export model with multiple inputs/outputs and Permute layer (#1024) 2020-06-19 18:06:38 +03:00
Denis Orlov
ae9e0510f0 [GNA] Additional checks (#998) 2020-06-19 13:14:32 +03:00
Edward Shogulin
76af547c17 [LPT] BERT with specific biases support & improvement (#968)
* [LPT] BERT with biases support

* [LPT] Gemm biases and quantization

* [CPU] Fixed FullyConnected + Depthwise node fusing

* [LPT] FullyConnected 3D: symmetric quantization support

* [LPT] FullyConnected 3D: symmetric quantization support fix

* [CPU] Fixed FullyConnected + Depthwise fusing initialization

Co-authored-by: dmitrygo <dmitry.gorokhov@intel.com>
2020-06-19 13:14:20 +03:00
Kamil Magierski
5e97a3123f Fix cases when const blob precision is not FP32/FP16 (#1000)
Co-authored-by: kmagiers <kmagiers@intel.com>
2020-06-19 13:13:19 +03:00
Andrey Dmitriev
532dec140b [GNA] fix permute 0_2_1 (#993) 2020-06-19 10:20:55 +03:00
Vladimir Paramuzov
c41c6294f9 [IE CLDNN] Fix strided slice (#953) 2020-06-19 08:23:25 +03:00
Gorokhov Dmitriy
3bbe88e659 [IE Common][WA] Skipped const folding for Convolution layer (#1002) 2020-06-19 01:25:20 +03:00
Maxim Andronov
2f3d5f68cd [CPU] fix one dims scale shift (#983) 2020-06-18 14:21:07 +03:00
Evgeny Talanin
843f81a1cc [IE TESTS] disable some Myriad tests on Win (#763) (#988)
* [IE TESTS] disable some Myriad tests on Win

* Skip test with todo

Co-authored-by: Irina Efode <irina.efode@intel.com>
2020-06-18 13:57:21 +03:00
Pavel Esir
c596707a09 fixed some typos in MO help (#979) 2020-06-18 11:02:28 +03:00
Konrad Dobros
cf60baf2f0 [IE CLDNN] Fix gather dimensions calculation (#960) 2020-06-18 00:31:17 +03:00
Nikita Kudriavtsev
aeb70036d7 [IE Myriad] Remove Myriad 2 from supported devices in XLink (#978) 2020-06-17 17:47:55 +03:00
Daria Mityagina
dea04dae8c [IE Myriad] - WrapInLoop fix: if data has consumer's input inside subgraph - replace them (#958) 2020-06-17 17:27:17 +03:00
Ilya Churaev
14b44803ba Fixed cpack information, removed some links (#975) 2020-06-17 17:17:10 +03:00
Andrey Dmitriev
06286f2aae [GNA] Added fix multiple output with one go to memory and test (#888)
2020-06-17 11:23:56 +03:00
Ilya Churaev
97e5fc4bae Use creators only for default opsets (#932) 2020-06-16 12:25:06 +03:00
Alexey Tarakanov
47218284b2 Support fp16 networks for releases_2020_4 (#936) 2020-06-16 10:31:57 +03:00
Andrey Dmitriev
6079a35b81 [GNA] Added test for ScaleShift and fixed power layer with non-zero shift (#922)
* [GNA] Added test for ScaleShift and fixed power layer with non-zero shift

* added tests

* Test Assert

* rebuild
2020-06-16 00:32:28 +03:00
Roman Kazantsev
4f4352f301 Fix preserving names of output layers after TopK NGraph transformation (#928)
* Fix preserving names of output layers after TopK NGraph transformation (#843)

* Fix preserving names of output layers after TopK NGraph transformation

It helps to infer semantic-segmentation-adas-0001 model. See CVS-31977.

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix a test for TopK

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix TopK NGraph transformation and its test

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Disable smoke_LoadNetworkAccuracy due to sporadic failure

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2020-06-15 20:57:45 +03:00
Anastasia Kuporosova
a67d74c41f [Python API] Fix long inference (#897) 2020-06-15 16:21:41 +03:00
Ivan Tikhonov
26c563132d Revert prior box constant folding (#906)
* Revert "Const folding and reference implementation for PriorBox(Clustered) ops (#785)"

This reverts commit 9fc818478a.

* apply codestyle for ngraph part
2020-06-15 12:38:27 +03:00
Ilya Lavrenov
dc1ca195dd Updated dates of removal for deprecated API (#911) 2020-06-15 12:24:27 +03:00
Vladimir Paramuzov
f5ad3e6f89 [IE CLDNN] Fixed clone network to preserve original CNNNetwork (#870) 2020-06-12 15:53:30 +03:00
Konrad Dobros
6c736ce001 [IE CLDNN] Fix fsv16 -> bfyx reorder removal (#873) 2020-06-12 15:43:54 +03:00
Anastasia Kuporosova
30ab6534e1 [Python API] Fixate requirements (#905) 2020-06-12 12:06:11 +03:00
Ilya Lavrenov
259a4c25ce TESTS: Added test for parallel LoadNetwork with accuracy check (#858) 2020-06-12 11:56:59 +03:00
Andrey Somsikov
347930008c Use default thread sanitizer linkage (#899)
GCC and CLang *default* sanitizer linkage differs (static vs. dynamic).
Prefer default behavior as alternate seen having issues.

Default (GN)U linker fails with unresolved symbols linking Clang built
binaries with sanitizer enabled. Force use LLVM linker lld for Clang
builds.

Sanitizer instrumentation and link flags should be retained for all
binaries. Updating samples cmake configuration to keep those flags
after unset logic at the ie_build_samples().
2020-06-12 00:36:03 +03:00
Evgeny Latkin
4fa251483a [IE][Myriad] fix HW tiling (#894) 2020-06-11 20:48:56 +03:00
Vladimir Paramuzov
30f8af70fc [IE CLDNN] fix perf for fsv16 global avg pooling (#666) 2020-06-11 20:44:37 +03:00
Andrew Bakalin
3fc6d8a188 [VPU] Update firmware (#898) 2020-06-11 20:44:20 +03:00
Denis Orlov
66c8df6a87 [GNA] Fixes in checks, asserts, etc. (#867) 2020-06-11 20:04:46 +03:00
Nikolay Shchegolev
e53eb86334 [Common] Static analysis issues. Part II. 2020-06-11 19:59:44 +03:00
Edward Shogulin
2df99d4263 [LPT] Static code analysis issues fix (#889) 2020-06-11 15:09:20 +03:00
Gleb Kazantaev
deab4d38b0 Fix NopElimination (#869) 2020-06-11 13:28:27 +03:00
Vladimir Paramuzov
412428f1dd [IE CLDNN] Always use FP32 as intermediate type for fused quantize (#829) 2020-06-11 12:22:27 +03:00
Evgeny Lazarev
167c96a8af Relaxed MO requirements for "protobuf" package (#862) 2020-06-10 18:26:16 +03:00
Gleb Kazantaev
b7363ba711 Fix divide conversion for integer input type (#853) 2020-06-10 16:25:57 +03:00
Evgeny Lazarev
5cef9f3734 Fixed StridedSlice to Crop transformation (#836) (#845)
* Fixed StridedSlice to Crop transformation to not apply when rank of data is changed

* Added unit test for StridedSlice to Crop transformation
2020-06-10 11:54:02 +03:00
Andrey Dmitriev
0bf1f53356 [GNA] Added support permute layer (#723)
* [GNA] Added GNA natively supported permute layer cases.
2020-06-09 16:43:01 +03:00
Maksim Doronin
18004bdb5e [IE VPU] Dynamic Broadcast tests (#737)
* [IE VPU] Enable StaticShapeBroadcast tests

* [IE VPU] DSR: support case when shape is output and input for stage

* [IE VPU] Enable Broadcast and Transpose tests

* [IE VPU] DSR: fix typo

* [IE VPU] Add assertion for numConsumer in DSR

* [IE VPU] Added CheckMyriadX helper method

* [IE VPU] New DSR assert for input->getInputTo

* [IE VPU] Fix myriad2 tests bug
2020-06-09 16:10:12 +03:00
Ivan Tikhonov
9fc818478a Const folding and reference implementation for PriorBox(Clustered) ops (#785)
* Constant folding for PriorBox, PriorBoxClustered; Deleted PriorBoxIE, PriorBoxClusteredIE and transformations; Added unit tests; codestyle

* Delete debug info

* delete unnecessary convert_prior_to_ie_prior.hpp file

* fix ngraph reader tests; delete PriorBoxIE functional test

* fix for ngraph reader tests

* Apply review comment

* apply ngraph codestyle

* restore PriorBoxClustered tests in disabled state
2020-06-09 14:47:49 +03:00
Denis Orlov
ef8a8dd309 add support for multiple scale factors in speech sample (#835)
Co-authored-by: Anna Alberska <anna.alberska@intel.com>
2020-06-09 14:36:28 +03:00
Andrey Sokolov
d4e880de3d [IE VPU] Update firmware; enable convolution VPU OCL tests (#802) 2020-06-09 14:34:10 +03:00
Vladimir Paramuzov
fe198dd544 [IE CLDNN] Added 6d tensor support in eltwise/scale primitives (#826) 2020-06-09 14:29:36 +03:00
Anton Zaytsev
b0eb3e67ee [ci-skip][IE MKLDNN] Add Precision U16 in MKLDNN (#783) 2020-06-09 14:20:43 +03:00
dmitrygo
434361cea9 [TESTS] fixes after rebase 2020-06-09 14:11:18 +03:00
dmitrygo
aa30580109 [CPU] mkldnn submodule up 2020-06-09 14:11:18 +03:00
dmitrygo
051a429c31 [LPT] Fixed quantizeBlob routine for 3D case 2020-06-09 14:11:18 +03:00
Edward Shogulin
8eb88d51f2 [LPT] GPU tests were fixed 2020-06-09 14:11:18 +03:00
Edward Shogulin
971811c8c8 [LPT] [TEST] LayerTransformation test threshold was updated 2020-06-09 14:11:18 +03:00
Anton Voronov
629ca3a5d8 [CPU] Gemm node: supported precisions U8 and I8 and added tests 2020-06-09 14:11:18 +03:00
Edward Shogulin
92e5e010b9 [LPT] FullyConnected & Gemm tests 2020-06-09 14:11:18 +03:00
dmitrygo
c7313bab7f [CPU] Fixed weights candidate initialization in FC node 2020-06-09 14:11:18 +03:00
Edward Shogulin
d798831c95 [LPT] Gemm and FullyConnected 3D improvement 2020-06-09 14:11:18 +03:00
Edward Shogulin
4d01adbe01 [LPT] tests extending 2020-06-09 14:11:18 +03:00
Edward Shogulin
1d51d2185a [LPT] [Test] Low precision transformations functional tests infrastructure improvement 2020-06-09 14:11:18 +03:00
Edward Shogulin
65b00c1dfb [LPT] FullyConnected transformation fix 2020-06-09 14:11:18 +03:00
Edward Shogulin
9758305b32 [nGraph] Remove Reshape for 3D FullyConnected 2020-06-09 14:11:18 +03:00
Edward Shogulin
d7c77212b8 [IE COMMON] [LPT] Concat asymmetric quantization with signed interval fix 2020-06-09 14:11:18 +03:00
Edward Shogulin
e544dd1e28 [IE COMMON] [LPT] Support 3D layout for FullyConnected transformation 2020-06-09 14:11:18 +03:00
dmitry-gorokhov
bc98d17121 [CPU] Added custom implementations (power=0.5, power=-1.0) for Power node 2020-06-09 14:11:18 +03:00
dmitry-gorokhov
bcd38100db [CPU][WA] Supported 3D layout for FullyConnected primitive
Extended jit uni depthwise primitive to support 3D inputs
2020-06-09 14:11:18 +03:00
Nikolay Shchegolev
b6f2c06b26 [Common] Static analysis issues. (#804) 2020-06-09 13:49:50 +03:00
Vladimir Paramuzov
b4546ad1e0 [IE CLDNN] Better error message when output is not found (#824) 2020-06-09 12:26:28 +03:00
Edward Shogulin
d02b9a9b81 [LPT] [TEST] LayerTransformation test threshold was updated (#828) 2020-06-09 10:34:17 +03:00
Maxim Andronov
d8e82d56d2 [CPU] fix set up config for bin conv fused (#608) 2020-06-09 09:59:29 +03:00
Anastasia Kuporosova
e91453e006 [Python API] Fixate requirements versions (#830) 2020-06-09 08:49:49 +03:00
Anastasia Kuporosova
6a60f93af0 [Python API] Fix deprecation warnings (#812) 2020-06-09 08:48:08 +03:00
Edward Shogulin
ca643edb1b [LPT] [CPU] NormalizeL2 transformation (#662)
* [LPT] NormalizeL2 transformation

* [LPT] NormalizeL2 transformation tests improvement

* [CPU] Fixed depthwise injector aux_vec_count for broadcasting case

* [LPT] Normalize on GPU enabling

Co-authored-by: Zinoviev, Vladimir <vladimir.zinoviev@intel.com>
Co-authored-by: dmitrygo <dmitry.gorokhov@intel.com>
2020-06-08 22:42:50 +03:00
Pavel Esir
7a11e36eeb Add fixedscale(bias) components to Kaldi (#725)
* Added fixed scale(bias) components

* Successfully converted after adding fixed bias,scale components

* Added unittests
2020-06-08 21:37:44 +03:00
Mikhail Letavin
155916acde [IE CLDNN] Fix variable initialization issues (#816) 2020-06-08 21:07:50 +03:00
Nikita Kudriavtsev
ac65ea30fd [ICV] Watchdog switch + ddr initialization (#554)
* [IE Myriad] Added XLinkBootFirmware method in XLink API for booting firmware buffer

* [IE Myriad] Patch firmware in mvnc. Added test to check device reset without connecting.

* [IE Myriad] Added option MOVIDIUS_DDR_TYPE for Myriad plugin

* [IE Myriad] Added tests for new option MOVIDIUS_DDR_TYPE

* [IE Myriad] Update firmware 1201 -> 1212

* [IE Myriad] Convolution3x3 tests are disabled due to firmware issue. #-32921
2020-06-08 20:51:45 +03:00
Irina Efode
3b5de94a09 [IE TEST] Eltwise tests refactoring (#726)
* [IE TEST] Eltwise tests refactoring

* [IE TESTS] Fix comments
2020-06-08 18:44:42 +03:00
Denis Orlov
ff00817bb7 [GNA] Support changing the execution mode in runtime (#801) 2020-06-08 18:43:12 +03:00
iliya mironov
eefaf56075 Fix unit tests for select layer. (#638)
* Fix unit tests for select layer.
2020-06-08 18:39:40 +03:00
Maxim Vafin
f1811ad060 Implement support for opset3 EmbeddingBag ops (#546)
* [MO] Implement EmbeddingBag_3

* Transform dynamic sub-graph of Wide and Deep into EmbeddingSegmentsSum

- Expressed SparseWeightedSum sub-graph through EmbeddingSegmentsSum
- Removed experimental SparseWeightedSum layer
- Implemented tests for the transformation

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix EmbeddingBag shape infer

* Fix EmbeddingSegmentsSum transformation for Wide and Deep

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix EmbeddingSegmentSum replacer after ports swap

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update package_BOM.txt

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Add unit tests for EmbeddingXXX shape infer

* Fix ATen resolver

* Remove deleted files from BOM

* Add opset version to embedding_bag

* Use base class for EmbeddingBag

* Fix per_sample_weights case

* Fix EmbeddingSegmentsSum transformation

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix EmbeddingBag checks

* Fix ATen front transformation and merge conflicts

* Fix BOM

* Work around limitation for I64 input of W&D model

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Cleanup Where operation to fix effect of WhereDecomposition transform

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix BOM

* Correct EmbeddingSegmentSum transform for Wide and Deep

Add casting segment ids to i32 and remove ConstToResult sub-graph.

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update BOM with RemoveConstToResult transform

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Add more comments for RemoveConstToResult transformation

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Remove useless logging in EmbeddingSegmentsSum transformation

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Small fixes

* Move EmbeddingBag resolving back to front phase

* Improve error messages

* Fix typo in unittests

* Reimplement sparse_reshape middle transform

Avoid deprecated API.

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Clean-up graph after sparse_reshape and ConstToResult transformation

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix clean-up for transformations

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix clean-up for transformation #2

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2020-06-08 18:06:40 +03:00
Konrad Dobros
d155483573 [IE CLDNN] Optimize 1x1 imad convolution kernel (#757) 2020-06-08 16:44:50 +03:00
Andrey Somsikov
626bc4f3d4 Add commit links to memcheck report (#820) 2020-06-08 15:08:58 +03:00
Tomasz Dołbniak
60d4d62536 Disable warnings-as-errors for ONNX target (#749)
* Disable warnings-as-errors for ONNX target

* Disable warnigs-as-errors for windows too

* Change WIN32 -> MSVC
2020-06-08 13:52:45 +03:00
Evgenya Stepyreva
e7f5f53f92 [ MO ] Groupped conv fusion (#797)
Fixed the group convolution fusion pass to properly get the feature dim in NCHW layout case.
2020-06-08 13:00:54 +03:00
Edward Shogulin
a224078c5c [LPT] [Test] DepthToSpace sporadic fail fix (#815) 2020-06-08 12:55:37 +03:00
Alexander Zhogov
9a968b12db Azure CI: increase timeout for Mac to 180 min 2020-06-08 12:17:50 +03:00
Vladimir Paramuzov
f0498ad011 [IE CLDNN] Enable ShuffleChannels op (#787) 2020-06-07 22:57:20 +03:00
Edward Shogulin
63ee9f8916 [LPT] [CPU] DepthToSpace transformation (#663)
* [LPT] [TEST] LayerTransformation generalization

* [LPT] DequantizationDetails extending

* [LPT] DepthToSpace transformation implementation
2020-06-07 21:12:52 +03:00
Alexander Zhogov
93b60cacfa Azure: Add Ninja (#803)
* Azure: Add Ninja

* Fix 'Install Ninja' on Linux

* Fix bin dir path on Windows

* Add -Wno-unused-variable on Mac

* Add -Wno-error=unused-command-line-argument on Mac

* Set CXXFLAGS for Mac

* Improvements

* Fix BIN_DIR on Linux
2020-06-06 15:56:24 +03:00
Vladimir Paramuzov
0022eebd71 [IE CLDNN] Enable DepthToSpace (#780)
Enabled DepthToSpace ngraph transformation
Updated implementation to support 5d and mode parameter
fsv16 direct support
Functional tests for GPU
2020-06-05 20:16:47 +03:00
Daria Mityagina
807f85f93f It is a duplicate of PR #656, but without the test modification (#794) 2020-06-05 19:57:03 +03:00
Chance Luo
7f09b54af8 Disable Hw Avg Pooling for small output tensors if excludePad=true (#772) 2020-06-05 19:47:53 +03:00
Kami-996
cad3ccd8a3 add /wd4819 to disable C4819 warning, which is treated as error in win32 (#767)
Co-authored-by: jasonlee <jasonlee@qiyi.com>
2020-06-05 16:04:59 +03:00
emmanuelattia-philips
a0d1dae91d Fix: ITT_INCLUDE_DIR was not correctly detected (#748) 2020-06-05 14:46:39 +03:00
Ilya Znamenskiy
4d3ddc1684 [IE CLDNN] GEMM int8 optimization using MMAD macro (#635) 2020-06-05 14:28:21 +03:00
Anton Voronov
70c2058b61 [CPU] supported ShuffleChannels and added tests (#636) 2020-06-05 14:10:55 +03:00
Ilya Churaev
3571d44896 Save the name of output data if we remove previous layer (#760)
* Save the name of output data if we remove previous layer

* Added test
2020-06-05 13:36:35 +03:00
Pavel Rodionov
20d812d959 [GNA] Set default GNA library to GNA2 (#771) 2020-06-05 13:00:58 +03:00
dmitrygo
b485e829d6 [CPU] DepthToSpace review leftovers 2020-06-05 12:47:24 +03:00
Maxim Vafin
f51c533ea8 Add ReduceL2 decomposition (#733)
* Add ReduceL2 decomposition

* Add ReduceL2 transformation tests

* Add const propagation unit test for ReduceL2
2020-06-05 12:34:57 +03:00
Denis Orlov
67e3e06bee Fix hetero mode in speech sample - set config when loading network (#786) 2020-06-05 11:54:03 +03:00
Ilya Churaev
7a5d447e9f [SAMPLES] Use defined constant instead of string (#788) 2020-06-05 11:22:24 +03:00
Gladilov, Gleb
f80bd537bf [IE][VPU][nGraph]: Fixes DTS transformations to properly keep outputs names (#734)
* NonZero, Broadcast

* Concat

* Gather

* [IE][VPU][nGraph]: Fixes DTS transformations to correctly keep outputs names

* [IE][VPU][nGraph]: Fixes dynamic to static shape nonzero tests

Co-authored-by: Roman Vyunov <roman.vyunov@intel.com>
2020-06-05 11:16:52 +03:00
Edward Shogulin
f9ac555857 [LPT] Output layers update fix (#754) 2020-06-05 10:54:38 +03:00
Sergey Shlyapnikov
6e491a89ad [IE CLDNN] Improve Gather performance and add fusing support (#736) 2020-06-05 10:20:58 +03:00
Egor Churaev
2100521a14 [IE CLDNN] Implement NormalizeL2 int8 kernels (#720) 2020-06-05 10:16:27 +03:00
Ilya Churaev
a705f0c358 Avoid loading of reader if it doesn't exist (#758)
* Avoid loading of reader if it doesn't exist

* Updated error messages
2020-06-04 21:21:13 +03:00
Maxim Vafin
c7d130efbe Fix Proposal for the case of 2 outputs (#773) 2020-06-04 20:56:46 +03:00
Evgeny Lazarev
c10ff28f12 Added default value for 'aligned' in the ExperimentalDetectronROIFeatureExtractor for backward compatibility (#777)
Fixed a backward compatibility issue where old IRs with the ExperimentalDetectronROIFeatureExtractor operation could not be loaded with the new IE
2020-06-04 20:47:52 +03:00
Lukasz Debski
698dfc4bf6 [IE CLDNN] Permute fused ops support (#642) 2020-06-04 17:01:21 +03:00
Alexey Varyzgin
85aa23ec8a [CPU][BF16] Default Optimisation Capability of BF16 was enabled on CPX (#647) 2020-06-04 16:06:15 +03:00
Maxim Vafin
1001caf04e Add support for ONNX Pad-11 (#744) 2020-06-04 14:48:31 +03:00
Denis Orlov
0e60aed97a [GNA] Support 100 inputs, instead of 10 (#741) 2020-06-04 14:33:09 +03:00
Gorokhov Dmitriy
3183c116d9 DepthToSpace, SpaceToDepth layers optimizations (#706)
* [CPU] Updated DepthToSpace and SpaceToDepth layers to be conformant with the specification

The patch also includes n[d]hwc layout support as well as some optimizations

* [CPU][TESTS] Removed old DepthToSpace test since it doesn't corresponds to layer's specification

* [nGraph] Utilize CommonOptimizations pass with custom transformations callback
2020-06-04 14:25:19 +03:00
Evgenya Stepyreva
01e60d057d [ MO ] InterpolateConcat empty sources fix (#764) 2020-06-04 14:18:33 +03:00
Vladimir Paramuzov
d7fad0109a [IE CLDNN] Disabled sporadic detection output tests (#740) 2020-06-04 11:14:05 +03:00
Vladimir Paramuzov
28ffbf0857 [IE CLDNN] Remove unused fused deps for FQ (#712)
Remove unused fused FQ kernel arguments to avoid extra setArg() calls which significantly reduces host overhead
2020-06-04 10:30:46 +03:00
Egor Churaev
546377dc8e [IE CLDNN] Implement EmbeddingBag operations (#623)
Implemented three operations: EmbeddingBagPackedSum,
EmbeddingBagOffsetsSum and EmbeddingSegmentsSum. These operations do
the same work but have a different format of inputs.
2020-06-04 10:25:28 +03:00
Anton Voronov
e53b1b7fbc [MKLDNN_PLUGIN] Convolution node: skip initializing of primitive descriptors for planar layout if there is already jit primitive (#672) 2020-06-04 08:06:14 +03:00
Ilya Lavrenov
158d32139f Revert "Enabled thread tests (#717)" (#756)
This reverts commit 99a2423ec0.
2020-06-03 22:32:55 +03:00
wistal
2bb7010193 MO should support the LRN k param for Caffe models rather than fixing it to 1 (#716)
Co-authored-by: yipengqu <yipeng.qu@intel.com>
2020-06-03 20:33:55 +03:00
Alexey Suhov
1ffada0b23 [Docs] Fixes in readme files: (#750)
- change repo name to openvino
- update driver version
- fix path to samples data
- remove section about Movidius driver installation
- change latest release to 2020.3
- merge fixes in install_dependencies.sh from 2020 branch
2020-06-03 20:14:35 +03:00
Mikołaj Życzyński
023344a317 [IE CLDNN] Added fusing support to all pooling kernels (#689)
adds fusing support to all available pooling kernels
tests all possible input type/output type configurations
fixes minor bug in max pooling in pooling_gpu_test.cpp
fixes minor bug with yxbf format in pooling_gpu_ref and pooling_gpu_int8_ref kernels
fixes bug with b_fs_yx_fsv32 format in pooling_gpu kernel
resolves bug with max pooling accuracy mismatch in case of non-zero pad end layer parameter
resolves average pooling accuracy mismatch in case of non-zero pad end layer parameter
2020-06-03 19:44:27 +03:00
Lukasz Debski
e2d1ae7055 [IE CLDNN] Fixed stack overflow in calculate_prior_boxes pass (#747)
The problem was in program_impl::init_graph(): in calculate_prior_boxes we were trying to calculate the output layout of the entire network recursively, which caused a stack overflow. Calculating output layouts beforehand in processing order fixes this issue.
2020-06-03 19:42:50 +03:00
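A rough sketch of the idea (illustrative types only, not the actual clDNN program_impl code): walking nodes in processing order lets each node read its inputs' already-computed layouts, so no recursion over the whole network is needed.

    #include <vector>

    struct Layout { int b = 1, f = 1, y = 1, x = 1; };

    struct Node {
        std::vector<int> inputs;  // indices of producer nodes, earlier in processing order
        Layout output;            // filled in by the pass below
    };

    // Compute every node's output layout in a single pass over the processing order.
    void calc_output_layouts(std::vector<Node>& processing_order) {
        for (auto& node : processing_order) {
            Layout result;
            for (int in : node.inputs)
                result = processing_order[in].output;  // inputs are already resolved
            node.output = result;  // placeholder shape inference for the sketch
        }
    }
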
Ilya-Krylov
cfb5f27899 Add 'aligned' param to ExperimentalDetectronROIFeatureExtractor for CPU plugin and MO 2020-06-03 17:52:40 +03:00
Tomasz Dołbniak
53927034da Python API for Assign, ReadValue and ExtractImagePatches (#719) 2020-06-03 15:01:43 +02:00
LiweiSong
63a77bb4a1 mkldnn_memory_solver.hpp: include stdint.h to avoid build error (#729)
fix the following compile error:

inference-engine/src/mkldnn_plugin/mkldnn_memory_solver.hpp:60:9: error: 'int64_t' does not name a type
|    60 |         int64_t size;
|       |         ^~~~~~~

include stdint.h to fix this.

Signed-off-by: Liwei Song <liwei.song@windriver.com>
2020-06-03 15:19:29 +03:00
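A minimal illustration of the fix described above (simplified, not the actual mkldnn_memory_solver.hpp contents): pulling in stdint.h makes the fixed-width integer types visible, so the member declaration compiles.

    #include <stdint.h>  // declares int64_t; without it the struct below fails to build

    struct Box {          // hypothetical struct standing in for the one in the header
        int64_t size;
        int64_t offset;
    };
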
Edward Shogulin
7edebd8d87 [LPT] [TEST] Sporadic test fail fix (workaround) (#742) 2020-06-03 15:05:45 +03:00
Evgenya Stepyreva
da230131d0 [ nGraph ] FP16 for evaluate (#722) 2020-06-03 14:14:59 +03:00
Vitaliy Urusovskij
72d9a9fae7 Use pre-defined DB collection names in memcheck_upload.py CLI (#651)
Use argparses `choices` for `--db_collection` option.

Also removed unnecessary redefinition of `db_collection` in memcheck_upload.py
2020-06-03 13:54:38 +03:00
Sergey Shlyapnikov
20ef9a9423 [IE CLDNN] Improve kernel selection for b_fs_yx_fsv16 layout and optimize Convolution kernels (#730) 2020-06-03 13:42:15 +03:00
Anton Zaytsev
b457553593 [IE TESTS] Move InferRequestTests (#618)
* [IE TESTS] move Infer_request tests

* fix v0

* [ci-skip][IE TESTS] test update basic class v0

* [ci-skip][IE TESTS] test update basic class v1

* [ci-skip][IE TESTS] test update basic class

* [ci-skip][IE TESTS] test update basic class v3

* [ci-skip][IE TESTS] test update basic class final versions

* [ci-skip][IE TESTS] fix

* [ci-skip][IE TESTS] fix codestyle and comment

Co-authored-by: Irina Efode <irina.efode@intel.com>
2020-06-03 12:16:00 +03:00
Evgeny Talanin
ed85690136 Skip some functional tests on VPU (#568) 2020-06-03 12:15:06 +03:00
Adam Osewski
3a80f0476b [ONNX] GRU and RNN operators. (#607)
* Create generic RecurrentSequenceDirection enum.

* Helper class RecurrentSequenceOp.

* Add ONNX GRU & RNN operators.

* Use OutputVector.

* Update doc.

* Add UTs for GRU and skip them on IE_CPU

* Add UT for bidirectional mode and fix it.

* Normalize activation function name case.

* Add unit-tests for RNN operator.

* UT for GRU with linear_before_reset set to true.

* Fix ONNX GRU for linear_before_reset case.

* Remove unnecessary symbol export macro.

* Fix CentOS error.

* Update UTs.

- Update few tests accuracy tolerance
- Update rnn_fwd_activations with new reference values and model.

* Review comment: add check for static shape

* Add UT for RNN with constant inputs W, R.

* Skip UT with const W,R on IE_CPU
2020-06-03 12:01:56 +03:00
Gladilov, Gleb
4e0c7a217f [IE][VPU]: Faster-RCNN fixes on myriad plugin side (#711)
* [IE][VPU]: Enables pass for propagating dynamism to network outputs

If the network had a dynamic output and the myriad Front-End then inserted a convert stage at the end (to convert FP16 -> FP32, the output precision), dynamism would not be propagated: we would have a convert stage with a dynamic input but a static output. As a result, we get a run-time error in the Convert kernel: input and output shapes do not match.

At the moment, pass supports only Convert stage as output stage
over which we should propagate dynamism to outputs.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Fixes parse DSR in case of output data

Replacing the stage output must be done after replacing the data-to-shape parent, because the latter may access the original parent producer, but after replacing the stage output it would no longer have one.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Fixes MacOS build

* [IE][VPU]: Fixes shape data naming convention

Plugin part assumes that if there is dynamic data object, that's
represented as 2 different data objects (data and shape), then
shape data object has name = data object name + @shape suffix.

Pass that creates new dynamic data object should respect that
assumption.

* [IE][VPU]: Fixes dis-alignment in names of data objects representing dynamic data object

MyriadInferRequest::GetResult assumes that in case of dynamic data object
"data" data object and "shape" data object will have aligned names:
"shape" name = "data" name + "@shape" suffix.

In order to meet that expectation propagating dynamism pass must use output
data object name as prefix. Additionally, propagating pass must be applied
before converting shape notation pass in order to make output shape in IE
notation, not MDK, as MyriadInferRequest::GetResult is expecting.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
2020-06-03 11:43:19 +03:00
Mikhail Treskin
447dd3570d Remove deprecated layer test class (#610)
* Update activation layer test

Signed-off-by: Mikhail Treskin <mikhail.treskin@intel.com>

* Get rid of LayerTestsCommonDeprecated class

Signed-off-by: Mikhail Treskin <mikhail.treskin@intel.com>

* Fix activation tests instantiations for gpu and myriad plugins

* Remove leaking inferWithInterp function
2020-06-03 11:04:15 +03:00
Mikołaj Życzyński
3ea1657e4f [IE CLDNN] Activation with fused quantize bug fix (#613)
fixed bug connected with quantization fusing to activation
added scale and activation fusing support
added corresponding tests
2020-06-03 09:30:49 +03:00
Ilya Lavrenov
cdd31da1c7 Updated deprecated messages (#715) 2020-06-03 06:04:50 +03:00
Edward Shogulin
9f6fde9af2 [LPT] Output layers fix (#677) 2020-06-02 23:44:24 +03:00
Ilya Churaev
99a2423ec0 Enabled thread tests (#717) 2020-06-02 23:42:05 +03:00
Nikolay Shchegolev
4f6c976add [CPU] EmbeddingBagOffsetsSum, EmbeddingBagPackedSum, EmbeddingSegmentsSum operations. (#576)
* [CPU] EmbeddingBagOffsetsSum, EmbeddingBagPackedSum, EmbeddingSegmentsSum operations.

* Performance fix

* Perf v2

* Code style
2020-06-02 21:56:17 +03:00
Anna Alberska
4c44ce9795 add PassManagerSettings & create more legible description for concat quantization exception and a test for it (#563) 2020-06-02 21:03:27 +03:00
Andrey Babushkin
6f69ba04c8 [Jenkinsfile] Add failFast parameter (#721)
It allows us to rebuild a Jenkins build and wait until all stages are finished even if some of them fail
2020-06-02 20:22:25 +03:00
iimironov
a79cd75596 Imironov/cvs 31297 add yolov4 support (#594)
* Add transformation of softplus operation into log(1.0 + exp(x)).
2020-06-02 19:20:29 +03:00
Evgeny Latkin
b2816dc1ec [IE][Myriad] Gather: add test case (#644) 2020-06-02 17:41:19 +03:00
Gleb Kazantaev
638c7b891c Updated DeconvolutionIE to support dynamic shapes (#671)
* Updated DeconvolutionIE to support dynamic shapes

* Updated DeconvolutionIE to support output_shape input

* Updated ConvertConvolutions pass
2020-06-02 17:26:28 +03:00
Vladimir Paramuzov
cbe45b7d0a [IE CLDNN] Fixed names mapping chain in runtime graph to respect original names (#599) 2020-06-02 17:25:41 +03:00
Vitaliy Urusovskij
1d179fdb39 Add parallel downloads to stress tests (#678) 2020-06-02 17:24:22 +03:00
Gleb Kazantaev
be3b4a3362 specificCreator for Transpose operation (#713)
* Updated Transpose node convertor; replaced get_vector with cast_vector

* Replaced NodeCreator with specificCreator
2020-06-02 17:15:36 +03:00
Andrey Somsikov
5776b66fb2 Enable Control Flow Guard for Windows binaries (#714)
Control Flow Guard is security option.
2020-06-02 16:46:23 +03:00
azhogov
8377c714aa Revert "Add ittnotify from IntelSEAPI"
This reverts commit 0583b37a14.
2020-06-02 12:52:14 +03:00
azhogov
f15096e101 Revert "Use ittnotify from thirdparty"
This reverts commit 3863656f44.
2020-06-02 12:50:06 +03:00
Anton Chetverikov
265e3c7cba Remove TopKnormalizer from MO IR Reader transformation_list (#590)
* Remove TopKnormalizer from transformation_list and added call of normalize_outputs to fix read/save of some models
2020-06-02 12:43:41 +03:00
Maksim Doronin
daaeaa5881 [IE VPU] Enable s32->u8 conversion (#699) 2020-06-02 12:20:06 +03:00
Evgeny Lazarev
278868b7a1 Align MO requirements files (#710) 2020-06-02 11:32:39 +03:00
Vladimir Paramuzov
dbdaaa93dd [IE CLDNN] Quantized deeplabv3 optimizations (#646)
Enabled dilation for imad dw fsv16 kernel
Added argmax and mutable_data to fsv16 white list
Enabled byxf input for quantize scale_shift kernel
2020-06-02 09:17:39 +03:00
Somsikov, Andrey
3863656f44 Use ittnotify from thirdparty
VTune ittnotify lacks support for aarch64. Switching to use the ittnotify sources to support any target architecture.
2020-06-01 20:53:39 +03:00
Somsikov, Andrey
0583b37a14 Add ittnotify from IntelSEAPI
Adding ittnotify component of https://github.com/intel/IntelSEAPI

commit 88a56e0ecd162667c7afd2ee9969221d62a32509 (HEAD -> master, origin/master, origin/HEAD)
Merge: 6d743e1 809062a
Author: Alex <alexander.a.raud@intel.com>
Date:   Wed Jul 10 15:06:46 2019 -0700
2020-06-01 20:53:39 +03:00
Andrew Bakalin
d48e0ef5a6 [VPU][NGraph] Reuse NonZero evaluate in StaticShapeNonZero (#658)
* [VPU][NGraph] Reuse NonZero evaluate in StaticShapeNonZero

* [VPU][Tests] Adopt old tests to work with reverted indices

* [VPU] Update firmware
2020-06-01 18:57:06 +03:00
Katya
41ed6f0891 [IE Python API] fix TensorDesc test file name (#701) 2020-06-01 15:58:05 +03:00
Maksim Doronin
69e3af4c99 [IE VPU] OutShapeOfReshape per-layer tests (#631)
* [IE VPU] OutShapeOfReshape per-layer tests

* [IE VPU] Update firmware

* [IE VPU] OutShapeOfReshape: get rid of code duplication
2020-06-01 14:51:04 +03:00
Piotr Rozen
935b48b978 Added speech recognition demo package for centOS (#682) 2020-06-01 14:41:45 +03:00
Vladislav Vinogradov
88264b895a [IE] Fix build error (#703)
Missing changes in transformation library due to IE API dependency removal.
2020-06-01 13:09:23 +03:00
Mikhail Letavin
65f62945dd [IE CLDNN] Free up first copy of weights/biases that were transferred to USM device memory (#561) 2020-06-01 12:01:28 +03:00
Roman Kazantsev
004f414b89 Fix SparseWeightedSum transform for Wide and Deep (#698)
The WhereDecomposition transform is applied to the Where operation in the garbage sub-graph remaining after the SparseWeightedSum transform.

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2020-06-01 11:48:06 +03:00
Jedrzej Hajduczenia
4001d0d99f [IE CLDNN] Prefer bf(wz)yx format for reshape (#691)
Performance improvement for icnet-camvid model
2020-06-01 10:38:33 +03:00
Ilya Churaev
d970d0494e Removed dependency on Inference Engine from transformation library (#680)
* Removed dependency on Inference Engine from transformation library

* Change transformations export macro

* Fixed comments
2020-06-01 10:31:31 +03:00
Ivan Tikhonov
cd01ccd449 Reshape-Permute-Reshape pattern to DepthToSpace layer transformation (#601)
* implemented depth_to_space transformation

* renaming

* added functional tests, fixed mistakes in implementation of the transformation

* disable ConvertSpaceToDepth/ConvertDepthToSpace transformation for CPU plugin, enable DepthToSpaceFusion for CPU plugin only, add specific creators

* fix wrong include

* fix for functional tests: set transformation callback

* revert callback calls for CPU plugin

* move functions to .cpp file

* Apply review comments

* Apply additional review comments

* fix cast to bool type
2020-06-01 09:24:16 +03:00
Ewa Tusień
b4893945c7 [ONNX] Add Range op to ONNX importer (#548)
* Added Range op to ONNX importer.

* Disable tests for IE.
2020-06-01 05:59:39 +03:00
emmanuelattia-philips
7ec63cafe3 Ie capi callback with explicit calling convention (#697)
* Added explicit calling convention to CAPI callback

* Fixed typo spacing

* Renamed INFERENCE_ENGINE_CALLBACK to INFERENCE_ENGINE_C_API_CALLBAC to make the macro really specific to the C API
2020-05-31 23:19:37 +03:00
Gladilov, Gleb
3bf7a69df1 [IE][VPU]: Faster-RCNN fixes on myriad plugin side (#665)
* [IE][VPU]: Fixes deallocation data for cases of CMX allocator run

The final loop tries to deallocate data objects that keep shape values for other data objects that are outputs of a model. But the case when the allocator takes only CMX data into consideration was not handled, and since allocation could not happen, it led to a failure on deallocation of a data object that had not been allocated.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Fixes allocator with work on data to shape edges

Since there is a new relationship between data objects (some data objects may contain the shape of another data object), the allocator must properly respect that. If 2 data objects are connected in such a way, they represent a single entity (a dynamic data object) and should have the same lifetime.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
2020-05-31 13:17:36 +03:00
emmanuelattia-philips
bad5bb30a3 Added ie_core_read_network_from_memory to the C API ie_bridge. (#674)
* * Added ie_core_read_network_from_memory to the C ie_bridge.

* Added size argument for xml_content, fixed const correctness of the weight_blob, fixed unit test

* * Removed debug message

* Changed variables names from model_xxx to weights_xxx to be more consistent with the argument name of the tested function.

* Added a description for xml_content_size in ie_core_read_network_from_memory.

* * xml_content is now passed as uint8_t
* reading function factorized in the unit-test
2020-05-31 02:25:39 +03:00
Ilya Lavrenov
3d42871523 Added dependency on ONNX reader (#693) 2020-05-30 15:15:20 +03:00
Denis Orlov
9af51a165f [GNA] Workaround support for callbacks (#591) 2020-05-30 00:43:42 +03:00
Edward Shogulin
e2729b87f3 [LPT] Convolution regression tests (#543)
* [LPT] Base test infrastructure extending & Convolution test

* [LPT] LPT test infrastructure refactoring
2020-05-29 22:56:58 +03:00
Anastasia Kuporosova
3ef1a26174 [IE TOOLS] Use input_info in python benchmark app (#660) 2020-05-29 21:28:17 +03:00
Anastasia Kuporosova
cbad43f3a5 [Python API] Fix PreProcessInfo tests (#690) 2020-05-29 21:20:16 +03:00
Vladimir Gavrilov
3a24eb6a62 MO fails generating IR from XLNET model due to a bug in the transformation ConvertGroupedStridedSlice (#625)
* Small fix in the transformation ConvertGroupedStridedSlice. Now VariadicSplit is generated only in the case when node has at least 2 output nodes.

* Added unittests for the case when there is only one StridedSlice.
2020-05-29 21:01:09 +03:00
Ilya Churaev
963f55a189 Fixed CODEOWNERS paths (#684) 2020-05-29 20:57:32 +03:00
Vladimir Paramuzov
f7052a107d [IE CLDNN] Optimized FQ kernel in fsv16 layout (#573)
- Optimized FQ kernel in fsv16 layout. Enabled scaleshift transform for FP16 precision
- Disabled activation_opt kernel with fused ops in some cases
2020-05-29 20:10:30 +03:00
Evgenya Stepyreva
6cfa77223e [ nG ] Added F16 folding support (#686) 2020-05-29 19:09:01 +03:00
Ilya Churaev
11bd4f8a42 Do not use ONNX reader if ONNX importer was disabled (#683) 2020-05-29 17:46:40 +03:00
Anna Khakimova
be3b711972 Pre-processing(GAPI): AVX2/AVX512 implementation of 3C/4C Resize via universal intrinsics. (#612) 2020-05-29 15:44:12 +03:00
Ilya Lavrenov
011128cb54 Python: Fixed installation rules to install additional .so files generated from .pyx (#676) 2020-05-29 14:45:59 +03:00
Katarzyna Mitrus
5f8f9ec108 [nGraph] Reorder nGraph LSTMSequence inputs and outputs dimensions (#560)
* Reorder nGraph LSTMSequence input/outpt dimensions

* Update nGraph pythonAPI for LSTMSequence

* Reorder axes in ONNX importer LSTM

* Tests update

* Fix clang warning

* Use opset3 namespace

* Style apply

* Tests update

* Use opset1  namespace

* Remove usage of  GetOutputElement in ONNX importer LSTM

* Remove opset0 header

* Use Node::output()
2020-05-29 14:29:18 +03:00
Ivan Tikhonov
a4f13ae9fe fix constant folding of Concat op (#675) 2020-05-29 14:09:20 +03:00
Artyom Anokhov
09192b804e [OpenVINO scripts] Fixed *.sh files index from 644 to 755 (#664)
* Fixed *.sh files index from 644 to 755

* Added convert.py executable permission
2020-05-29 13:50:17 +03:00
Gladilov, Gleb
67d733d5a8 Enables VPU maintainers notification in case of PR to VPU related folders and files (#667) 2020-05-29 09:32:10 +03:00
Evgenya Stepyreva
e290b14ab1 [ MO Interpolate ] Fixing broken model reshape-ability (#619) 2020-05-29 09:15:47 +03:00
Evgenya Stepyreva
5cc8114322 [ MO: CVS-32286 ] IdentityN fix (#668) 2020-05-29 09:11:22 +03:00
Ilya Churaev
e51e1682ca Enabled Unit tests and remove IReaderPtr (#653)
* Enabled Unit tests and remove IReaderPtr

* Fixed unicode tests for Windows

* Fixed typo
2020-05-28 22:40:20 +03:00
Andrey Somsikov
5f6999ed7e Remove Safety dependency (#627)
Safety tool should be isolated from the environment it is validating:
https://github.com/pyupio/safety/security/advisories/GHSA-7q25-qrjw-6fg2

Suggesting a Docker solution by default.
2020-05-28 18:31:10 +03:00
Gleb Kazantaev
bb41994f56 Removed StridedSlice to StridedSliceIE transformation (#661) 2020-05-28 18:27:54 +03:00
Vladimir Gavrilov
33aca7d2c4 SplitConcatPairToInterpolate inserts Interpolate when input is 2D (#596)
* SplitConcatPairToInterpolate transformation was moved to middle stage and is applied only for 4D and 5D inputs.
2020-05-28 18:08:24 +03:00
Andrew Bakalin
77162bf8ee [VPU][Tests] Fix sanitizer issue in unit tests (#630) 2020-05-28 18:01:56 +03:00
Irina Efode
23f41213bb [IE TESTS] MOVE plugin tests (#659) 2020-05-28 17:22:19 +03:00
Gleb Kazantaev
b731ce13d8 Fixed NMSIE shape infer function (#648) 2020-05-28 16:45:48 +03:00
Evgeny Lazarev
0efe474342 Fixes for Mask-RCNN conversion (#654)
* Fixed ONNX Mask-RCNN conversion

* Fixed validate_and_infer_types for NMS ops: added check for number of connected inputs

* Updated NMS ops to properly handle optional input with index 2

* Fixed typo in the implementation
2020-05-28 14:31:42 +03:00
Evgenya Stepyreva
ec5c9db932 [ MO ] Memory usage (#657) 2020-05-28 14:00:42 +03:00
Anton Zaytsev
00b53d6c33 [IE TESTS] Move Config behavior tests (#615)
* [ci-skip][IE TESTS] move config test

* [ci-skip][IE TESTS] fix config
2020-05-28 13:55:37 +03:00
Anton Zaytsev
25d36568f8 [IE TESTS] Move ExecGraphInfoTests (#617)
* [ci-skip][IE TESTS] move ExecGraph test

* [ci-skip][IE TESTS] fix

* [ci-skip][IE TESTS] fix codestyle

Co-authored-by: Zaytsev, Anton <antonzay@intel.com>
2020-05-28 13:48:16 +03:00
Irina Efode
246790f264 [IE TESTS] Move unit tests to the new infra (#641) 2020-05-28 12:33:56 +03:00
Roman Kazantsev
958e425775 Implement Bucketize in MO and MKLDNN for opset3 (#583)
This operation is used for Wide and Deep Model

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2020-05-28 11:11:07 +03:00
Anastasia Kuporosova
e025f1464b [Python API] Add InputInfo and PreProcessInfo (#637) 2020-05-28 10:55:11 +03:00
Michał Karzyński
125dd89c01 Remove Runtime classes from nGraph Python API (#569) 2020-05-28 09:50:57 +02:00
Gleb Kazantaev
f276b5fbb4 Updated StridedSlice to StridedSliceIE conversion to support dynamic shapes (#621)
* Updated ConvertStridedSliceToStridedSliceIE transformation to support dynamic shapes

* Fixed stridesluce to crop transform not to fail with dynamic shapes
2020-05-28 01:14:12 +03:00
Edward Shogulin
57da5ddab8 [LPT] Concat complex graph support (#527)
* [LPT] [Tests] LowPrecisionTransformations base test extending

* [LPT] Concat complex (neighbor) graph support

* [LPT] Multichannel concat: scale per channel support

* [LPT] test improvements

* [TEST] tests infrastructure improvements
2020-05-27 21:53:50 +03:00
Mikołaj Życzyński
e734377590 [IE CLDNN] Grouped convolution bug fix (#572)
Fixes a bug in grouped convolution caused by a wrong weights layout in the SetDefault() method
2020-05-27 21:19:49 +03:00
Ilya Churaev
c75fd4db92 Removed CI and docker scripts (#622) 2020-05-27 20:58:03 +03:00
Alexander Zhogov
79b780413f Azure CI: Add timeouts, *smoke* filter (#574) 2020-05-27 19:47:23 +03:00
Alexander Zhogov
93333780c7 CODEOWNERS: Add link to help, and small fix 2020-05-27 18:50:47 +03:00
Ilya Churaev
3c718809d3 Added ONNX reader for the OpenVINO (#532)
* Added ONNX reader for the OpenVINO

* Fixed comments

* Fixed comments

* Fixed message

* Fixed memory consumption

* Revert IReaderPtr

* Fixed Myriad tests

* Fixed comment

* Renamed inference_engine_ir_readers to inference_engine_ir_reader
2020-05-27 18:37:19 +03:00
Gleb Kazantaev
d5434a036e CODEOWNERS: Added nGraph/Transformations/nGarphTests/IECore (#633) 2020-05-27 18:30:14 +03:00
Irina Efode
7fcb12603e [IE TESTS][IE CMAKE] Fix download 'testdata' repo via HTTPS (#597) 2020-05-27 16:20:09 +03:00
Gorokhov Dmitriy
c5124763db [CPU] Added quantization post op to the list of supported by FP32 DW Conv (#592) 2020-05-27 16:00:13 +03:00
Nikolay Shchegolev
27e8580b7d [CPU] ExtractImagePatches operation (#575) 2020-05-27 15:46:49 +03:00
Andrew Bakalin
fd1cc08cd8 [VPU][GT] Add convert shape notation pass (#559)
* [VPU] Update firmware

* [VPU][GT] Adjust allocator to deal with undeallocated shapes

* [VPU][Tests] Adjust NonZero tests and references

* [VPU][Tests] Add unit tests for pass

* [VPU][GT] Adjust previous unit tests

* [VPU][GT] Introduce convertShapeNotation pass

* [VPU][GT] Review fixes

* [VPU] Change dims order in dynamic output
2020-05-27 15:35:28 +03:00
Pavel Esir
e337350cc1 Fix skipping incorrect names in scale/mean values (#535)
* Fix skipping incorrect names in scale/mean values

* removed inappropriate comment in cli_parser.py
2020-05-27 14:53:50 +03:00
Evgeny Latkin
d24132912e ICV: fix Scatter layers: fix validators (#541)
* ICV: fix Scatter layers: fix validators

* ICV: fix Scatter layers: enable 0D for `axis`

* Revert "ICV: fix Scatter layers: enable 0D for `axis`"

This reverts commit 82da24b989678061a585a5c7ffd7d5dab10f5edc.

* ICV: fix Scatter layers: test, fix CNNNetworkImpl
2020-05-27 13:14:46 +03:00
Vladislav Vinogradov
946ed119c8 [IE CMAKE] Fix OpenBLAS dependency handling for Yocto ARM64 platform (#562)
Use `THIRDPARTY_SERVER_PATH` variable to override remote artifacts path.
2020-05-27 13:06:20 +03:00
Michał Karzyński
dcdeb34c8f CODEOWNERS: Add openvino-ngraph-maintainers (#628) 2020-05-27 12:50:18 +03:00
Konstantin Satunin
bfe416704d Change region of VMSS agents (#595) 2020-05-27 12:13:52 +03:00
Irina Efode
44cd77f54b [IE TESTS] Move old IE Unit tests to the new infra (#605) 2020-05-27 11:53:00 +03:00
Egor Churaev
31fe146539 [IE CLDNN] Implement CumSum operation (#533)
CumSum performs cumulative summation of the input elements along the given axis.

Details:
By default, it does the sum inclusively, meaning the first element is copied as is. Through an "exclusive" attribute, this behavior can be changed to exclude the first element. It can also perform summation in the opposite direction of the axis; for that, set the "reverse" attribute to true.

JIRA: 29994
2020-05-27 11:47:16 +03:00
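A small sketch of the 1D semantics described above (plain C++ for illustration, not the clDNN kernel itself):

    #include <vector>

    // Cumulative sum over a 1D vector: inclusive by default, optionally exclusive
    // (the running sum is written before adding the current element) and/or reversed.
    std::vector<float> cumsum_1d(const std::vector<float>& in, bool exclusive, bool reverse) {
        const int n = static_cast<int>(in.size());
        std::vector<float> out(n, 0.0f);
        float acc = 0.0f;
        for (int i = 0; i < n; ++i) {
            const int idx = reverse ? n - 1 - i : i;  // walk along or against the axis
            if (exclusive) {
                out[idx] = acc;      // exclude the current element
                acc += in[idx];
            } else {
                acc += in[idx];
                out[idx] = acc;      // include the current element
            }
        }
        return out;
    }
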
Nikita Kudriavtsev
2012d084f2 [IE Myriad] "printf" methods were replaced with mvLog (#552) 2020-05-27 11:45:34 +03:00
Andrew Bakalin
d337b4ed97 [VPU][GT] Refine edges methods (#550)
* [VPU][GT] Extract order manipulation into separate methods

* [VPU][GT] Rename data -> dependency

* [VPU][GT] Extend unit tests

* [VPU][GT] Introduce replacement and removal methods for StageDependency

* [VPU][GT] Update DataToShape connection methods
2020-05-27 11:14:02 +03:00
Evgenya Stepyreva
5c2eb05990 [ MO ONNX ] Resize-11 clear error message (#620)
* Small refactoring of extractors

* [ MO ] Throwing an exception while extracting Resize-11 which is not supported
2020-05-27 08:09:15 +03:00
Konrad Dobros
d3ea03bbfc [IE CLDNN] Enable int8 activation for fsv16 format (#516)
This change enables int8/uint8 standalone activation to use optimized
block format (b_fs_yx_fsv16). This should eliminate cases where such
activation had reorders before and after.

Support for this is already provided by activation_kernel_ref implementation.

Related JIRA: CVS-28494
2020-05-27 05:37:38 +03:00
Gleb Kazantaev
6788153ba9 Updated convert_nms_to_nms_ie transformation to support dynamic shapes (#614) 2020-05-27 00:38:25 +03:00
Gleb Kazantaev
851f64946a Updated ConvertGatherToGatherIE transformation to support dynamic shapes (#611) 2020-05-27 00:38:04 +03:00
Evgeny Lazarev
c1625743df Change Elu to a regular op since decomposition works extremely slowly (#582)
* Moved the Elu operation from Fused ops to regular ones because the decomposition works extremely slowly.

* Added reference implementation for the Elu op
2020-05-26 21:59:08 +03:00
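
For context, Elu is a simple element-wise activation, so a reference implementation is straightforward; a minimal NumPy sketch of what such a reference computes (illustrative only, not the actual nGraph code):

```python
import numpy as np

def elu(x, alpha=1.0):
    # Elu(x) = x for x > 0, alpha * (exp(x) - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(elu(np.array([-2.0, 0.0, 3.0])))  # ~[-0.865, 0.0, 3.0]
```
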
Evgenya Stepyreva
73f3b7c8fc [ MO ONNX ] TopK-1/10/11 proper extracting (#600) 2020-05-26 21:53:24 +03:00
Vitaliy Urusovskij
4a44f84dab [Stress] Updated test_configs with new path to OMZ mtcnn models (#602) 2020-05-26 20:11:46 +03:00
Ilya Lavrenov
bb039adef8 Fixed compilation with clang-10 + xcode (#521) 2020-05-26 17:17:36 +03:00
Shashwat Dalakoti
4943a954c7 Updated requirements.txt (#593)
Alignment with the requirements_tf.txt file
2020-05-26 16:19:43 +03:00
JunX
7595512d1f Fix issue: log prints wrong original image shape (#581) 2020-05-26 14:27:14 +03:00
Irina Efode
c3aa866a33 [IE CMAKE] FIX PATHS (#553)
* [IE CMAKE] FIX PATHS

* Fix problems
2020-05-26 11:57:02 +03:00
Ilya Churaev
42a8364cb6 Disable nGraph tests if ENABLE_TESTS=OFF (#579) 2020-05-26 11:51:47 +03:00
Gleb Kazantaev
d3764a7563 Updated Mul->Add conversion to support dynamic shapes (#512)
* Updated Mul Add conversion to support dynamic shapes

* Keep changes

* Fix for cases when eltwise performs broadcasting via Constant

* Added comments;Fixed eltwise shape infer; Updated tests
2020-05-26 10:24:52 +03:00
Roman Donchenko
e835a4cf58 MO: Flush after dumping the arguments to stdout (#570)
When stdout is not a terminal, Python will buffer it by default. This
means that a consumer of MO's output will not see the argument information
until the buffer is flushed, which will normally only happen once MO
finishes (which might take a while).

Flushing stdout explicitly allows the consumer to see this info as soon
as it's printed.
2020-05-26 07:44:25 +03:00
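
A minimal sketch of the fix described above, assuming a hypothetical `print_argv_info` helper (the real change lives in the Model Optimizer sources):

```python
import sys

def print_argv_info(argv):
    # When stdout is a pipe rather than a terminal, Python block-buffers it,
    # so a consumer would not see this line until the process exits or the
    # buffer fills. An explicit flush makes the info visible immediately.
    print('Model Optimizer arguments: {}'.format(' '.join(argv)))
    sys.stdout.flush()

if __name__ == '__main__':
    print_argv_info(sys.argv[1:])
```
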
Gleb Kazantaev
d3923f2ce0 Update TopKIE operation and transform to support dynamic shapes (#526)
* Update TopKIE operation and transform to support dynamic shapes

* Fix TopKIE shape infer

* Updated TopKIE infer function

* Removed index_element_type; replaced switch with as_string<> method

* Fixed ieFuncTests

* Fixed convert_topk transformation

* Updated convert_topk transformations

* ngraph::copy_runtime_info(topk, new_ops);
2020-05-26 01:19:38 +03:00
Irina Efode
c6e03d73d8 [IE TESTS] Move old IE unit tests to the new infra (#544)
* [IE TESTS] Move ie_blob_proxy tests

* [IE TESTS] Move network serializer tests

* [IE TESTS] Move CNNNetwork tests to the IE func

* [IE TEST] Fix deprecation warnings

* Fix comments
2020-05-25 23:28:59 +03:00
Vladimir Paramuzov
0b23215b72 CODEOWNERS: added cpu/gpu developers teams (#540) 2020-05-25 21:54:54 +03:00
Konstantin Satunin
2f9fd74151 Use compute optimized VMs for CI (#567) 2020-05-25 21:31:57 +03:00
Maxim Vafin
8c8629a4af Support ONNX Clamp-11 (#538) 2020-05-25 19:59:07 +03:00
Ilya Churaev
04bb8ab51d Added caseless check for enum names (#534)
* Added caseless check for enum names

* Added <algorithm> header
2020-05-25 16:23:55 +03:00
Nikita Kudriavtsev
74e8b54ce3 [IE Myriad] Correct destruction order in functional tests with DISABLE_PLUGIN_CACHE env. variable (#542) 2020-05-25 15:45:59 +03:00
Evgenya Stepyreva
b6a05c232e [ MO TF ] IdentityN support (#529) 2020-05-25 10:52:58 +03:00
Alexander Zhogov
507c06c8bc Azure CI: Enable cpuFuncTests on Windows 2020-05-23 01:29:36 +03:00
Alexander Zhogov
244f4e9fe7 CODEOWNERS: Fix 2020-05-23 01:27:53 +03:00
Alexander Zhogov
43fdf32729 Fix MO CI job name (#520) 2020-05-23 00:24:05 +03:00
Alexander Zhogov
20c1755efc Update public CI (#514)
* Update public CI

* Add MO test check

* Disable cpuFuncTests on Windows
2020-05-22 23:34:26 +03:00
Alexey Suhov
0064c299c3 add plugin template (#515) 2020-05-22 22:34:00 +03:00
Irina Efode
2e3928071f Update CODEOWNERS using openvino-ie-tests-maintainers group (#519) 2020-05-22 22:17:06 +03:00
Irina Efode
f1aa573b79 Update CODEOWNERS (#518) 2020-05-22 21:37:03 +03:00
Irina Efode
acc311e6f9 [IE TESTS] Fix win func test issue (#508) 2020-05-22 21:19:28 +03:00
Evgeny Talanin
d006030ad3 Update codeowners 1 (#517)
* Fine-grained groups to CODEOWNERS at day 1

* Fix

* Fix ie-maintainers

Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2020-05-22 21:18:54 +03:00
Ilya Lavrenov
fc899e6ceb Remove test artifact (#511) 2020-05-22 20:02:00 +03:00
Alexander Zhogov
a3d482035e Azure: Update job names, add cpuFuncTests (#509) 2020-05-22 17:47:05 +03:00
Alexey Suhov
ca9a78874a Remove Dimension::size_t and callers
(cherry-pick master commit 72fa20942a3f135ea2e324f47dd401506a913876)
2020-05-22 11:17:20 +03:00
azhogov
b8611139ca Update job name 2020-05-22 10:23:59 +03:00
Alexander Zhogov
6e2cfbca0c CODEOWNERS: Add Jenkinsfile 2020-05-22 10:09:47 +03:00
Alexander Zhogov
b36d0df477 CODEOWNERS: add tools 2020-05-22 10:06:33 +03:00
Alexey Suhov
ccb7438803 publish master branch snapshot, revision ea98a886d925eb152931aab13856e68037665562 2020-05-22 03:42:00 +03:00
Alexey Suhov
deb008a26f publish master branch snapshot, revision 8d31237e2c3f673cbb0f0ba110fc10f5cce1d2bb 2020-05-22 02:23:12 +03:00
Alexey Suhov
eab7ef4895 add submodules for mkl-dnn, gflags and gtest 2020-05-21 23:00:55 +03:00
Konstantin Satunin
778063e5cb fixed latest release 2020-05-21 17:27:44 +03:00
Alexey Suhov
d222c99ca7 add speech demo 2020-05-21 17:14:03 +03:00
Alexey Suhov
29d24c613a move dependencies to https://download.01.org/opencv/master 2020-05-21 15:00:31 +03:00
Alexey Suhov
f1b7a7292b update TBB and VPU binary dependencies 2020-05-20 22:02:34 +03:00
azhogov
ec2ca3e54f Azure: disable IE_Lin 2020-05-20 13:15:07 +03:00
Alexander Zhogov
6d56e824d2 Azure Pipelines: Set -j12 for Lin_self 2020-05-20 12:05:04 +03:00
Konstantin Satunin
7566e8202f Test Ubuntu 1804 VMSS 2020-05-20 11:46:09 +03:00
Alexey Suhov
f30dcc218c publish master branch snapshot, revision 9df5eb1f84e13a35720a918f88324561222ab114 2020-05-20 01:13:06 +03:00
Alexey Suhov
3ad0e4e434 remove ngraph submodule 2020-05-20 00:20:33 +03:00
Alexander Zhogov
4893f27fb9 Update How to Contribute 2020-05-19 19:08:14 +03:00
Alexander Zhogov
12f0fc72db Create CONTRIBUTING.md 2020-05-19 19:04:27 +03:00
Alexander Zhogov
6dd7ce89af Update CODEOWNERS 2020-05-19 14:42:35 +03:00
Andrey Babushkin
ed9cd78421 [ie/scripts/dependencies.bat] Fix unpack for opencv 2020-05-18 20:56:05 +03:00
Alexander Zhogov
eb57da7605 Azure Pipelines: Try -j12 for Win vmss 2020-05-18 20:38:27 +03:00
azhogov
5bc6a1e723 Azure: Try -j8 for Win vmss 2020-05-18 20:33:56 +03:00
Konstantin Satunin
76b3d2d47b Check VMSS 2020-05-18 20:10:29 +03:00
Alexey Suhov
dd0a195f2d fix BOM file for model optimizer 2020-05-18 19:20:16 +03:00
Alexey Suhov
3248c3002a Merge branch 'master' of https://github.com/openvinotoolkit/openvino 2020-05-18 18:29:42 +03:00
Alexander Zhogov
c78a575d23 Azure Pipelines: Check WIN_VMSS_VENV 2020-05-18 18:28:54 +03:00
Alexey Suhov
d22e5e8260 add execute permissions to run_code_checks.sh 2020-05-18 18:28:53 +03:00
Alexey Suhov
ba0a339888 publish master branch snapshot, revision 59af1853ca21ea08acf17b177da0b239753deb46 2020-05-18 17:21:58 +03:00
Alexander Zhogov
0a5a63bc0c Azure Pipelines: change pool for Win_self to WIN_VMSS 2020-05-18 16:02:40 +03:00
azhogov
32488a5c26 Fix VS2017 compilation issue: Intermediate channels count type changed to size_t
(cherry-pick master 0f6155ac3616fb2a7b51cfaddfdad1cc189f968d)
2020-05-18 13:17:10 +03:00
Alexander Zhogov
8081638bb5 Update README.md links 2020-05-16 10:49:12 +03:00
Alexander Zhogov
c50d41826d Azure Pipelines: disable nGraph GPU UT, LTO, Mac crashed UT 2020-05-15 19:16:28 +03:00
Alexander Zhogov
7b5887afba Azure Pipelines: disable nGraph GPU UT 2020-05-15 17:16:04 +03:00
Alexander Zhogov
54bb6b057f Create CODEOWNERS 2020-05-15 11:41:46 +03:00
Alexey Suhov
645641e87d add execute permissions to get_testdata.py 2020-05-14 23:17:54 +03:00
Alexey Suhov
3d63b13ba5 Revert LTO on Windows 2020-05-14 16:30:29 +03:00
Alexey Suhov
5b428f0655 publish master branch snapshot, revision 49482ae3bea0cbaa07474f86f36db11943142687 2020-05-13 21:12:22 +03:00
Alexander Zhogov
9d6501e9a6 Azure Pipelines: exclude failed Mac test fix 2020-05-11 14:59:09 +03:00
Alexander Zhogov
5b07298559 Azure Pipelines: exclude failed Mac test fix 2020-05-11 12:27:57 +03:00
Alexander Zhogov
3a0a7e79ff Azure Pipelines: exclude failed Mac test 2020-05-10 20:30:18 +03:00
Alexander Zhogov
11b84926d4 Azure Pipelines: exclude failed Mac test 2020-05-10 10:58:47 +03:00
Alexander Zhogov
112c58cc40 Azure Pipelines: set -j3 2020-05-08 18:40:14 +03:00
Andrey Babushkin
2430d96a3e Create .coveragerc 2020-05-06 23:38:42 +03:00
Alexey Suhov
64df940035 add scripts which download tests dependencies 2020-05-06 21:52:42 +03:00
Alexander Zhogov
67077e4aa7 Azure Pipelines: Update Mac options 2020-04-30 12:03:14 +03:00
Alexander Zhogov
5b009e9a38 Azure Pipelines: Fix test env on Windows 2020-04-30 01:30:33 +03:00
Andrey Babushkin
1bb752f1b8 Run pylint workflow on pull request events (#476)
Also remove pylint cmdline arguments to ignore import errors
2020-04-29 17:05:24 +03:00
Alexander Zhogov
5176df56dd Azure Pipelines: Fix test env 2020-04-29 15:34:04 +03:00
Alexander Zhogov
107d67e44d Azure Pipelines: exclude backend_api.config_unsupported from nGraph UT 2020-04-29 11:44:15 +03:00
Alexander Zhogov
833ff8b591 Add testdata to Azure Pipelines 2020-04-29 10:23:05 +03:00
Alexey Suhov
9314daeb3c fix NGRAPH_ONNX_IMPORT_ENABLE in cmake 2020-04-28 22:20:54 +03:00
Andrey Babushkin
aa2cb40f17 Add GitHub Actions workflow to run pylint against model optimizer (#474) 2020-04-28 18:41:12 +03:00
Alexander Zhogov
079e16c4d1 Update Azure Pipelines 2020-04-28 11:49:03 +03:00
Alexey Suhov
357cc7eb4c publish master branch snapshot, revision 0110d9c98fd7209589d06344f0d836f61d81f4b3 2020-04-27 21:21:29 +03:00
Alexander Zhogov
822692f526 Update Azure Pipelines 2020-04-17 12:37:19 +03:00
Alexander Zhogov
4ea5ac39fc Update Azure Pipelines 2020-04-16 20:52:03 +03:00
Alexey Suhov
67ac796715 Merge branch 'master' of https://github.com/opencv/dldt 2020-04-16 14:40:24 +03:00
Alexey Suhov
6300b1490d fixed BOM file for model optimizer 2020-04-16 14:38:57 +03:00
Alexander Zhogov
165c00fe6d Update Azure Pipelines 2020-04-16 13:54:47 +03:00
Alexander Zhogov
68bdd184ef Fix typo 2020-04-16 11:07:04 +03:00
Alexander Zhogov
56b67d7d1c Set up CI with Azure Pipelines 2020-04-16 11:02:27 +03:00
Alexey Suhov
ae03bda480 moved pylint configuration files 2020-04-15 21:46:27 +03:00
Alexey Suhov
127cbac5bc publish master branch snapshot, revision cdcab9d7ab48ffb0ee5629fabbfa06cb45debd9b 2020-04-15 19:01:57 +03:00
Alexey Suhov
95a57795dc Publishing 2020.2 content 2020-04-13 21:17:23 +03:00
Alexey Suhov
a347375d01 removed ie_rh_decoder.cmake from install target 2020-03-19 21:14:29 +03:00
Alexey Suhov
b2140c083a Publishing 2020.1 content 2020-02-11 22:48:49 +03:00
Alexey Suhov
949b74059f Merge pull request #296 from dkurt/patch-1
Do not build CMake from source
2020-02-10 21:08:40 +03:00
Alexey Suhov
651161be1c Merge pull request #378 from Danile71/2019
Fix error (libpng-0)
2020-02-06 19:38:41 +03:00
Daniel
f73852ea3d Fix error (libpng-0) 2020-02-06 16:07:14 +03:00
Alexey Suhov
b0c5accaf8 fixed link to Intel models and model downloader 2019-11-15 13:55:44 +03:00
Alexey Suhov
733dae46cc lower minimal cmake version to 3.5 2019-11-06 17:42:34 +03:00
Alexey Suhov
fe3f978b98 Merge pull request #309 from asuhov/2019-r31
Publishing 2019 R3.1 content
2019-10-28 21:34:43 +03:00
Alexey Suhov
6dfc778940 Publishing 2019 R3.1 content 2019-10-28 21:25:18 +03:00
Alexey Suhov
1798ac0d26 turned off cpplint by default 2019-10-24 17:39:17 +03:00
Dmitry Kurtaev
298900790c Do not build CMake from source 2019-10-22 10:32:43 +03:00
9768 changed files with 1275232 additions and 665239 deletions

6
.gitattributes vendored

@@ -63,3 +63,9 @@
#*.PDF diff=astextplain
#*.rtf diff=astextplain
#*.RTF diff=astextplain
*.PNG filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.vsdx filter=lfs diff=lfs merge=lfs -text

55
.github/workflows/mo.yml vendored Normal file

@@ -0,0 +1,55 @@
name: MO
on:
push:
paths:
- 'model-optimizer/**'
pull_request:
paths:
- 'model-optimizer/**'
jobs:
Pylint-UT:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: 3.6
- name: Cache pip
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('model-optimizer/requirements*.txt') }}
restore-keys: |
${{ runner.os }}-pip-
${{ runner.os }}-
# tensorflow 1.15 causes module import
# errors, most likely due to https://github.com/PyCQA/pylint/issues/2603
# for tensorflow.core.framework and tensorflow.contrib
- name: Install dependencies
run: |
python -m pip install --upgrade pip setuptools
# For Pylint
pip install tensorflow==1.14.0 tensorboard==1.14.0 tensorflow-estimator==1.14.0
# For UT
pip install unittest-xml-reporting==3.0.2
# MO requirements
pip install -r requirements.txt
pip install -r requirements_dev.txt
working-directory: model-optimizer
- name: Pylint
run: pylint -d C,R,W mo/ mo.py extensions/
working-directory: model-optimizer
- name: UT
run: |
export PYTHONPATH=$PYTHONPATH:`pwd`
export MO_ROOT=`pwd`
env
mkdir ../mo-ut-logs
python3 -m xmlrunner discover -p *_test.py --output=../mo-ut-logs
working-directory: model-optimizer

383
.gitignore vendored

@@ -1,342 +1,71 @@
## Ignore Visual Studio temporary files, build results, and
## files generated by popular Visual Studio add-ons.
# build/artifact dirs
_*
# but ensure we don't skip __init__.py
!__init__.py
# User-specific files
*.suo
*.user
*.userosscache
*.sln.docstates
# User-specific files (MonoDevelop/Xamarin Studio)
*.userprefs
# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
[Xx]64/
[Xx]86/
[Bb]uild/
bld/
[Bb]in/
[Oo]bj/
# PY.TEST
*.pyc
tests/integration/report.html
tests/integration/report.xml
tests/integration/assets/
tests/integration/__pycache__/
# Visual Studio 2015 cache/options directory
.vs/
# Uncomment if you have tasks that create the project's static files in wwwroot
#wwwroot/
# MSTest test Results
[Tt]est[Rr]esult*/
[Bb]uild[Ll]og.*
# NUNIT
*.VisualState.xml
TestResult.xml
# Build Results of an ATL Project
[Dd]ebugPS/
[Rr]eleasePS/
dlldata.c
# DNX
project.lock.json
artifacts/
*_i.c
*_p.c
*_i.h
*.ilk
*.meta
*.obj
*.pch
*.pdb
*.pgc
*.pgd
*.rsp
*.sbr
*.tlb
*.tli
*.tlh
*.tmp
*.tmp_proj
*.log
*.vspscc
*.vssscc
.builds
*.pidb
*.svclog
*.scc
# Chutzpah Test files
_Chutzpah*
# Visual C++ cache files
ipch/
*.aps
*.ncb
*.opendb
*.opensdf
*.sdf
*.cachefile
*.VC.db
# Visual Studio profiler
*.psess
*.vsp
*.vspx
*.sap
# TFS 2012 Local Workspace
$tf/
# Guidance Automation Toolkit
*.gpState
# ReSharper is a .NET coding add-in
_ReSharper*/
*.[Rr]e[Ss]harper
*.DotSettings.user
# JustCode is a .NET coding add-in
.JustCode
# TeamCity is a build add-in
_TeamCity*
# DotCover is a Code Coverage Tool
*.dotCover
# NCrunch
_NCrunch_*
.*crunch*.local.xml
nCrunchTemp_*
# MightyMoose
*.mm.*
AutoTest.Net/
# Web workbench (sass)
.sass-cache/
# Installshield output folder
[Ee]xpress/
# DocProject is a documentation generator add-in
DocProject/buildhelp/
DocProject/Help/*.HxT
DocProject/Help/*.HxC
DocProject/Help/*.hhc
DocProject/Help/*.hhk
DocProject/Help/*.hhp
DocProject/Help/Html2
DocProject/Help/html
# Click-Once directory
publish/
# Publish Web Output
*.[Pp]ublish.xml
*.azurePubxml
# TODO: Un-comment the next line if you do not want to checkin
# your web deploy settings because they may include unencrypted
# passwords
#*.pubxml
*.publishproj
# NuGet Packages
*.nupkg
# The packages folder can be ignored because of Package Restore
**/packages/*
# except build/, which is used as an MSBuild target.
!**/packages/build/
# Uncomment if necessary however generally it will be regenerated when needed
#!**/packages/repositories.config
# NuGet v3's project.json files produces more ignoreable files
*.nuget.props
*.nuget.targets
# Microsoft Azure Build Output
csx/
*.build.csdef
# Microsoft Azure Emulator
ecf/
rcf/
# Microsoft Azure ApplicationInsights config file
ApplicationInsights.config
# Windows Store app package directory
AppPackages/
BundleArtifacts/
# Visual Studio cache files
# files ending in .cache can be ignored
*.[Cc]ache
# but keep track of directories ending in .cache
!*.[Cc]ache/
# Others
ClientBin/
[Ss]tyle[Cc]op.*
~$*
*~
*.dbmdl
*.dbproj.schemaview
*.pfx
*.publishsettings
node_modules/
orleans.codegen.cs
# RIA/Silverlight projects
Generated_Code/
# Backup & report files from converting an old project file
# to a newer Visual Studio version. Backup files are not needed,
# because we have git ;-)
_UpgradeReport_Files/
Backup*/
UpgradeLog*.XML
UpgradeLog*.htm
# SQL Server files
*.mdf
*.ldf
# Business Intelligence projects
*.rdl.data
*.bim.layout
*.bim_*.settings
# Microsoft Fakes
FakesAssemblies/
# GhostDoc plugin setting file
*.GhostDoc.xml
# Target VS files:
vsx64
# Node.js Tools for Visual Studio
.ntvs_analysis.dat
# Visual Studio 6 build log
*.plg
# Visual Studio 6 workspace options file
*.opt
# Visual Studio LightSwitch build output
**/*.HTMLClient/GeneratedArtifacts
**/*.DesktopClient/GeneratedArtifacts
**/*.DesktopClient/ModelManifest.xml
**/*.Server/GeneratedArtifacts
**/*.Server/ModelManifest.xml
_Pvt_Extensions
# LightSwitch generated files
GeneratedArtifacts/
ModelManifest.xml
# Paket dependency manager
.paket/paket.exe
# FAKE - F# Make
.fake/
*.filters
/External
/Output
/InferenceEngineMain/models
/Test
/HTTPClient/*.a
/InferenceEngineMain/newModels
# developer tools
*.idea
.vscode
cmake-build-*
.DS_Store
# For IDEA
.idea/
VS/
Xcode/
temp/
report/
.kdev4/
*.kdev4
*.kate-swp
/lin-build
/win-build
/CMakeFiles
*.stamp
*.depend
*.vcxproj
*.sln
/CMakeCache.txt
.vimprj/
build_IA32/
.dir-locals.el
GTAGS
GPATH
GRTAGS
GSYMS
**/tags
compile_commands.json
service/dot-net-service/Output
**/sublime_build
/.project
.vscode/
/vsx32
/service/dot-net-service/.klocwork/DotNetService
cmake-build-*/
/lin64
.gdb_history
bin/
build/
.local_vimrc
.ycm_extra_conf.py
tags
.gdb_history
.vimspector.json
doc/
!ngraph/doc
docs/build_documentation/work_dir/
inference-engine/plugins/
inference-engine/temp
inference-engine/report
.repo/
docs/template_plugin/html/
CMakeLists.txt.user
docs/IE_PLUGIN_DG/html/
# from Model Optimizer repo
.idea
.project
.cproject
.pydevproject
.settings
/bin/
/gen/
*.project
*.cproject
*.pydevproject
*.settings
*/gen/
__pycache__
*.swp
/config.xml
# Python-specific
.env3
*.env3
*.pyc
# Tests-specific
.coverage
htmlcov
pylint_report.txt
pylint_report_comments.txt
# Documentation-generated
docs/build
docs/source/_static
docs/source/_templates
docs/source/generated/
*.coverage
*htmlcov
*pylint_report.txt
*pylint_report_comments.txt
# Artifacts
/*.bin
/*.xml
/*.json
/*.so
/*.txt
/*.mapping
/*.dat
/*.svg
/model-optimizer/*.bin
/model-optimizer/*.xml
/model-optimizer/*.json
/model-optimizer/*.so
/model-optimizer/*.txt
/model-optimizer/*.pb
/model-optimizer/*.pbtxt
/model-optimizer/!CMakeLists.txt
/model-optimizer/*.mapping
/model-optimizer/*.dat
/model-optimizer/*.svg
# ngraph
ngraph/src/CPackConfig.cmake
ngraph/src/CPackSourceConfig.cmake
ngraph/src/VERSION
ngraph/src/gtest/
ngraph/src/json/
ngraph/src/ngraphConfig.cmake
ngraph/src/ngraphConfigVersion.cmake
ngraph/src/protobuf/
ngraph/src/src/
ngraph/src/test/

14
.gitmodules vendored

@@ -2,7 +2,15 @@
path = inference-engine/thirdparty/ade
url = https://github.com/opencv/ade.git
ignore = dirty
[submodule "inference-engine/thirdparty/ngraph"]
path = inference-engine/thirdparty/ngraph
url = https://github.com/NervanaSystems/ngraph.git
[submodule "inference-engine/thirdparty/mkl-dnn"]
path = inference-engine/thirdparty/mkl-dnn
url = https://github.com/openvinotoolkit/oneDNN.git
ignore = dirty
[submodule "inference-engine/tests/ie_test_utils/common_test_utils/gtest"]
path = inference-engine/tests/ie_test_utils/common_test_utils/gtest
url = https://github.com/openvinotoolkit/googletest.git
ignore = dirty
[submodule "inference-engine/samples/thirdparty/gflags"]
path = inference-engine/samples/thirdparty/gflags
url = https://github.com/gflags/gflags.git
ignore = dirty

163
CMakeLists.txt Normal file

@@ -0,0 +1,163 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_policy(SET CMP0054 NEW)
# TODO: for make install / package we need to use the 3.13.3 version because
# it allows installing targets created outside of the current project
# See https://blog.kitware.com/cmake-3-13-0-available-for-download/
if (APPLE)
if(CMAKE_GENERATOR STREQUAL "Xcode")
# due to https://gitlab.kitware.com/cmake/cmake/issues/14254
cmake_minimum_required(VERSION 3.12.0 FATAL_ERROR)
else()
# due to https://cmake.org/cmake/help/v3.12/policy/CMP0068.html
cmake_minimum_required(VERSION 3.9 FATAL_ERROR)
endif()
else()
cmake_minimum_required(VERSION 3.7.2 FATAL_ERROR)
endif()
project(OpenVINO)
set(OpenVINO_MAIN_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR})
set(IE_MAIN_SOURCE_DIR ${OpenVINO_MAIN_SOURCE_DIR}/inference-engine)
list(APPEND CMAKE_MODULE_PATH "${OpenVINO_MAIN_SOURCE_DIR}/cmake")
include(CTest)
include(features)
# include developer package
include(developer_package)
# These options are shared with 3rdparty plugins
# by means of developer package
include(check_features)
include(dependencies)
# resolving dependencies for the project
message (STATUS "PROJECT ............................... " ${PROJECT_NAME})
message (STATUS "CMAKE_BINARY_DIR ...................... " ${CMAKE_BINARY_DIR})
message (STATUS "OpenVINO_MAIN_SOURCE_DIR .............. " ${OpenVINO_MAIN_SOURCE_DIR})
message (STATUS "IE_MAIN_SOURCE_DIR .................... " ${IE_MAIN_SOURCE_DIR})
message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
message (STATUS "CMAKE_C_COMPILER_ID ................... " ${CMAKE_C_COMPILER_ID})
message (STATUS "CMAKE_BUILD_TYPE ...................... " ${CMAKE_BUILD_TYPE})
# remove file with exported developer targets to force its regeneration
file(REMOVE "${CMAKE_BINARY_DIR}/targets_developer.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/targets.cmake")
function(build_ngraph)
function(ngraph_set option value)
if(NOT DEFINED ${option})
set(${option} ${value} CACHE BOOL "" FORCE)
endif()
endfunction()
set(NGRAPH_BUILD_DIR ${CMAKE_LIBRARY_OUTPUT_DIRECTORY} CACHE STRING "" FORCE)
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${OpenVINO_MAIN_SOURCE_DIR}/ngraph/cmake/Modules/")
if (ENABLE_SANITIZER)
ngraph_set(NGRAPH_ADDRESS_SANITIZER TRUE)
else ()
ngraph_set(NGRAPH_ADDRESS_SANITIZER FALSE)
endif ()
ngraph_set(NGRAPH_PYTHON_BUILD_ENABLE FALSE)
if (NOT ANDROID)
if(ENABLE_TESTS)
ngraph_set(NGRAPH_UNIT_TEST_ENABLE TRUE)
ngraph_set(NGRAPH_IE_ENABLE TRUE)
else()
ngraph_set(NGRAPH_UNIT_TEST_ENABLE FALSE)
ngraph_set(NGRAPH_IE_ENABLE FALSE)
endif()
ngraph_set(NGRAPH_ONNX_IMPORT_ENABLE TRUE)
else()
ngraph_set(NGRAPH_UNIT_TEST_ENABLE FALSE)
ngraph_set(NGRAPH_TEST_UTIL_ENABLE FALSE)
ngraph_set(NGRAPH_IE_ENABLE FALSE)
ngraph_set(NGRAPH_ONNX_IMPORT_ENABLE FALSE)
endif()
ngraph_set(NGRAPH_INTERPRETER_ENABLE TRUE)
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
ie_add_compiler_flags(-Wno-error=uninitialized -Wno-error=literal-conversion)
elseif(UNIX)
ie_add_compiler_flags(-Wno-error=maybe-uninitialized -Wno-error=return-type -fPIC)
endif()
if(ANDROID)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=defaulted-function-deleted -Wno-error=unused-command-line-argument")
endif()
# WA for GCC 7.0
if (UNIX)
ie_add_compiler_flags(-Wno-error=return-type -Wno-undef)
elseif(WIN32)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4308 /wd4146 /wd4703 /wd4244 /wd4819")
endif()
if(ENABLE_LTO)
ie_enable_lto()
endif()
ie_cpack_add_component(ngraph)
set(SDL_cmake_included ON)
# set(NGRAPH_COMPONENT_PREFIX "deployment_tools/ngraph/")
add_subdirectory(ngraph)
set(NGRAPH_LIBRARIES ngraph PARENT_SCOPE)
endfunction()
build_ngraph()
add_subdirectory(inference-engine)
add_subdirectory(docs)
# cpack
# install setupvars
ie_cpack_add_component(setupvars REQUIRED)
if(UNIX)
install(PROGRAMS scripts/setupvars/setupvars.sh
DESTINATION bin
COMPONENT setupvars)
elseif(WIN32)
install(PROGRAMS scripts/setupvars/setupvars.bat
DESTINATION bin
COMPONENT setupvars)
endif()
# install install_dependencies
if(UNIX)
ie_cpack_add_component(install_dependencies REQUIRED)
install(DIRECTORY scripts/install_dependencies/
DESTINATION install_dependencies
COMPONENT install_dependencies)
endif()
# install files for demo
ie_cpack_add_component(demo_scripts REQUIRED DEPENDS core)
if(UNIX)
install(DIRECTORY scripts/demo/
DESTINATION deployment_tools/demo
COMPONENT demo_scripts
USE_SOURCE_PERMISSIONS
PATTERN *.bat EXCLUDE)
elseif(WIN32)
install(DIRECTORY scripts/demo/
DESTINATION deployment_tools/demo
COMPONENT demo_scripts
USE_SOURCE_PERMISSIONS
PATTERN *.sh EXCLUDE)
endif()
ie_cpack(${IE_CPACK_COMPONENTS_ALL})

66
CODEOWNERS Normal file

@@ -0,0 +1,66 @@
# See help here: https://help.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners
* @openvinotoolkit/openvino-maintainers
CODEOWNERS @openvinotoolkit/openvino-admins @openvinotoolkit/openvino-maintainers
# CI:
Jenkinsfile @openvinotoolkit/openvino-admins
azure-pipelines.yml @openvinotoolkit/openvino-admins
/.github/ @openvinotoolkit/openvino-admins
# QA Tests:
/tests/ @openvinotoolkit/openvino-tests-maintainers
# IE Core:
/inference-engine/ @openvinotoolkit/openvino-ie-maintainers
/inference-engine/src/transformations/ @GlebKazantaev @ichuraev
/inference-engine/src/legacy_api/ @openvinotoolkit/openvino-ngraph-maintainers
/inference-engine/src/readers/ @openvinotoolkit/openvino-ngraph-maintainers
# IE CPU:
/inference-engine/src/mkldnn_plugin/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
/inference-engine/src/low_precision_transformations/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
/inference-engine/thirdparty/mkl-dnn/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
# IE GPU:
/inference-engine/src/cldnn_engine/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/inference-engine/include/gpu/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/inference-engine/include/cldnn/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
/inference-engine/thirdparty/clDNN/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
# IE VPU:
/inference-engine/src/vpu/ @openvinotoolkit/openvino-ie-vpu-maintainers
/inference-engine/include/vpu/ @openvinotoolkit/openvino-ie-vpu-maintainers
/inference-engine/thirdparty/movidius/ @openvinotoolkit/openvino-ie-vpu-maintainers
/inference-engine/tests_deprecated/unit/engines/vpu/ @openvinotoolkit/openvino-ie-vpu-maintainers @openvinotoolkit/openvino-ie-tests-maintainers
/inference-engine/tests_deprecated/functional/vpu/ @openvinotoolkit/openvino-ie-vpu-maintainers @openvinotoolkit/openvino-ie-tests-maintainers
/inference-engine/tests_deprecated/behavior/vpu/ @openvinotoolkit/openvino-ie-vpu-maintainers @openvinotoolkit/openvino-ie-tests-maintainers
/inference-engine/tests/functional/plugin/myriad/ @openvinotoolkit/openvino-ie-vpu-maintainers @openvinotoolkit/openvino-ie-tests-maintainers
/inference-engine/tests/unit/vpu/ @openvinotoolkit/openvino-ie-vpu-maintainers @openvinotoolkit/openvino-ie-tests-maintainers
/inference-engine/tests/unit/engines/vpu/ @openvinotoolkit/openvino-ie-vpu-maintainers @openvinotoolkit/openvino-ie-tests-maintainers
/inference-engine/tools/vpu/ @openvinotoolkit/openvino-ie-vpu-maintainers
/inference-engine/scripts/run_tests_myriad_multistick.sh @openvinotoolkit/openvino-ie-vpu-maintainers
# IE GNA:
/inference-engine/src/gna_plugin/ @openvinotoolkit/openvino-ie-gna-maintainers
/inference-engine/include/gna/ @openvinotoolkit/openvino-ie-gna-maintainers
# IE MULTI:
/inference-engine/src/multi_device/ @openvinotoolkit/openvino-ie-multi-maintainers
/inference-engine/include/multi-device/ @openvinotoolkit/openvino-ie-multi-maintainers
# IE Tests:
/inference-engine/tests/ @openvinotoolkit/openvino-ie-tests-maintainers
/inference-engine/tests_deprecated/ @openvinotoolkit/openvino-ie-tests-maintainers
/inference-engine/tests/functional/inference_engine/ngraph_reader/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ngraph-maintainers
/inference-engine/tests/functional/inference_engine/transformations/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ngraph-maintainers
# MO:
/model-optimizer/ @openvinotoolkit/openvino-mo-maintainers
# nGraph:
/ngraph/ @openvinotoolkit/openvino-ngraph-maintainers
# Tools
/tools/ @openvinotoolkit/openvino-tools-maintainers

18
CONTRIBUTING.md Normal file

@@ -0,0 +1,18 @@
# How to Contribute
We welcome community contributions to the OpenVINO™ repository.
If you have an idea how to improve the product, please share it
with us by doing the following steps:
* Make sure you can build the product and run all tests and samples with your patch
* In case of a larger feature, provide relevant unit tests and one or more samples
* Submit a pull request at https://github.com/openvinotoolkit/openvino/pulls
## OpenVINO™ Coding Style Guide
We basically use the Google style (https://google.github.io/styleguide/cppguide.html) with some exceptions:
* 4 spaces instead of 2 spaces for indentations
* Limitation of 160 symbols for the line length
* Exceptions are allowed
* Using-namespace directives are allowed in cpp files and prohibited in headers
* Underscore symbol before member in classes/structures
* thisStyleForFunctions()
* theSameStyleForVariables

INT8_WORKFLOW.md (deleted)

@@ -1,83 +0,0 @@
OpenVINO Int8 Workflow In a Nutshell
-----------------------------------
To operate with int8, all the data (weights, inputs, activations, etc.) should be carefully quantized. The quantization process is driven by:
* A normalization (or scaling) factor, determined by the range of the data
* A quantization level, which depends on whether the data is signed or unsigned, and on the destination precision.
OpenVINO supports two main sources of this information, and thus two main sources of the int8 models:
* Conversion of framework-quantized models. This approach relies on training for low precision and subsequent conversion of the resulting model with the [Model Optimizer](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) tool. It usually gives optimal accuracy and performance, but requires careful model re-training/fine-tuning. For inference, both normalization and quantization factors are then deduced fully from the model data (e.g. FakeQuantize layers) and no additional steps are required.
* Post-training quantization of floating point models with the [Calibration tool](https://docs.openvinotoolkit.org/latest_docs_IE_DG_Int8Inference.html#low_precision_8_bit_integer_inference_workflow). Like the approach described above, calibration is a fully offline, additional step that equips a model with (optional) int8 information. The approach is somewhat more universal, requiring just a floating point model and no retraining to leverage int8. Calibration is an iterative process of gathering _activation_ statistics such as histograms (for determining scaling parameters), applying the quantization parameters, and evaluating the resulting model accuracy to keep it as close to the original as possible. For _weights_, in contrast, the maximum absolute value m per output channel is found; the per-channel range is then [-m,m]. This calibration process trades performance against accuracy and results in a mixed-precision model that is a combination of fp32 (high accuracy) and int8 (high performance) layers.
Notice that OpenVINO assumes symmetrically quantized models (with respect to weights) and either symmetric (signed) or fully unsigned activations.
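As a rough illustration of the two quantities above, here is a minimal per-tensor symmetric quantization sketch in NumPy. This is an illustrative assumption only, not the actual OpenVINO or Calibration tool code (which, for weights, works per output channel and derives activation scales from collected statistics):
```python
import numpy as np

def quantize_symmetric(x, num_bits=8, signed=True):
    """Symmetric quantization: the scale comes from the range of the data,
    the quantization levels from the destination precision and signedness."""
    qmax = 2 ** (num_bits - 1) - 1 if signed else 2 ** num_bits - 1
    m = float(np.max(np.abs(x)))             # data range -> scaling factor
    scale = m / qmax if m > 0 else 1.0
    qmin = -qmax - 1 if signed else 0
    q = np.clip(np.round(x / scale), qmin, qmax)
    return q.astype(np.int8 if signed else np.uint8), scale

weights = np.array([-0.4, 0.1, 0.25, -0.05], dtype=np.float32)
q, scale = quantize_symmetric(weights)
reconstructed = q.astype(np.float32) * scale  # approximate dequantization
```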
Quantized Model Example
-----------------------------------
For the MLPerf 0.5 submission, the only directly converted quantized model is ssd-mobilenet from Habana ("ssd-mobilenet 300x300 symmetrically quantized finetuned"), referenced at https://github.com/mlperf/inference/tree/master/v0.5/classification_and_detection.
To convert the model, just call the [Model Optimizer](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). There are certain specifics for [converting TensorFlow Object Detection API models](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html). For example, the original pipeline.config is needed. For the symmetrically quantized model it is actually the same as for another Habana model ("ssd-mobilenet 300x300 quantized finetuned").
Conversion command-line is as follows:
```
$ python3 <OPENVINO_INSTALL_DIR/deployment_tools/model_optimizer/>mo.py
--input_model <path_to_model/>ssd_mobilenet_v1_quant_ft_no_zero_point_frozen_inference_graph.pb
--input_shape [1,300,300,3]
--reverse_input_channels
--tensorflow_use_custom_operations_config <OPENVINO_INSTALL_DIR/deployment_tools/model_optimizer/>extensions/front/tf/ssd_v2_support.json
--tensorflow_object_detection_api_pipeline_config <path_to_model/>pipeline.config
```
Model Calibration Example
-----------------------------------
To give an example of the [calibration workflow](https://docs.openvinotoolkit.org/latest/_inference_engine_tools_calibration_tool_README.html), let's consider ResNet-50 (v1.5) example ("resnet50-v1.5 tensorflowfp32 NHWC").
* First, the model is converted from the original framework format using the Model Optimizer tool. Since this is a classification (and not detection) model, the command line is really simple:
```
$ python3 <OPENVINO_INSTALL_DIR/deployment_tools/model_optimizer/> mo.py --input_model ./resnet50_v1.pb --input_shape [1,224,224,3] --reverse_input_channels
```
This outputs the model in Intermediate Representation (IR) format (*.xml and *.bin files). FP32 is the default precision
(use '--data_type FP16' to get an fp16 model instead, which is more GPU-friendly).
* Secondly, perform model calibration using the [Calibration tool](https://docs.openvinotoolkit.org/latest_docs_IE_DG_Int8Inference.html#low_precision_8_bit_integer_inference_workflow). The tool is framework-agnostic and accepts the model in IR format. Model calibration requires a validation dataset (to keep track of the accuracy during calibration). Currently, the calibration tool comes with example support for classification and object detection models on the ImageNet and VOC2007/COCO datasets respectively, with associated accuracy metrics. It is relatively straightforward to add other datasets and metrics.
Accuracy validation in turn comes via the [Accuracy Checker](https://github.com/opencv/open_model_zoo/tree/develop/tools/accuracy_checker/accuracy_checker/) tool.
For that, the dataset-specific annotations [are converted into the common format](https://github.com/opencv/open_model_zoo/tree/develop/tools/accuracy_checker/accuracy_checker/annotation_converters).
Specifically for ImageNet, which is required for ResNet, the command line is as follows:
```
$ convert_annotation imagenet --annotation_file <PATH_TO_IMAGES>/ILSVRC2012_val.txt --labels_file <PATH_TO_IMAGES>/synset_words.txt --has_background True
```
This outputs *.pickle and *.json files used in calibration via
[configuration files in YML](https://docs.openvinotoolkit.org/latest/_inference_engine_tools_calibration_tool_README.html).
Alternatively, you can specify the annotation conversion parameters in the config file and let the calibration tool call the 'convert_annotation' tool.
Similarly, the calibration tool can either accept the converted model as an IR, or take the original model directly and perform conversion on the fly.
Both ways are governed by the 'launchers' section of the config file.
Care must be taken with the configuration in general, as there are many items like pre-processing
(mean and scale values, RGB vs BGR), resizing (with and without crop, etc.), and so on, that can severely
affect the resulting accuracy. Notice that the pre-processing applied during calibration should match the pre-processing that is later used for inference.
Also, the pre-processing parameters (like mean/scale, or RGB-BGR conversion) can either be part of the Model Optimizer command line
(the 'mo_params' section of the config file), which bakes the input transformations directly _into the resulting model_,
or part of the 'preprocessing' section of the 'dataset'. The latter does not include the pre-processing in the model,
but applies it to _every loaded dataset image_ instead (before it is used within calibration).
The choice depends on your inference pipeline: if the pre-processing is explicitly performed in the code,
the model shouldn't include that, to avoid double pre-processing.
See example YML files for the MLPerf models in the 'example_calibration_files' folder.
The files define the original models and govern conversion to the IR, dataset annotation conversion,
and finally the calibration itself. You only have to patch the paths for your local machine.
*Notice that the pre-processing is not included in the model
(and is thus assumed to be applied to an input image before inference); see earlier in this section*.
Finally, the calibration command-line is as simple as:
```
$ python3 calibrate.py
-c <PATH_TO_CONFIG>/resnet_v1.5_50.yml
-M <PATH_TO_MODEL_OPTIMIZER>
-C <PATH_TO_OUTPUT_FP_IR>
--output_dir <PATH_TO_OUTPUT_I8_IR>
```
The resulting IR contains the original floating point data (which all OpenVINO device plugins should support) and (optional) int8 statistics, which some devices may ignore (if int8 is not supported on the device), falling back to the original model.

10
Jenkinsfile vendored Executable file

@@ -0,0 +1,10 @@
#!groovy
properties([
parameters([
booleanParam(defaultValue: true,
description: 'Cancel the rest of parallel stages if one of them fails and return status immediately',
name: 'failFast')
])
])
dldtPipelineEntrypoint(this)

README.md

@@ -1,43 +1,48 @@
# [OpenVINO™ Toolkit](https://01.org/openvinotoolkit) - Deep Learning Deployment Toolkit repository
[![Stable release](https://img.shields.io/badge/version-2019.R3-green.svg)](https://github.com/opencv/dldt/releases/tag/2019_R3)
[![Stable release](https://img.shields.io/badge/version-2020.4-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2020.4.0)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
This toolkit allows developers to deploy pre-trained deep learning models
through a high-level C++ Inference Engine API integrated with application logic.
This open source version includes two components, namely Model Optimizer and Inference Engine, as well as CPU, GPU and heterogeneous plugins to accelerate deep learning inferencing on Intel(R) CPUs and Intel(R) Processor Graphics. It supports pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo/) along with 100+ open source and public models in popular formats such as Caffe*, Tensorflow*, MXNet* and ONNX*.
For int8 workflow primer, please see INT8_WORKFLOW.md.
This open source version includes two components: namely [Model Optimizer] and
[Inference Engine], as well as CPU, GPU and heterogeneous plugins to accelerate
deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from the [Open Model Zoo], along with 100+ open
source and public models in popular formats such as Caffe\*, TensorFlow\*,
MXNet\* and ONNX\*.
## Repository components:
* [Inference Engine](https://software.intel.com/en-us/articles/OpenVINO-InferEngine)
* [Model Optimizer](https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer)
* [Inference Engine]
* [Model Optimizer]
## License
Deep Learning Deployment Toolkit is licensed under [Apache License Version 2.0](LICENSE).
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.
## Documentation
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Inference Engine build instructions](inference-engine/README.md)
* [Get Started with Deep Learning Deployment Toolkit on Linux*](get-started-linux.md)
* [OpenVINO™ Inference Engine Build Instructions](build-instruction.md)
* [Get Started with Deep Learning Deployment Toolkit on Linux](get-started-linux.md)\*
* [Introduction to Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
## How to Contribute
We welcome community contributions to the Deep Learning Deployment Toolkit repository. If you have an idea how to improve the product, please share it with us by doing the following steps:
* Make sure you can build the product and run all tests and samples with your patch
* In case of a larger feature, provide relevant unit tests and a sample
* Submit a pull request at https://github.com/opencv/dldt/pulls
We will review your contribution and, if any additional fixes or modifications are necessary, may give some feedback to guide you. When accepted, your pull request will be merged into GitHub* repositories.
Deep Learning Deployment Toolkit is licensed under Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
See [CONTRIBUTING](./CONTRIBUTING.md) for details. Thank you!
## Support
Please report questions, issues and suggestions using:
* [\#openvino](https://stackoverflow.com/search?q=%23openvino) tag on StackOverflow*
* [GitHub* Issues](https://github.com/opencv/dldt/issues)
* The `openvino` [tag on StackOverflow]\*
* [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
* [Forum](https://software.intel.com/en-us/forums/computer-vision)
---
\* Other names and brands may be claimed as the property of others.
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/opencv/open_model_zoo
[Inference Engine]:https://software.intel.com/en-us/articles/OpenVINO-InferEngine
[Model Optimizer]:https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
[tag on StackOverflow]:https://stackoverflow.com/search?q=%23openvino

333
azure-pipelines.yml Normal file

@@ -0,0 +1,333 @@
jobs:
- job: Lin
# About 150% of total time
timeoutInMinutes: 75
pool:
#vmImage: 'ubuntu-18.04'
name: LIN_VMSS_VENV_F8S_WU2
variables:
BUILD_TYPE: Release
BIN_DIR: ../bin/intel64/$(BUILD_TYPE)
steps:
- script: |
whoami
uname -a
which python3
gcc --version
lsb_release
env
cat /proc/cpuinfo
cat /proc/meminfo
vmstat -s
df
displayName: 'System properties'
- script: |
sudo apt --assume-yes install libusb-1.0-0-dev
python3 -m pip install -r ./inference-engine/ie_bridges/python/requirements.txt
# For running Python API tests
python3 -m pip install -r ./inference-engine/ie_bridges/python/src/requirements-dev.txt
displayName: 'Install dependencies'
- script: |
wget https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
displayName: 'Install Ninja'
- script: git submodule update --init --recursive --jobs 8
displayName: 'Clone submodules'
- script: |
mkdir dldt-build
cd dldt-build
displayName: 'Create build directory'
- task: CMake@1
inputs:
workingDirectory: dldt-build
# CMake must get Python 3.x version by default
cmakeArgs: .. -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE=/usr/bin/python3.6 -DENABLE_TESTS=ON
- script: ninja
workingDirectory: dldt-build
displayName: 'Build Lin'
- script: ls -alR ../bin/
workingDirectory: dldt-build
displayName: 'List files'
- script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*
workingDirectory: dldt-build
displayName: 'nGraph UT'
continueOnError: false
- script: $(BIN_DIR)/InferenceEngineUnitTests
workingDirectory: dldt-build
displayName: 'IE UT old'
continueOnError: false
- script: $(BIN_DIR)/ieUnitTests
workingDirectory: dldt-build
displayName: 'IE UT'
continueOnError: false
- script: $(BIN_DIR)/cpuUnitTests
workingDirectory: dldt-build
displayName: 'CPU UT'
continueOnError: false
- script: $(BIN_DIR)/gnaUnitTests
workingDirectory: dldt-build
displayName: 'GNA UT'
continueOnError: false
- script: $(BIN_DIR)/vpuUnitTests
workingDirectory: dldt-build
displayName: 'VPU UT'
continueOnError: false
- script: $(BIN_DIR)/ieFuncTests
workingDirectory: dldt-build
displayName: 'IE FuncTests'
continueOnError: false
- script: $(BIN_DIR)/cpuFuncTests
workingDirectory: dldt-build
displayName: 'CPU FuncTests'
continueOnError: false
- script: $(BIN_DIR)/MklDnnBehaviorTests
workingDirectory: dldt-build
displayName: 'MklDnnBehaviorTests'
continueOnError: false
- script: git clone https://github.com/openvinotoolkit/testdata.git
displayName: 'Clone testdata'
- script: |
export DATA_PATH=`pwd`/../testdata
export MODELS_PATH=`pwd`/../testdata
$(BIN_DIR)/MklDnnFunctionalTests --gtest_filter=*smoke*:-smoke_MobileNet/ModelTransformationsTest.LPT/mobilenet_v2_tf_depthwise_batch1_inPluginDisabled_inTestDisabled_asymmetric*
workingDirectory: dldt-build
displayName: 'MklDnnFunctionalTests'
continueOnError: false
- script: |
export DATA_PATH=`pwd`/../testdata
export MODELS_PATH=`pwd`/../testdata
$(BIN_DIR)/InferenceEngineCAPITests
workingDirectory: dldt-build
displayName: 'IE CAPITests'
continueOnError: false
- script: |
export DATA_PATH=`pwd`/../testdata
export MODELS_PATH=`pwd`/../testdata
export LD_LIBRARY_PATH=`pwd`/$(BIN_DIR)/lib
export PYTHONPATH=`pwd`/$(BIN_DIR)/lib/python_api/python3.6
env
cd ../inference-engine/ie_bridges/python/tests
pytest
workingDirectory: dldt-build
displayName: 'Python API Tests'
continueOnError: false
enabled: false
- job: Mac
# About 200% of total time (performance of Mac hosts is unstable)
timeoutInMinutes: 180
pool:
vmImage: 'macOS-10.15'
variables:
BUILD_TYPE: Release
BIN_DIR: ../bin/intel64/$(BUILD_TYPE)
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
- script: |
whoami
uname -a
which python3
gcc --version
xcrun --sdk macosx --show-sdk-version
env
sysctl -a
displayName: 'System properties'
- script: |
brew install cython
brew install automake
displayName: 'Install dependencies'
- script: brew install ninja
displayName: 'Install Ninja'
- script: git submodule update --init --recursive --jobs 8
displayName: 'Clone submodules'
- script: |
mkdir dldt-build
cd dldt-build
displayName: 'Create build directory'
- script: |
export PATH="/usr/local/opt/cython/bin:$PATH"
export CC=gcc
export CXX=g++
# Disable errors with Ninja
export CXXFLAGS="-Wno-error=unused-command-line-argument"
export CFLAGS="-Wno-error=unused-command-line-argument"
cmake .. -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=ON
workingDirectory: dldt-build
displayName: 'CMake'
- script: ninja
workingDirectory: dldt-build
displayName: 'Build Mac'
- script: ls -alR ../bin/
workingDirectory: dldt-build
displayName: 'List files'
- script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*:IE_CPU.onnx_model_sigmoid
workingDirectory: dldt-build
displayName: 'nGraph UT'
continueOnError: false
- script: $(BIN_DIR)/InferenceEngineUnitTests
workingDirectory: dldt-build
displayName: 'IE UT old'
continueOnError: false
- script: $(BIN_DIR)/ieUnitTests
workingDirectory: dldt-build
displayName: 'IE UT'
continueOnError: false
- script: $(BIN_DIR)/cpuUnitTests
workingDirectory: dldt-build
displayName: 'CPU UT'
continueOnError: false
- script: $(BIN_DIR)/vpuUnitTests
workingDirectory: dldt-build
displayName: 'VPU UT'
continueOnError: false
- script: $(BIN_DIR)/ieFuncTests
workingDirectory: dldt-build
displayName: 'IE FuncTests'
continueOnError: false
- script: $(BIN_DIR)/cpuFuncTests
workingDirectory: dldt-build
displayName: 'CPU FuncTests'
continueOnError: false
- script: $(BIN_DIR)/MklDnnBehaviorTests
workingDirectory: dldt-build
displayName: 'MklDnnBehaviorTests'
continueOnError: false
- script: git clone https://github.com/openvinotoolkit/testdata.git
displayName: 'Clone testdata'
- script: |
export DATA_PATH=`pwd`/../testdata
export MODELS_PATH=`pwd`/../testdata
$(BIN_DIR)/MklDnnFunctionalTests --gtest_filter=*smoke*:-smoke_MobileNet/ModelTransformationsTest.LPT/mobilenet_v2_tf_depthwise_batch1_inPluginDisabled_inTestDisabled_asymmetric*
workingDirectory: dldt-build
displayName: 'MklDnnFunctionalTests'
continueOnError: false
- script: |
export DATA_PATH=`pwd`/../testdata
export MODELS_PATH=`pwd`/../testdata
$(BIN_DIR)/InferenceEngineCAPITests
workingDirectory: dldt-build
displayName: 'IE CAPITests'
continueOnError: false
- job: Win
# About 150% of total time
timeoutInMinutes: 120
pool:
#vmImage: 'vs2017-win2016'
name: WIN_VMSS_VENV_F8S_WU2
variables:
BUILD_TYPE: Release
BUILD_DIR: D:\dldt-build
BIN_DIR: ..\bin\intel64
MSVS_VARS_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars64.bat
MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe
steps:
- script: |
where python3
wmic computersystem get TotalPhysicalMemory
wmic cpu list
wmic logicaldisk get description,name
wmic VOLUME list
set
displayName: 'System properties'
- script: |
certutil -urlcache -split -f https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-win.zip ninja-win.zip
powershell -command "Expand-Archive -Force ninja-win.zip"
displayName: Install Ninja
- script: git submodule update --init --recursive --jobs 8
displayName: 'Clone submodules'
- script: |
rd /Q /S $(BUILD_DIR)
mkdir $(BUILD_DIR)\bin
rd /Q /S dldt-build
mkdir dldt-build
displayName: 'Create build directory'
- script: |
set PATH=$(Build.Repository.LocalPath)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && cmake -GNinja -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(Build.Repository.LocalPath)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
- script: |
set PATH=$(Build.Repository.LocalPath)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build Win'
- script: dir ..\bin\ /s /b
workingDirectory: dldt-build
displayName: 'List files'
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*
workingDirectory: dldt-build
displayName: 'nGraph UT'
continueOnError: false
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\InferenceEngineUnitTests
workingDirectory: dldt-build
displayName: 'IE UT old'
continueOnError: false
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\ieUnitTests
workingDirectory: dldt-build
displayName: 'IE UT'
continueOnError: false
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\cpuUnitTests
workingDirectory: dldt-build
displayName: 'CPU UT'
continueOnError: false
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\gnaUnitTests
workingDirectory: dldt-build
displayName: 'GNA UT'
continueOnError: false
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\vpuUnitTests
workingDirectory: dldt-build
displayName: 'VPU UT'
continueOnError: false
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\ieFuncTests
workingDirectory: dldt-build
displayName: 'IE FuncTests'
continueOnError: false
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\cpuFuncTests
workingDirectory: dldt-build
displayName: 'CPU FuncTests'
continueOnError: false
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\MklDnnBehaviorTests
workingDirectory: dldt-build
displayName: 'MklDnnBehaviorTests'
continueOnError: false
- script: git clone https://github.com/openvinotoolkit/testdata.git
workingDirectory: $(BUILD_DIR)
displayName: 'Clone testdata'
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;$(Build.Repository.LocalPath)\inference-engine\temp\opencv_4.3.0\opencv\bin;%PATH%
set DATA_PATH=$(BUILD_DIR)\testdata
set MODELS_PATH=$(BUILD_DIR)\testdata
$(BIN_DIR)\MklDnnFunctionalTests --gtest_filter=*smoke*:-smoke_MobileNet/ModelTransformationsTest.LPT/mobilenet_v2_tf_depthwise_batch1_inPluginDisabled_inTestDisabled_asymmetric*
workingDirectory: dldt-build
displayName: 'MklDnnFunctionalTests'
continueOnError: false
- script: |
set PATH=$(Build.Repository.LocalPath)\inference-engine\temp\tbb\bin;$(Build.Repository.LocalPath)\inference-engine\temp\opencv_4.3.0\opencv\bin;%PATH%
set DATA_PATH=$(BUILD_DIR)\testdata
set MODELS_PATH=$(BUILD_DIR)\testdata
$(BIN_DIR)\InferenceEngineCAPITests
workingDirectory: dldt-build
displayName: 'IE CAPITests'
continueOnError: false

704
build-instruction.md Normal file

@@ -0,0 +1,704 @@
# Build OpenVINO™ Inference Engine
## Contents
- [Introduction](#introduction)
- [Build on Linux\* Systems](#build-on-linux-systems)
- [Software Requirements](#software-requirements)
- [Build Steps](#build-steps)
- [Additional Build Options](#additional-build-options)
- [Build for Raspbian* Stretch OS](#build-for-raspbian-stretch-os)
- [Hardware Requirements](#hardware-requirements)
- [Native Compilation](#native-compilation)
- [Cross Compilation Using Docker\*](#cross-compilation-using-docker)
- [Additional Build Options](#additional-build-options-1)
- [Build on Windows* Systems](#build-on-windows-systems)
- [Software Requirements](#software-requirements-1)
- [Build Steps](#build-steps-1)
- [Additional Build Options](#additional-build-options-2)
- [Building Inference Engine with Ninja* Build System](#building-inference-engine-with-ninja-build-system)
- [Build on macOS\* Systems](#build-on-macos-systems)
- [Software Requirements](#software-requirements-2)
- [Build Steps](#build-steps-2)
- [Additional Build Options](#additional-build-options-3)
- [Build on Android\* Systems](#build-on-android-systems)
- [Software Requirements](#software-requirements-3)
- [Build Steps](#build-steps-3)
- [Use Custom OpenCV Builds for Inference Engine](#use-custom-opencv-builds-for-inference-engine)
- [Add Inference Engine to Your Project](#add-inference-engine-to-your-project)
- [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
- [For Linux, Raspbian Stretch* OS](#for-linux-raspbian-stretch-os)
- [Next Steps](#next-steps)
- [Additional Resources](#additional-resources)
## Introduction
The Inference Engine can infer models in different formats with various input
and output formats.
The open source version of Inference Engine includes the following plugins:
| PLUGIN | DEVICE TYPES |
| ---------------------| -------------|
| CPU plugin | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
| GPU plugin | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| GNA plugin | Intel® Speech Enabling Developer Kit, Amazon Alexa\* Premium Far-Field Developer Kit, Intel® Pentium® Silver processor J5005, Intel® Celeron® processor J4005, Intel® Core™ i3-8121U processor |
| MYRIAD plugin | Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2, Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
| Heterogeneous plugin | Heterogeneous plugin enables computing for inference on one network on several Intel® devices. |
Inference Engine plugin for Intel® FPGA is distributed only in a binary form,
as a part of [Intel® Distribution of OpenVINO™].
## Build on Linux\* Systems
The software was validated on:
- Ubuntu\* 18.04 (64-bit) with default GCC\* 7.5.0
- Ubuntu\* 16.04 (64-bit) with default GCC\* 5.4.0
- CentOS\* 7.4 (64-bit) with default GCC\* 4.8.5
### Software Requirements
- [CMake]\* 3.11 or higher
- GCC\* 4.8 or higher to build the Inference Engine
- Python 3.5 or higher for Inference Engine Python API wrapper
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441].
### Build Steps
1. Clone submodules:
```sh
cd openvino
git submodule update --init --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the
project root folder.
```sh
chmod +x install_dependencies.sh
```
```sh
./install_dependencies.sh
```
3. By default, the build enables the Inference Engine GPU plugin to infer models
on your Intel® Processor Graphics. This requires you to
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]
before running the build. If you don't want to use the GPU plugin, use the
`-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
Intel® Graphics Compute Runtime for OpenCL™ Driver.
4. Create a build folder:
```sh
mkdir build && cd build
```
5. Inference Engine uses a CMake-based build system. In the created `build`
directory, run `cmake` to fetch project dependencies and create Unix
makefiles, then run `make` to build the project:
```sh
cmake -DCMAKE_BUILD_TYPE=Release ..
make --jobs=$(nproc --all)
```
### Additional Build Options
You can use the following additional build options:
- The default build uses an internal JIT GEMM implementation.
- To switch to an OpenBLAS\* implementation, use the `-DGEMM=OPENBLAS` option together with
the `-DBLAS_INCLUDE_DIRS` and `-DBLAS_LIBRARIES` CMake options to specify the paths to the
OpenBLAS headers and library. For example, use the following options on CentOS\*:
`-DGEMM=OPENBLAS -DBLAS_INCLUDE_DIRS=/usr/include/openblas -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0`.
- To switch to the optimized MKL-ML\* GEMM implementation, use `-DGEMM=MKL`
and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked
MKL-ML with the `include` and `lib` folders. MKL-ML\* package can be downloaded
from the Intel® [MKL-DNN repository].
- Threading Building Blocks (TBB) is used by default. To build the Inference
Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by
the CMake-based script. If you want to use the automatically downloaded
packages but you already have installed TBB or OpenCV packages configured in
your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
environment variables before running the `cmake` command, otherwise they
will not be downloaded and the build may fail if incompatible versions were
installed.
- If the CMake-based build script can not find and download the OpenCV package
that is supported on your platform, or if you want to use a custom build of
the OpenCV library, refer to the
[Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
section for details.
- To build the Python API wrapper:
1. Install all additional packages listed in the
`/inference-engine/ie_bridges/python/requirements.txt` file:
```sh
pip install -r requirements.txt
```
2. Use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following
options:
```
-DPYTHON_EXECUTABLE=`which python3.7` \
-DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.7
```
- To switch the CPU and GPU plugins off/on, use the `cmake` options
`-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_JSON_ENABLE=ON` enables nGraph JSON-based serialization.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
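As a minimal sketch, several of the options above can be combined in a single configure line
(the Python paths below are examples for Ubuntu; adjust them for your system):
```sh
cmake -DCMAKE_BUILD_TYPE=Release \
      -DENABLE_MKL_DNN=ON \
      -DENABLE_CLDNN=ON \
      -DENABLE_PYTHON=ON \
      -DPYTHON_EXECUTABLE=`which python3.7` \
      -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
      -DPYTHON_INCLUDE_DIR=/usr/include/python3.7 ..
make --jobs=$(nproc --all)
```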
## Build for Raspbian Stretch* OS
> **NOTE**: Only the MYRIAD plugin is supported.
### Hardware Requirements
* Raspberry Pi\* 2 or 3 with Raspbian\* Stretch OS (32-bit). Check that its CPU supports the ARMv7 instruction set (the `uname -m` command returns `armv7l`).
> **NOTE**: Although the Raspberry Pi\* CPU is ARMv8, the 32-bit OS reports the ARMv7 instruction set. The default `gcc` compiler applies the ARMv6 architecture flag for compatibility with older boards. For more information, run the `gcc -Q --help=target` command and refer to the description of the `-march=` option.
You can compile the Inference Engine for Raspberry Pi\* in one of two ways:
* [Native Compilation](#native-compilation), which is the simplest way, but time-consuming
* [Cross Compilation Using Docker*](#cross-compilation-using-docker), which is the recommended way
### Native Compilation
Native compilation of the Inference Engine is the most straightforward solution. However, it might take at least one hour to complete on Raspberry Pi\* 3.
1. Install dependencies:
```bash
sudo apt-get update
sudo apt-get install -y git cmake libusb-1.0-0-dev
```
2. Go to the cloned `openvino` repository:
```bash
cd openvino
```
3. Initialize submodules:
```bash
git submodule update --init --recursive
```
4. Create a build folder:
```bash
mkdir build && cd build
```
5. Build the Inference Engine:
```bash
cmake -DCMAKE_BUILD_TYPE=Release \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_GNA=OFF .. && make
```
### Cross Compilation Using Docker*
This compilation was tested on the following configuration:
* Host: Ubuntu\* 18.04 (64-bit, Intel® Core™ i7-6700K CPU @ 4.00GHz × 8)
* Target: Raspbian\* Stretch (32-bit, ARMv7, Raspberry Pi\* 3)
1. Install Docker\*:
```bash
sudo apt-get install -y docker.io
```
2. Add the current user to the `docker` group:
```bash
sudo usermod -a -G docker $USER
```
Log out and log in for this to take effect.
3. Create a directory named `ie_cross_armhf` and add a text file named `Dockerfile`
with the following content:
```docker
FROM debian:stretch
USER root
RUN dpkg --add-architecture armhf && \
apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
crossbuild-essential-armhf \
git \
wget \
libusb-1.0-0-dev:armhf \
libgtk-3-dev:armhf \
libavcodec-dev:armhf \
libavformat-dev:armhf \
libswscale-dev:armhf \
libgstreamer1.0-dev:armhf \
libgstreamer-plugins-base1.0-dev:armhf \
libpython3-dev:armhf \
python3-pip
RUN wget https://www.cmake.org/files/v3.14/cmake-3.14.3.tar.gz && \
tar xf cmake-3.14.3.tar.gz && \
(cd cmake-3.14.3 && ./bootstrap --parallel=$(nproc --all) && make --jobs=$(nproc --all) && make install) && \
rm -rf cmake-3.14.3 cmake-3.14.3.tar.gz
```
It uses the Debian\* Stretch (Debian 9) OS for compilation because it is the base of Raspbian\* Stretch.
4. Build a Docker\* image:
```bash
docker image build -t ie_cross_armhf ie_cross_armhf
```
5. Run the Docker\* container with the source code folder mounted from the host:
```bash
docker run -it -v /absolute/path/to/openvino:/openvino ie_cross_armhf /bin/bash
```
6. While in the container:
1. Go to the cloned `openvino` repository:
```bash
cd openvino
```
2. Create a build folder:
```bash
mkdir build && cd build
```
3. Build the Inference Engine:
```bash
cmake -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_TOOLCHAIN_FILE="../cmake/arm.toolchain.cmake" \
-DTHREADS_PTHREAD_ARG="-pthread" \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_GNA=OFF .. && make --jobs=$(nproc --all)
```
7. Press **Ctrl+D** to exit from Docker. You can find the resulting binaries
in the `openvino/bin/armv7l/` directory and the OpenCV*
installation in `openvino/inference-engine/temp`.
>**NOTE**: Native applications that link to the cross-compiled Inference Engine
library require the extra compilation flag `-march=armv7-a`.
### Additional Build Options
You can use the following additional build options:
- Required versions of OpenCV packages are downloaded automatically by the
CMake-based script. If you want to use the automatically downloaded packages
but you already have installed OpenCV packages configured in your environment,
you may need to clean the `OpenCV_DIR` environment variable before running
the `cmake` command; otherwise they won't be downloaded and the build may
fail if incompatible versions were installed.
- If the CMake-based build script cannot find and download the OpenCV package
that is supported on your platform, or if you want to use a custom build of
the OpenCV library, see: [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
for details.
- To build the Python API wrapper, install the `libpython3-dev:armhf` and `python3-pip`
packages using `apt-get`, then install the `numpy` and `cython` Python modules
via `pip3`, and add the following options:
```sh
-DENABLE_PYTHON=ON \
-DPYTHON_EXECUTABLE=/usr/bin/python3.5 \
-DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.5m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.5
```
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_JSON_ENABLE=ON` enables nGraph JSON-based serialization.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build on Windows* Systems
The software was validated on:
- Microsoft\* Windows\* 10 (64-bit) with Visual Studio 2017 and Intel® C++
Compiler 2018 Update 3
### Software Requirements
- [CMake]\* 3.11 or higher
- Microsoft\* Visual Studio 2017, 2019 or [Intel® C++ Compiler] 18.0
- (Optional) Intel® Graphics Driver for Windows* (26.20) [driver package].
- Python 3.5 or higher for Inference Engine Python API wrapper
### Build Steps
1. Clone submodules:
```sh
git submodule update --init --recursive
```
2. By default, the build enables the Inference Engine GPU plugin to infer models
on your Intel® Processor Graphics. This requires you to download and install
the Intel® Graphics Driver for Windows\* (26.20) [driver package] before
running the build. If you don't want to use the GPU plugin, use the
`-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
Intel® Graphics Driver.
3. Create a build directory:
```sh
mkdir build
```
4. In the `build` directory, run `cmake` to fetch project dependencies and
generate a Visual Studio solution.
For Microsoft\* Visual Studio 2017:
```sh
cmake -G "Visual Studio 15 2017 Win64" -DCMAKE_BUILD_TYPE=Release ..
```
For Microsoft\* Visual Studio 2019:
```sh
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_BUILD_TYPE=Release ..
```
For Intel® C++ Compiler 18:
```sh
cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^
-DCMAKE_BUILD_TYPE=Release ^
-DICCLIB="C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\compiler\lib" ..
```
5. Build the generated solution in Visual Studio or run
`cmake --build . --config Release` to build from the command line.
6. Before running the samples, add paths to the TBB and OpenCV binaries used for
the build to the `%PATH%` environment variable. By default, TBB binaries are
downloaded by the CMake-based script to the `<openvino_repo>/inference-engine/temp/tbb/bin`
folder, and OpenCV binaries to the `<openvino_repo>/inference-engine/temp/opencv_4.3.0/opencv/bin`
folder.
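For example, if the repository was cloned to `C:\openvino` (a placeholder path), the paths
can be prepended in a Command Prompt session as follows:
```sh
set PATH=C:\openvino\inference-engine\temp\tbb\bin;C:\openvino\inference-engine\temp\opencv_4.3.0\opencv\bin;%PATH%
```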
### Additional Build Options
- Internal JIT GEMM implementation is used by default.
- To switch to the OpenBLAS GEMM implementation, use the `-DGEMM=OPENBLAS` CMake
option and specify the path to OpenBLAS using the `-DBLAS_INCLUDE_DIRS=<OPENBLAS_DIR>\include`
and `-DBLAS_LIBRARIES=<OPENBLAS_DIR>\lib\libopenblas.dll.a` options. Download
a prebuilt OpenBLAS\* package via the [OpenBLAS] link. The mingw64\* runtime
dependencies can be downloaded via the [mingw64\* runtime dependencies] link.
- To switch to the optimized MKL-ML\* GEMM implementation, use the
`-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to
unpacked MKL-ML with the `include` and `lib` folders. MKL-ML\* package can be
downloaded from the Intel&reg; [MKL-DNN repository for Windows].
- Threading Building Blocks (TBB) is used by default. To build the Inference
Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by
the CMake-based script. If you want to use the automatically-downloaded
packages but you already have installed TBB or OpenCV packages configured in
your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
environment variables before running the `cmake` command; otherwise they won't
be downloaded and the build may fail if incompatible versions were installed.
- If the CMake-based build script can not find and download the OpenCV package
that is supported on your platform, or if you want to use a custom build of
the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
section for details.
- To switch off/on the CPU and GPU plugins, use the `cmake` options
`-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To
specify an exact Python version, use the following options:
```sh
-DPYTHON_EXECUTABLE="C:\Program Files\Python37\python.exe" ^
-DPYTHON_LIBRARY="C:\Program Files\Python37\libs\python37.lib" ^
-DPYTHON_INCLUDE_DIR="C:\Program Files\Python37\include"
```
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_JSON_ENABLE=ON` enables nGraph JSON-based serialization.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
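As a sketch, the Visual Studio 2019 generator and the Python options above can be combined into
one configure-and-build sequence (the Python paths are examples; adjust them for your installation):
```sh
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_BUILD_TYPE=Release ^
      -DENABLE_PYTHON=ON ^
      -DPYTHON_EXECUTABLE="C:\Program Files\Python37\python.exe" ^
      -DPYTHON_LIBRARY="C:\Program Files\Python37\libs\python37.lib" ^
      -DPYTHON_INCLUDE_DIR="C:\Program Files\Python37\include" ..
cmake --build . --config Release
```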
### Building Inference Engine with Ninja* Build System
The following example configures the Intel® C++ Compiler environment and builds the Inference Engine with Ninja\*:
```sh
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
set CXX=icl
set CC=icl
:: clean TBBROOT value set by ipsxe-comp-vars.bat, required TBB package will be downloaded by openvino cmake script
set TBBROOT=
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
```
## Build on macOS* Systems
> **NOTE**: The current version of the OpenVINO™ toolkit for macOS* supports
inference on Intel CPUs only.
The software was validated on:
- macOS\* 10.14, 64-bit
### Software Requirements
- [CMake]\* 3.11 or higher
- Clang\* compiler from Xcode\* 10.1 or higher
- Python\* 3.5 or higher for the Inference Engine Python API wrapper
### Build Steps
1. Clone submodules:
```sh
cd openvino
git submodule update --init --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the
project root folder:
```sh
chmod +x install_dependencies.sh
```
```sh
./install_dependencies.sh
```
3. Create a build folder:
```sh
mkdir build
```
4. Inference Engine uses a CMake-based build system. In the created `build`
directory, run `cmake` to fetch project dependencies and create Unix makefiles,
then run `make` to build the project:
```sh
cmake -DCMAKE_BUILD_TYPE=Release ..
make --jobs=$(sysctl -n hw.ncpu)
```
### Additional Build Options
You can use the following additional build options:
- Internal JIT GEMM implementation is used by default.
- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and
`-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked MKL-ML
with the `include` and `lib` folders. The MKL-ML\* package for Mac can be downloaded
[here](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_mac_2019.0.5.20190502.tgz).
- Threading Building Blocks (TBB) is used by default. To build the Inference
Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by
the CMake-based script. If you want to use the automatically downloaded
packages but you already have installed TBB or OpenCV packages configured in
your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
environment variables before running the `cmake` command, otherwise they won't
be downloaded and the build may fail if incompatible versions were installed.
- If the CMake-based build script can not find and download the OpenCV package
that is supported on your platform, or if you want to use a custom build of
the OpenCV library, refer to the
[Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
section for details.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To
specify an exact Python version, use the following options:
```sh
-DPYTHON_EXECUTABLE=/Library/Frameworks/Python.framework/Versions/3.7/bin/python3.7 \
-DPYTHON_LIBRARY=/Library/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7m.dylib \
-DPYTHON_INCLUDE_DIR=/Library/Frameworks/Python.framework/Versions/3.7/include/python3.7m
```
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_JSON_ENABLE=ON` enables nGraph JSON-based serialization.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build on Android* Systems
This section describes how to build the Inference Engine for Android\* x86 (64-bit) operating systems.
### Software Requirements
- [CMake]\* 3.11 or higher
- Android NDK (this guide has been validated with r20 release)
### Build Steps
1. Download and unpack Android NDK: https://developer.android.com/ndk/downloads. Let's assume that `~/Downloads` is used as a working folder.
```sh
cd ~/Downloads
wget https://dl.google.com/android/repository/android-ndk-r20-linux-x86_64.zip
unzip android-ndk-r20-linux-x86_64.zip
mv android-ndk-r20 android-ndk
```
2. Clone submodules:
```sh
cd openvino
git submodule update --init --recursive
```
3. Create a build folder:
```sh
mkdir build
```
4. Change working directory to `build` and run `cmake` to create makefiles. Then run `make`.
```sh
cd build
cmake .. \
-DCMAKE_TOOLCHAIN_FILE=~/Downloads/android-ndk/build/cmake/android.toolchain.cmake \
-DANDROID_ABI=x86_64 \
-DANDROID_PLATFORM=21 \
-DANDROID_STL=c++_shared \
-DENABLE_OPENCV=OFF
make --jobs=$(nproc --all)
```
* `ANDROID_ABI` specifies the target architecture (`x86_64`)
* `ANDROID_PLATFORM` - Android API version
* `ANDROID_STL` specifies that the shared C++ runtime is used. Copy `~/Downloads/android-ndk/sources/cxx-stl/llvm-libc++/libs/x86_64/libc++_shared.so` from the Android NDK alongside the built binaries
## Use Custom OpenCV Builds for Inference Engine
> **NOTE**: The recommended and tested version of OpenCV is 4.4.0.
Required versions of OpenCV packages are downloaded automatically during the
Inference Engine build. If the build script cannot find and download
an OpenCV package that is supported on your platform, you can use one of the
following options:
* Download the most suitable version from the list of available pre-built
packages at [https://download.01.org/opencv/2020/openvinotoolkit] in the
`<release_version>/inference_engine` directory.
* Use a system-provided OpenCV package (e.g., by running the
`apt install libopencv-dev` command). The following modules must be enabled:
`imgcodecs`, `videoio`, `highgui`.
* Get the OpenCV package using a package manager: pip, conda, conan, and so on. The
package must include the development components (header files and CMake
scripts).
* Build OpenCV from source using the [build instructions](https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html) on the OpenCV site.
After you have the custom OpenCV library built, perform the following preparation steps
before running the Inference Engine build (a sketch of these two steps follows the list):
1. Set the `OpenCV_DIR` environment variable to the directory where the
`OpenCVConfig.cmake` file of your custom OpenCV build is located.
2. Disable automatic downloading of the package by using the `-DENABLE_OPENCV=OFF`
option of the CMake-based build script for the Inference Engine.
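For example, on Linux the two preparation steps above might look like the following sketch
(the path is a placeholder for your own OpenCV build):
```sh
export OpenCV_DIR=/path/to/custom/opencv/build   # directory that contains OpenCVConfig.cmake
cmake -DENABLE_OPENCV=OFF -DCMAKE_BUILD_TYPE=Release ..
```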
## Add Inference Engine to Your Project
For CMake projects, set the `InferenceEngine_DIR` environment variable:
```sh
export InferenceEngine_DIR=/path/to/openvino/build/
```
Then you can find Inference Engine by `find_package`:
```cmake
find_package(InferenceEngine)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```
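A minimal `CMakeLists.txt` for a hypothetical sample application (`my_app` and `main.cpp` are
placeholder names) might then look like this:
```cmake
cmake_minimum_required(VERSION 3.11)
project(my_app)
# InferenceEngine_DIR must point to the OpenVINO build directory, as exported above
find_package(InferenceEngine REQUIRED)
add_executable(${PROJECT_NAME} main.cpp)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```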
## (Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2
> **NOTE**: These steps are only required if you want to perform inference on
Intel® Movidius™ Neural Compute Stick or the Intel® Neural Compute Stick 2 using
the Inference Engine MYRIAD Plugin. See also [Intel® Neural Compute Stick 2 Get Started].
### For Linux, Raspbian\* Stretch OS
1. Add the current Linux user to the `users` group; you will need to log out and
log in for it to take effect:
```sh
sudo usermod -a -G users "$(whoami)"
```
2. To perform inference on Intel® Movidius™ Neural Compute Stick and Intel®
Neural Compute Stick 2, install the USB rules as follows:
```sh
cat <<EOF > 97-myriad-usbboot.rules
SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
EOF
```
```sh
sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
```
```sh
sudo udevadm control --reload-rules
```
```sh
sudo udevadm trigger
```
```sh
sudo ldconfig
```
```sh
rm 97-myriad-usbboot.rules
```
## Next Steps
Congratulations, you have built the Inference Engine. To get started with the
OpenVINO™ toolkit, proceed to the Get Started guides:
* [Get Started with Deep Learning Deployment Toolkit on Linux*](get-started-linux.md)
## Notice
To enable some additional nGraph features and use your custom nGraph library with the OpenVINO™ binary package,
make sure of the following:
- The nGraph library was built from the same version that is used in the Inference Engine.
- The nGraph library and the Inference Engine were built with the same compilers. Otherwise you might face application binary interface (ABI) problems.
To prepare your custom nGraph library for distribution, which includes collecting all headers, copying
binaries, and so on, use the `install` CMake target.
This target collects all dependencies, prepares the nGraph package, and copies it to a separate directory.
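As a sketch, building and packaging a custom nGraph with the `install` target might look like this
(the install prefix is a placeholder):
```sh
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/path/to/ngraph_dist ..
make --jobs=$(nproc --all)
make install
```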
## Additional Resources
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
---
\* Other names and brands may be claimed as the property of others.
[Intel® Distribution of OpenVINO™]:https://software.intel.com/en-us/openvino-toolkit
[CMake]:https://cmake.org/download/
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]:https://github.com/intel/compute-runtime/releases/tag/19.41.14441
[MKL-DNN repository]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz
[MKL-DNN repository for Windows]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip
[OpenBLAS]:https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download
[mingw64\* runtime dependencies]:https://sourceforge.net/projects/openblas/files/v0.2.14/mingw64_dll.zip/download
[https://download.01.org/opencv/2020/openvinotoolkit]:https://download.01.org/opencv/2020/openvinotoolkit
[build instructions]:https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html
[driver package]:https://downloadcenter.intel.com/download/29335/Intel-Graphics-Windows-10-DCH-Drivers
[Intel® Neural Compute Stick 2 Get Started]:https://software.intel.com/en-us/neural-compute-stick/get-started
[Intel® C++ Compiler]:https://software.intel.com/en-us/intel-parallel-studio-xe

73
cmake/arm.toolchain.cmake Normal file

@@ -0,0 +1,73 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR armv7l)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()


@@ -0,0 +1,73 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()
macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()
# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()


@@ -0,0 +1,39 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if (VERBOSE_BUILD)
set(CMAKE_VERBOSE_MAKEFILE ON CACHE BOOL "" FORCE)
endif()
#64 bits platform
if (CMAKE_SIZEOF_VOID_P EQUAL 8)
message(STATUS "Detected 64 bit architecture")
SET(ARCH_64 ON)
else()
message(STATUS "Detected 32 bit architecture")
SET(ARCH_64 OFF)
endif()
if (NOT ENABLE_MKL_DNN)
set(ENABLE_MKL OFF)
endif()
if(ENABLE_AVX512F)
if ((CMAKE_CXX_COMPILER_ID STREQUAL "MSVC") AND (MSVC_VERSION VERSION_LESS 1920))
# 1920 is the MSVC_VERSION of MSVC 2019; AVX512F does not work in MSVC 2017
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
if ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") AND (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 6))
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
if ((CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang") AND (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 10))
# TBD: clarify which AppleClang version supports avx512
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
if ((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") AND (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9))
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
endif()
print_enabled_features()


@@ -0,0 +1,211 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT TARGET ie_coverage_clean)
add_custom_target(ie_coverage_clean)
set_target_properties(ie_coverage_clean PROPERTIES FOLDER coverage)
endif()
if(NOT TARGET ie_coverage_init)
add_custom_target(ie_coverage_init)
set_target_properties(ie_coverage_init PROPERTIES FOLDER coverage)
endif()
if(NOT TARGET ie_coverage)
add_custom_target(ie_coverage)
set_target_properties(ie_coverage PROPERTIES FOLDER coverage)
endif()
set(IE_COVERAGE_REPORTS "${CMAKE_BINARY_DIR}/coverage")
set(IE_COVERAGE_SCRIPT_DIR "${CMAKE_CURRENT_SOURCE_DIR}/cmake/coverage")
include(CMakeParseArguments)
#
# ie_coverage_clean(REPOSITORY <repo> DIRECTORY <dir>)
#
function(ie_coverage_clean)
cmake_parse_arguments(IE_COVERAGE "" "REPOSITORY;DIRECTORY" "" ${ARGN})
add_custom_target(ie_coverage_zerocounters_${IE_COVERAGE_REPOSITORY}
COMMAND lcov --zerocounters --quiet
--directory "${IE_COVERAGE_DIRECTORY}"
COMMENT "Add zero counters for coverage for ${IE_COVERAGE_REPOSITORY}"
VERBATIM)
add_custom_target(ie_coverage_clean_${IE_COVERAGE_REPOSITORY}
COMMAND ${CMAKE_COMMAND}
-D "IE_COVERAGE_REPORTS=${IE_COVERAGE_REPORTS}"
-D "IE_COVERAGE_DIRECTORY=${IE_COVERAGE_DIRECTORY}"
-D "CMAKE_BINARY_DIRECTORY=${CMAKE_BINARY_DIR}"
-D "CMAKE_SOURCE_DIRECTORY=${CMAKE_SOURCE_DIR}"
-P "${IE_COVERAGE_SCRIPT_DIR}/coverage_clean.cmake"
COMMENT "Clean previously created HTML report files for ${IE_COVERAGE_REPOSITORY}"
DEPENDS "${IE_COVERAGE_SCRIPT_DIR}/coverage_clean.cmake"
VERBATIM)
set_target_properties(ie_coverage_zerocounters_${IE_COVERAGE_REPOSITORY}
ie_coverage_clean_${IE_COVERAGE_REPOSITORY}
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_clean ie_coverage_zerocounters_${IE_COVERAGE_REPOSITORY}
ie_coverage_clean_${IE_COVERAGE_REPOSITORY})
endfunction()
#
# ie_coverage_capture(INFO_FILE <info_file>
# BASE_DIRECTORY <base dir>
# DIRECTORY <gcda dir>)
#
function(ie_coverage_capture)
cmake_parse_arguments(IE_COVERAGE "" "INFO_FILE;BASE_DIRECTORY;DIRECTORY" "" ${ARGN})
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}.info")
set(output_base_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}_base.info")
set(output_tests_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}_tests.info")
add_custom_command(OUTPUT ${output_base_file}
COMMAND ${CMAKE_COMMAND} -E make_directory "${IE_COVERAGE_REPORTS}"
COMMAND lcov --no-external --capture --initial --quiet
--directory "${IE_COVERAGE_DIRECTORY}"
--base-directory "${IE_COVERAGE_BASE_DIRECTORY}"
--output-file ${output_base_file}
COMMENT "Capture initial coverage data ${IE_COVERAGE_INFO_FILE}"
VERBATIM)
add_custom_command(OUTPUT ${output_tests_file}
COMMAND ${CMAKE_COMMAND} -E make_directory "${IE_COVERAGE_REPORTS}"
COMMAND lcov --no-external --capture --quiet
--directory "${IE_COVERAGE_DIRECTORY}"
--base-directory "${IE_COVERAGE_BASE_DIRECTORY}"
--output-file ${output_tests_file}
COMMENT "Capture test coverage data ${IE_COVERAGE_INFO_FILE}"
VERBATIM)
add_custom_command(OUTPUT ${output_file}
COMMAND ${CMAKE_COMMAND}
-D "IE_COVERAGE_OUTPUT_FILE=${output_file}"
-D "IE_COVERAGE_INPUT_FILES=${output_base_file};${output_tests_file}"
-P "${IE_COVERAGE_SCRIPT_DIR}/coverage_merge.cmake"
COMMENT "Generate total coverage data ${IE_COVERAGE_INFO_FILE}"
DEPENDS ${output_base_file} ${output_tests_file}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_INFO_FILE}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_INFO_FILE}_info
PROPERTIES FOLDER coverage)
endfunction()
#
# ie_coverage_extract(INPUT <info_file> OUTPUT <output_file> PATTERNS <patterns ...>)
#
function(ie_coverage_extract)
cmake_parse_arguments(IE_COVERAGE "" "INPUT;OUTPUT" "PATTERNS" ${ARGN})
set(input_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INPUT}.info")
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_OUTPUT}.info")
set(commands lcov --quiet)
foreach(pattern IN LISTS IE_COVERAGE_PATTERNS)
list(APPEND commands --extract ${input_file} ${pattern})
endforeach()
list(APPEND commands --output-file ${output_file})
add_custom_command(OUTPUT ${output_file}
COMMAND ${commands}
COMMENT "Generate coverage data ${IE_COVERAGE_OUTPUT}"
DEPENDS ${input_file}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_OUTPUT}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_OUTPUT}_info
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_OUTPUT}_info ie_coverage_${IE_COVERAGE_INPUT}_info)
endfunction()
#
# ie_coverage_remove(INPUT <info_file> OUTPUT <output_file> PATTERNS <patterns ...>)
#
function(ie_coverage_remove)
cmake_parse_arguments(IE_COVERAGE "" "INPUT;OUTPUT" "PATTERNS" ${ARGN})
set(input_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INPUT}.info")
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_OUTPUT}.info")
set(commands lcov --quiet)
foreach(pattern IN LISTS IE_COVERAGE_PATTERNS)
list(APPEND commands --remove ${input_file} ${pattern})
endforeach()
list(APPEND commands --output-file ${output_file})
add_custom_command(OUTPUT ${output_file}
COMMAND ${commands}
COMMENT "Generate coverage data ${IE_COVERAGE_OUTPUT}"
DEPENDS ${input_file}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_OUTPUT}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_OUTPUT}_info
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_OUTPUT}_info ie_coverage_${IE_COVERAGE_INPUT}_info)
endfunction()
#
# ie_coverage_merge(OUTPUT <output file> INPUTS <input files ...>)
#
function(ie_coverage_merge)
cmake_parse_arguments(IE_COVERAGE "" "OUTPUT" "INPUTS" ${ARGN})
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_OUTPUT}.info")
foreach(input_info_file IN LISTS IE_COVERAGE_INPUTS)
set(input_file ${IE_COVERAGE_REPORTS}/${input_info_file}.info)
list(APPEND dependencies ie_coverage_${input_info_file}_info)
list(APPEND input_files ${input_file})
endforeach()
add_custom_command(OUTPUT ${output_file}
COMMAND ${CMAKE_COMMAND}
-D "IE_COVERAGE_OUTPUT_FILE=${output_file}"
-D "IE_COVERAGE_INPUT_FILES=${input_files}"
-P "${IE_COVERAGE_SCRIPT_DIR}/coverage_merge.cmake"
COMMENT "Generate coverage data ${IE_COVERAGE_OUTPUT}"
DEPENDS ${input_files}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_OUTPUT}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_OUTPUT}_info
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_OUTPUT}_info ${dependencies})
endfunction()
#
# ie_coverage_genhtml(INFO_FILE <info_file> PREFIX <prefix>)
#
function(ie_coverage_genhtml)
cmake_parse_arguments(IE_COVERAGE "" "INFO_FILE;PREFIX" "" ${ARGN})
set(input_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}.info")
set(output_directory "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}")
add_custom_command(OUTPUT "${output_directory}/index.html"
COMMAND genhtml ${input_file} --title "${IE_COVERAGE_INFO_FILE}" --legend
--no-branch-coverage --demangle-cpp
--output-directory "${output_directory}"
--num-spaces 4 --quiet
--prefix "${IE_COVERAGE_PREFIX}"
DEPENDS ${input_file}
COMMENT "Generate HTML report for ${IE_COVERAGE_INFO_FILE}"
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml
DEPENDS "${output_directory}/index.html")
set_target_properties(ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml ie_coverage_${IE_COVERAGE_INFO_FILE}_info)
add_dependencies(ie_coverage ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml)
endfunction()


@@ -0,0 +1,30 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT DEFINED IE_COVERAGE_REPORTS)
message(FATAL_ERROR "IE_COVERAGE_REPORTS variable is not defined")
return()
endif()
file(REMOVE_RECURSE "${IE_COVERAGE_REPORTS}")
if(NOT DEFINED IE_COVERAGE_DIRECTORY)
message(FATAL_ERROR "IE_COVERAGE_DIRECTORY variable is not defined")
return()
endif()
# remove .gcno files which are kept from the previous build
file(GLOB_RECURSE gcno_files "${IE_COVERAGE_DIRECTORY}/*.gcno")
foreach(file IN LISTS gcno_files)
string(REPLACE ".gcno" "" temp_file "${file}")
string(REGEX REPLACE "CMakeFiles/.+dir/" "" temp_file "${temp_file}")
string(REPLACE "${CMAKE_BINARY_DIRECTORY}" "${CMAKE_SOURCE_DIRECTORY}" source_file "${temp_file}")
if(NOT EXISTS "${source_file}")
file(REMOVE "${file}")
string(REPLACE "${CMAKE_BINARY_DIRECTORY}/" "" file "${file}")
message("Removing ${file}")
endif()
endforeach()


@@ -0,0 +1,22 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT DEFINED IE_COVERAGE_OUTPUT_FILE)
message(FATAL_ERROR "IE_COVERAGE_OUTPUT_FILE is not defined")
endif()
if(NOT DEFINED IE_COVERAGE_INPUT_FILES)
message(FATAL_ERROR "IE_COVERAGE_INPUT_FILES is not defined")
endif()
set(command lcov --quiet)
foreach(input_info_file IN LISTS IE_COVERAGE_INPUT_FILES)
file(SIZE ${input_info_file} size)
if(NOT size EQUAL 0)
list(APPEND command --add-tracefile "${input_info_file}")
endif()
endforeach()
list(APPEND command --output-file ${IE_COVERAGE_OUTPUT_FILE})
execute_process(COMMAND ${command})


@@ -0,0 +1,105 @@
# Copyright (C) 2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# =================================================================
#
# Generates cpp file with dispatcher for cross compiled function
# Parameters:
# XARCH_API_HEADER -- path to header with function declaration
# XARCH_FUNC_NAME -- name of function to dispatch
# XARCH_NAMESPACES -- full namespace used to keep ODR
# XARCH_DISP_FILE -- dispatcher file name to generate
# XARCH_SET -- set of ARCH supported by dispatcher. space delimited
#
# =================================================================
set(_CPU_CHECK_ANY "true")
set(_CPU_CHECK_SSE42 "with_cpu_x86_sse42()")
set(_CPU_CHECK_AVX "with_cpu_x86_avx()")
set(_CPU_CHECK_AVX2 "with_cpu_x86_avx2()")
set(_CPU_CHECK_AVX512F "with_cpu_x86_avx512f()")
function(_generate_dispatcher)
_find_signature_in_file(${XARCH_API_HEADER} ${XARCH_FUNC_NAME} SIGNATURE)
_generate_call_line_from_signature("${SIGNATURE}" CALL_LINE)
string(REPLACE " " ";" XARCH_SET "${XARCH_SET}")
string(REPLACE "::" ";" XARCH_NAMESPACES "${XARCH_NAMESPACES}")
list(GET XARCH_NAMESPACES -1 XARCH_CURRENT_NAMESPACE)
set(PARENT_NAMESPACES ${XARCH_NAMESPACES})
list(REMOVE_AT PARENT_NAMESPACES -1)
set(DISP_CONTENT
"
//
// Auto generated file by CMake macros cross_compiled_file()
// !! do not modify it !!!
//
#include \"${XARCH_API_HEADER}\"
#include \"ie_system_conf.h\"
")
foreach(_namespace ${PARENT_NAMESPACES})
string(APPEND DISP_CONTENT
"namespace ${_namespace} {\n")
endforeach()
foreach(_arch ${XARCH_SET})
string(APPEND DISP_CONTENT
"namespace ${_arch} {\n ${SIGNATURE}\; \n}\n")
endforeach()
string(APPEND DISP_CONTENT
"namespace ${XARCH_CURRENT_NAMESPACE} {\n\n${SIGNATURE} {\n")
foreach(_arch ${XARCH_SET})
string(APPEND DISP_CONTENT
" if (${_CPU_CHECK_${_arch}}) {\n return ${_arch}::${CALL_LINE}\;\n }\n")
endforeach()
string(APPEND DISP_CONTENT "}\n\n}\n")
foreach(_namespace ${PARENT_NAMESPACES})
string(APPEND DISP_CONTENT "} // namespace ${_namespace}\n")
endforeach()
file(WRITE ${XARCH_DISP_FILE} ${DISP_CONTENT})
endfunction()
function(_find_signature_in_file FILE FUNCTION RESULT_NAME)
file(READ "${FILE}" CONTENT)
set(valid_chars "<>:_*& a-zA-Z0-9\n") ## valid chars for type/var specification (including new line /n)
string(REGEX MATCH "[${valid_chars}]*${FUNCTION}[ ]*[(][=,${valid_chars}]*[)]" SIGNATURE ${CONTENT})
string(STRIP "${SIGNATURE}" SIGNATURE)
set (${RESULT_NAME} "${SIGNATURE}" PARENT_SCOPE)
endfunction()
function(_generate_call_line_from_signature SIGNATURE RESULT_NAME)
## extract func name
set(_name ${SIGNATURE})
string(REGEX REPLACE "[ ]*[(].*[)]" "" _name "${_name}") # remove arguments
string(REGEX MATCH "[a-zA-Z0-9_]*[ ]*$" _name "${_name}") # extract func name
set(nt_chars "[:_*& a-zA-Z0-9\n]*") ## any sequence of chars to describe object type (no template)
## extract arg names
set(_args ${SIGNATURE})
string(REGEX MATCH "[(].*[)]" _args "${_args}") # extract args with types, all inside brackets
string(REGEX REPLACE "<${nt_chars},${nt_chars}>" "" _args "${_args}") # remove template brackets with ','
string(REPLACE "(" "" _args ${_args})
string(REPLACE ")" "" _args ${_args})
string(REPLACE "," ";" _args ${_args}) # now it's list
foreach(_arg_elem ${_args})
string(REGEX MATCH "[a-zA-Z0-9_]*[ ]*$" _arg_elem "${_arg_elem}")
list(APPEND _arg_names ${_arg_elem})
endforeach()
string(REPLACE ";" ", " _arg_names "${_arg_names}") # back to comma separated string
set (${RESULT_NAME} "${_name}(${_arg_names})" PARENT_SCOPE)
endfunction()
_generate_dispatcher()


@@ -0,0 +1,16 @@
# Copyright (C) 2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# =================================================================
#
# This file is used to add a dependency on an option value. If the arguments
# change, the configured file is updated and the dependent
# add_custom_command reruns.
#
# Otherwise, changing CMake options would not affect the
# generated file.
#
# =================================================================
@_GEN_ARGS_LIST@


@@ -0,0 +1,227 @@
# Copyright (C) 2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
## list of available instruction sets
set(_ARCH_LIST ANY SSE42 AVX AVX2 AVX512F)
set(_ACCEPTED_ARCHS_ANY "^(ANY)$")
set(_ACCEPTED_ARCHS_SSE42 "^(ANY|SSE42)$")
set(_ACCEPTED_ARCHS_AVX "^(ANY|SSE42|AVX)$")
set(_ACCEPTED_ARCHS_AVX2 "^(ANY|SSE42|AVX|AVX2)$")
set(_ACCEPTED_ARCHS_AVX512F "^(ANY|SSE42|AVX|AVX2|AVX512F)$")
## Arch specific definitions
set(_DEFINE_ANY "")
set(_DEFINE_SSE42 "-DHAVE_SSE42" ${_DEFINE_ANY})
set(_DEFINE_AVX "-DHAVE_AVX" ${_DEFINE_SSE42})
set(_DEFINE_AVX2 "-DHAVE_AVX2" ${_DEFINE_AVX})
set(_DEFINE_AVX512F "-DHAVE_AVX512F" ${_DEFINE_AVX2})
## Arch specific compile options
ie_avx512_optimization_flags(_FLAGS_AVX512F)
ie_avx2_optimization_flags (_FLAGS_AVX2)
ie_sse42_optimization_flags (_FLAGS_SSE42)
set(_FLAGS_AVX "") ## TBD is not defined for IE project yet
set(_FLAGS_ANY "") ##
## way to duplicate file via cmake tool set
if (UNIX)
## Clone sources via symlink because it allows modifying the original file in an IDE while debugging
set(TO_DUPLICATE create_symlink)
else()
## Windows and others - just copy
set(TO_DUPLICATE copy)
endif()
set(DISPATCHER_GEN_SCRIPT ${CMAKE_CURRENT_LIST_DIR}/cross_compiled_disp_gen.cmake)
set(DISPATCHER_GEN_OPTIONS_HOLDER ${CMAKE_CURRENT_LIST_DIR}/cross_compiled_disp_gen_options.in)
#######################################
#
# Allows a source file inside one module to be cross compiled multiple times
# while keeping the minimal instruction set requirement. The CPU check is performed
# at runtime via common utils declared in "ie_system_conf.h".
#
# Usage example:
# cross_compiled_file(<target>
# ARCH
# ANY <source_file>
# SSE SSE42 <source_file>
# AVX AVX2 <source_file>
# AVX512F <source_file>
# API <header_file>
# NAMESPACE <namespace> # like "IE::Ext::CPU::XARCH"
# NAME <function_name> # like "my_fun"
# )
#
function(cross_compiled_file TARGET)
set(oneValueArgs API ## Header with declaration of cross compiled function
NAMESPACE ## The namespace where cross compiled function was declared
NAME) ## String with function signature to make cross compiled
set(multiValueArgs ARCH) ## List of architecture described in _ARCH_LIST
cmake_parse_arguments(X "" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
## verification
if(X_UNPARSED_ARGUMENTS)
message(FATAL_ERROR "Unknown argument: " ${X_UNPARSED_ARGUMENTS})
endif()
if((NOT TARGET) OR (NOT X_NAME) OR (NOT X_NAMESPACE) OR (NOT X_API) OR (NOT X_ARCH))
message(FATAL_ERROR "Missed arguments")
endif()
_currently_requested_top_arch(TOP_ARCH)
set(_CURRENT_ARCH_FILTER "${_ACCEPTED_ARCHS_${TOP_ARCH}}")
## format: ARCH1 ARCH2 <src1> ARCH3 <src2> ...
foreach(_it ${X_ARCH})
if (_it IN_LIST _ARCH_LIST)
## that is arch ID
set(_arch ${_it})
if(_arch MATCHES ${_CURRENT_ARCH_FILTER})
list(APPEND _CUR_ARCH_SET ${_arch})
list(APPEND _FULL_ARCH_SET ${_arch})
endif()
else()
## that is source file name
set(_src_name ${_it})
_remove_source_from_target(${TARGET} ${_src_name})
_clone_source_to_target(${TARGET} ${_src_name} "${_CUR_ARCH_SET}")
set(_CUR_ARCH_SET "")
endif()
endforeach()
_add_dispatcher_to_target(${TARGET} ${X_API} ${X_NAME} "${X_NAMESPACE}" "${_FULL_ARCH_SET}")
endfunction()
##########################################
#
# Add source multiple time per each element in ARCH_SET.
# Also provide corresponding arch specific flags and defines.
#
function(_clone_source_to_target TARGET SOURCE ARCH_SET)
foreach(_arch ${ARCH_SET})
set(_arch_dir cross-compiled/${_arch})
get_filename_component(ARCH_NAME ${SOURCE} NAME)
get_filename_component(ARCH_INCLUDE_DIR ${SOURCE} DIRECTORY)
set(ARCH_SOURCE "${_arch_dir}/${ARCH_NAME}")
add_custom_command(
OUTPUT ${ARCH_SOURCE}
COMMAND ${CMAKE_COMMAND} -E make_directory
${CMAKE_CURRENT_BINARY_DIR}/${_arch_dir}
COMMAND ${CMAKE_COMMAND} -E ${TO_DUPLICATE}
${CMAKE_CURRENT_SOURCE_DIR}/${SOURCE}
${CMAKE_CURRENT_BINARY_DIR}/${ARCH_SOURCE}
DEPENDS ${SOURCE}
)
set(_ARCH_SPECIFIC_FLAGS
${_DEFINE_${_arch}}
${_FLAGS_${_arch}}
"-DXARCH=${_arch}" ## to replace XARCH with direct ARCH name
"-I${CMAKE_CURRENT_SOURCE_DIR}/${ARCH_INCLUDE_DIR}" ## To make valid #include "some.hpp"
)
_add_source_compile_flags(${ARCH_SOURCE} ${_ARCH_SPECIFIC_FLAGS})
list(APPEND _ARCH_SOURCES ${ARCH_SOURCE})
endforeach()
_add_source_to_target(${TARGET} ${_ARCH_SOURCES})
endfunction()
##########################################
#
# Generate dispatcher for provided function
# for archs in ARCH_SET.
#
function(_add_dispatcher_to_target TARGET HEADER FUNC_NAME NAMESPACE ARCH_SET)
get_filename_component(DISPATCHER_NAME ${HEADER} NAME_WE)
get_filename_component(DISPATCHER_INCLUDE_DIR ${HEADER} DIRECTORY)
set(DISPATCHER_SOURCE "cross-compiled/${DISPATCHER_NAME}_disp.cpp")
set(DISPATCHER_OPT_HOLDER "cross-compiled/${DISPATCHER_NAME}_holder.txt")
set(_GEN_ARGS_LIST
-DXARCH_FUNC_NAME="${X_NAME}"
-DXARCH_NAMESPACES="${NAMESPACE}"
-DXARCH_API_HEADER="${CMAKE_CURRENT_SOURCE_DIR}/${HEADER}"
-DXARCH_DISP_FILE="${CMAKE_CURRENT_BINARY_DIR}/${DISPATCHER_SOURCE}"
-DXARCH_SET="${ARCH_SET}"
)
configure_file(${DISPATCHER_GEN_OPTIONS_HOLDER} ${DISPATCHER_OPT_HOLDER})
add_custom_command(
OUTPUT ${DISPATCHER_SOURCE}
COMMAND ${CMAKE_COMMAND} ${_GEN_ARGS_LIST}
-P ${DISPATCHER_GEN_SCRIPT}
DEPENDS ${HEADER}
${DISPATCHER_GEN_SCRIPT}
${CMAKE_CURRENT_BINARY_DIR}/${DISPATCHER_OPT_HOLDER} ## Just to make run dependency on args value
)
_add_source_compile_flags(${DISPATCHER_SOURCE} "-I${DISPATCHER_INCLUDE_DIR}")
_add_source_to_target(${TARGET} ${DISPATCHER_SOURCE})
endfunction()
#######################################
#
# Return currently requested ARCH id
#
function(_currently_requested_top_arch VAR)
if(ENABLE_AVX512F)
set(RES AVX512F)
elseif(ENABLE_AVX2)
set(RES AVX2)
elseif(ENABLE_SSE42)
set(RES SSE42)
else()
set(RES ANY)
endif()
set (${VAR} "${RES}" PARENT_SCOPE)
endfunction()
#####################################
#
# Utils to handle with cmake target
#
function(_remove_source_from_target TARGET SOURCE_FILE)
get_target_property(ORIGINAL_SOURCES ${TARGET} SOURCES)
## To match by file name only. The path is any.
list(FILTER ORIGINAL_SOURCES EXCLUDE REGEX ".*${SOURCE_FILE}$")
set_target_properties(${TARGET}
PROPERTIES
SOURCES "${ORIGINAL_SOURCES}")
endfunction()
function(_add_source_to_target TARGET)
get_target_property(ORIGINAL_SOURCES ${TARGET} SOURCES)
list(APPEND ORIGINAL_SOURCES ${ARGN})
set_target_properties(${TARGET}
PROPERTIES
SOURCES "${ORIGINAL_SOURCES}")
endfunction()
function(_add_source_compile_flags SOURCE)
get_source_file_property(ORIGINAL_FLAGS ${SOURCE} COMPILE_FLAGS)
## Empty list of COMPILE_FLAGS represented as NOTFOUND
if(NOT ORIGINAL_FLAGS)
set(ORIGINAL_FLAGS "")
endif()
string(REPLACE ";" " " NEW_FLAGS "${ARGN}")
string(APPEND ORIGINAL_FLAGS " " ${NEW_FLAGS})
set_source_files_properties(${SOURCE}
PROPERTIES
COMPILE_FLAGS "${ORIGINAL_FLAGS}")
endfunction()


@@ -1,4 +1,4 @@
# Copyright (C) 2018-2019 Intel Corporation
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -26,11 +26,11 @@ function (log_rpath_remove_top component component_remove_top lib lib_remove_top
# debug_message(STATUS "LIB-IN=${lib} ")
# debug_message(STATUS "TOPLIB-IN=${top_lib_dir} ")
get_filename_component(top_lib_dir ${${component}} DIRECTORY)
get_filename_component(top_lib_dir "${${component}}" DIRECTORY)
if (${component_remove_top} AND ${component})
else()
get_filename_component(add_name ${${component}} NAME)
get_filename_component(add_name "${${component}}" NAME)
set(top_lib_dir "${top_lib_dir}/${add_name}")
endif()
if (${lib_remove_top} AND lib)
@@ -70,4 +70,4 @@ endfunction()
# This macro is redefined (with additional checks) within the InferenceEngineConfig.cmake file.
macro(ext_message TRACE_LEVEL)
message(${TRACE_LEVEL} "${ARGN}")
endmacro()
endmacro()

37
cmake/dependencies.cmake Normal file

@@ -0,0 +1,37 @@
# Copyright (C) 2018 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set_temp_directory(TEMP "${IE_MAIN_SOURCE_DIR}")
include(dependency_solver)
if(CMAKE_CROSSCOMPILING)
if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(HOST_X86_64 ON)
endif()
set(protoc_version "3.7.1")
if(CMAKE_HOST_SYSTEM_NAME MATCHES Linux)
RESOLVE_DEPENDENCY(SYSTEM_PROTOC_ROOT
ARCHIVE_LIN "protoc-${protoc_version}-linux-x86_64.tar.gz"
TARGET_PATH "${TEMP}/protoc-${protoc_version}-linux-x86_64")
debug_message(STATUS "host protoc-${protoc_version} root path = " ${SYSTEM_PROTOC_ROOT})
else()
message(FATAL_ERROR "Unsupported host system (${CMAKE_HOST_SYSTEM_NAME}) and arch (${CMAKE_HOST_SYSTEM_PROCESSOR}) for cross-compilation")
endif()
reset_deps_cache(SYSTEM_PROTOC)
message("${SYSTEM_PROTOC_ROOT}/bin")
find_program(
SYSTEM_PROTOC
NAMES protoc
PATHS "${SYSTEM_PROTOC_ROOT}/bin"
NO_DEFAULT_PATH)
if(NOT SYSTEM_PROTOC)
message(FATAL_ERROR "[ONNX IMPORTER] Missing host protoc binary")
endif()
update_deps_cache(SYSTEM_PROTOC "${SYSTEM_PROTOC}" "Path to host protoc for ONNX Importer")
endif()


@@ -0,0 +1,226 @@
# Copyright (C) 2018 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
list(APPEND CMAKE_MODULE_PATH
"${OpenVINO_MAIN_SOURCE_DIR}/cmake/download"
"${OpenVINO_MAIN_SOURCE_DIR}/cmake/cross_compile"
)
include(CPackComponent)
unset(IE_CPACK_COMPONENTS_ALL CACHE)
set(IE_CPACK_IE_DIR deployment_tools/inference_engine)
# Search packages for the host system instead of packages for the target system
# in case of cross compilation these macros should be defined by the toolchain file
if(NOT COMMAND find_host_package)
macro(find_host_package)
find_package(${ARGN})
endmacro()
endif()
if(NOT COMMAND find_host_program)
macro(find_host_program)
find_program(${ARGN})
endmacro()
endif()
#
# ie_cpack_set_library_dir()
#
# Set library directory for cpack
#
function(ie_cpack_set_library_dir)
string(TOLOWER ${CMAKE_SYSTEM_PROCESSOR} ARCH)
if(ARCH STREQUAL "x86_64" OR ARCH STREQUAL "amd64") # Windows detects Intel's 64-bit CPU as AMD64
set(ARCH intel64)
elseif(ARCH STREQUAL "i386")
set(ARCH ia32)
endif()
if(WIN32)
set(IE_CPACK_LIBRARY_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH}/${CMAKE_BUILD_TYPE} PARENT_SCOPE)
set(IE_CPACK_RUNTIME_PATH ${IE_CPACK_IE_DIR}/bin/${ARCH}/${CMAKE_BUILD_TYPE} PARENT_SCOPE)
set(IE_CPACK_ARCHIVE_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH}/${CMAKE_BUILD_TYPE} PARENT_SCOPE)
else()
set(IE_CPACK_LIBRARY_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH} PARENT_SCOPE)
set(IE_CPACK_RUNTIME_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH} PARENT_SCOPE)
set(IE_CPACK_ARCHIVE_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH} PARENT_SCOPE)
endif()
endfunction()
ie_cpack_set_library_dir()
#
# ie_cpack_add_component(NAME ...)
#
# Wraps original `cpack_add_component` and adds component to internal IE list
#
macro(ie_cpack_add_component NAME)
list(APPEND IE_CPACK_COMPONENTS_ALL ${NAME})
set(IE_CPACK_COMPONENTS_ALL "${IE_CPACK_COMPONENTS_ALL}" CACHE STRING "" FORCE)
cpack_add_component(${NAME} ${ARGN})
endmacro()
macro(ie_cpack)
set(CPACK_GENERATOR "TGZ")
string(REPLACE "/" "_" CPACK_PACKAGE_VERSION "${CI_BUILD_NUMBER}")
if(WIN32)
set(CPACK_PACKAGE_NAME inference-engine_${CMAKE_BUILD_TYPE})
else()
set(CPACK_PACKAGE_NAME inference-engine)
endif()
set(CPACK_INCLUDE_TOPLEVEL_DIRECTORY OFF)
set(CPACK_ARCHIVE_COMPONENT_INSTALL ON)
set(CPACK_PACKAGE_VENDOR "Intel")
set(CPACK_COMPONENTS_ALL ${ARGN})
set(CPACK_STRIP_FILES ON)
if(OS_FOLDER)
set(CPACK_SYSTEM_NAME "${OS_FOLDER}")
endif()
include(CPack)
endmacro()
# prepare temporary folder
function(set_temp_directory temp_variable source_tree_dir)
if (DEFINED ENV{DL_SDK_TEMP} AND NOT $ENV{DL_SDK_TEMP} STREQUAL "")
message(STATUS "DL_SDK_TEMP environment is set : $ENV{DL_SDK_TEMP}")
if (WIN32)
string(REPLACE "\\" "\\\\" temp $ENV{DL_SDK_TEMP})
else()
set(temp $ENV{DL_SDK_TEMP})
endif()
if (ENABLE_ALTERNATIVE_TEMP)
set(ALTERNATIVE_PATH ${source_tree_dir}/temp)
endif()
else ()
set(temp ${source_tree_dir}/temp)
endif()
set("${temp_variable}" "${temp}" CACHE PATH "Path to temp directory")
if(ALTERNATIVE_PATH)
set(ALTERNATIVE_PATH "${ALTERNATIVE_PATH}" PARENT_SCOPE)
endif()
endfunction()
include(coverage/coverage)
# External dependencies
find_package(Threads)
# Detect target
include(target_flags)
# printing debug messages
include(debug)
# linking libraries without discarding symbols
include(whole_archive)
string(TOLOWER ${CMAKE_SYSTEM_PROCESSOR} ARCH_FOLDER)
if(ARCH_FOLDER STREQUAL "x86_64" OR ARCH_FOLDER STREQUAL "amd64") # Windows detects Intel's 64-bit CPU as AMD64
set(ARCH_FOLDER intel64)
elseif(ARCH_FOLDER STREQUAL "i386")
set(ARCH_FOLDER ia32)
endif()
if(OS_FOLDER)
message ("**** OS FOLDER IS: [${OS_FOLDER}]")
if("${OS_FOLDER}" STREQUAL "ON")
message ("**** USING OS FOLDER: [${CMAKE_SYSTEM_NAME}]")
set(BIN_FOLDER "bin/${CMAKE_SYSTEM_NAME}/${ARCH_FOLDER}")
else()
set(BIN_FOLDER "bin/${OS_FOLDER}/${ARCH_FOLDER}")
endif()
else()
set(BIN_FOLDER "bin/${ARCH_FOLDER}")
endif()
if("${CMAKE_BUILD_TYPE}" STREQUAL "")
debug_message(STATUS "CMAKE_BUILD_TYPE not defined, 'Release' will be used")
set(CMAKE_BUILD_TYPE "Release")
endif()
# allow to override default OUTPUT_ROOT root
if(NOT DEFINED OUTPUT_ROOT)
set(OUTPUT_ROOT ${OpenVINO_MAIN_SOURCE_DIR})
endif()
# Enable postfixes for Debug/Release builds
set(IE_DEBUG_POSTFIX_WIN "d")
set(IE_RELEASE_POSTFIX_WIN "")
set(IE_DEBUG_POSTFIX_LIN "")
set(IE_RELEASE_POSTFIX_LIN "")
set(IE_DEBUG_POSTFIX_MAC "d")
set(IE_RELEASE_POSTFIX_MAC "")
if(WIN32)
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_WIN})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_WIN})
elseif(APPLE)
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_MAC})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_MAC})
else()
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_LIN})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_LIN})
endif()
set(CMAKE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX})
set(CMAKE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX})
if (WIN32 OR CMAKE_GENERATOR STREQUAL "Xcode")
# Support CMake multiconfiguration for Visual Studio or Xcode build
set(IE_BUILD_POSTFIX $<$<CONFIG:Debug>:${IE_DEBUG_POSTFIX}>$<$<CONFIG:Release>:${IE_RELEASE_POSTFIX}>)
else ()
if (${CMAKE_BUILD_TYPE} STREQUAL "Debug" )
set(IE_BUILD_POSTFIX ${IE_DEBUG_POSTFIX})
else()
set(IE_BUILD_POSTFIX ${IE_RELEASE_POSTFIX})
endif()
endif()
message(STATUS "CMAKE_BUILD_TYPE: ${CMAKE_BUILD_TYPE}")
add_definitions(-DIE_BUILD_POSTFIX=\"${IE_BUILD_POSTFIX}\")
if(NOT UNIX)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
else()
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE}/lib)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE}/lib)
set(CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE})
set(CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE})
endif()
if(APPLE)
# WA for Xcode generator + object libraries issue:
# https://gitlab.kitware.com/cmake/cmake/issues/20260
# http://cmake.3232098.n2.nabble.com/XCODE-DEPEND-HELPER-make-Deletes-Targets-Before-and-While-They-re-Built-td7598277.html
set(CMAKE_XCODE_GENERATE_TOP_LEVEL_PROJECT_ONLY ON)
set(CMAKE_MACOSX_RPATH ON)
endif()
# Use solution folders
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
set(CMAKE_POLICY_DEFAULT_CMP0054 NEW)
include(sdl)
include(os_flags)
include(sanitizer)
include(cross_compiled_func)
function(set_ci_build_number)
set(OpenVINO_MAIN_SOURCE_DIR "${CMAKE_SOURCE_DIR}")
include(version)
set(CI_BUILD_NUMBER "${CI_BUILD_NUMBER}" PARENT_SCOPE)
endfunction()
set_ci_build_number()

View File

@@ -0,0 +1,195 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include ("download")
function (resolve_archive_dependency VAR COMPONENT ARCHIVE ARCHIVE_UNIFIED ARCHIVE_WIN ARCHIVE_LIN ARCHIVE_MAC ARCHIVE_ANDROID TARGET_PATH FOLDER ENVIRONMENT)
if (ENVIRONMENT AND (DEFINED ${ENVIRONMENT} OR DEFINED ENV{${ENVIRONMENT}}))
set(HAS_ENV "TRUE")
endif()
if (NOT DEFINED HAS_ENV)
if (ARCHIVE)
#TODO: check whether this is a platform-specific binary with the same name per platform or whether it is in a common folder
DownloadAndExtract(${COMPONENT} ${ARCHIVE} ${TARGET_PATH} result_path ${FOLDER})
else()
DownloadAndExtractPlatformSpecific(${COMPONENT} ${ARCHIVE_UNIFIED} ${ARCHIVE_WIN} ${ARCHIVE_LIN} ${ARCHIVE_MAC} ${ARCHIVE_ANDROID} ${TARGET_PATH} result_path ${FOLDER})
endif()
set (${VAR} ${result_path} PARENT_SCOPE)
else()
if (DEFINED ${ENVIRONMENT})
set (${VAR} ${${ENVIRONMENT}} PARENT_SCOPE)
else ()
set (${VAR} $ENV{${ENVIRONMENT}} PARENT_SCOPE)
endif ()
endif()
endfunction(resolve_archive_dependency)
function(resolve_pull_request GITHUB_PULL_REQUEST TARGET_PATH)
get_filename_component(FILE_NAME ${GITHUB_PULL_REQUEST} NAME)
set (PATCH_URL "")
DownloadAndApply("${PATCH_URL}/${GITHUB_PULL_REQUEST}" "${IE_MAIN_SOURCE_DIR}/${TARGET_PATH}/${FILE_NAME}")
endfunction(resolve_pull_request)
function(extract_version_from_filename filename regex version)
string(REGEX MATCH ${regex} match ${filename})
if (CMAKE_MATCH_1)
set(${version} ${CMAKE_MATCH_1} PARENT_SCOPE)
else()
set(${version} ${filename} PARENT_SCOPE)
endif()
endfunction(extract_version_from_filename)
function(read_version archive regex version_var)
extract_version_from_filename(${archive} ${regex} version)
set(${version_var} "${version}" CACHE INTERNAL "" FORCE)
debug_message(STATUS "${version_var} = " ${version})
endfunction(read_version)
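#[[
Usage sketch for the version helpers above (the archive name and regex are
hypothetical): the first capture group of the regex becomes the version, e.g.
  read_version("mydep_ubuntu18_1.4.tgz" ".*_([0-9]+\\.[0-9]+)" "MYDEP_VERSION")
caches MYDEP_VERSION as "1.4"; if the regex does not match, the whole file name
is used as a fallback.
#]]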
function (RESOLVE_DEPENDENCY NAME_OF_CMAKE_VAR)
list(REMOVE_AT ARGV 0)
set(SUPPORTED_ARGS FOLDER ARCHIVE ARCHIVE_UNIFIED ARCHIVE_WIN ARCHIVE_LIN ARCHIVE_MAC ARCHIVE_ANDROID TARGET_PATH ENVIRONMENT GITHUB_PULL_REQUEST VERSION_REGEX)
#unnecessary vars
foreach(arg ${ARGV})
#message("one_arg=" ${one_arg})
#message("arg=" ${arg})
#parse no arg vars
if (";${SUPPORTED_ARGS};" MATCHES ";${arg};")
if(DEFINED one_arg)
set(${one_arg} TRUE)
endif()
set (one_arg ${arg})
elseif(DEFINED one_arg)
set(${one_arg} ${arg})
unset(one_arg)
else()
message(FATAL_ERROR "invalid argument passed to resolve dependency: " ${arg})
endif()
endforeach(arg)
#if last token was bool
if(DEFINED one_arg)
set(${one_arg} TRUE)
endif()
if (NOT DEFINED ARCHIVE)
SET(ARCHIVE "OFF")
endif()
if (NOT DEFINED ARCHIVE_UNIFIED)
SET(ARCHIVE_UNIFIED "OFF")
endif()
if (NOT DEFINED ARCHIVE_WIN)
SET(ARCHIVE_WIN "OFF")
endif()
if (NOT DEFINED ARCHIVE_LIN)
SET(ARCHIVE_LIN "OFF")
endif()
if (NOT DEFINED ARCHIVE_MAC)
SET(ARCHIVE_MAC "OFF")
endif()
if (NOT DEFINED ARCHIVE_ANDROID)
SET(ARCHIVE_ANDROID "OFF")
endif()
if (NOT DEFINED ENVIRONMENT)
set (ENVIRONMENT "OFF")
endif()
if (NOT DEFINED FOLDER)
set (FOLDER FALSE)
endif()
#for each dependency type have to do separate things
if (ARCHIVE_WIN OR ARCHIVE_LIN OR ARCHIVE_MAC OR ARCHIVE_ANDROID OR ARCHIVE OR ARCHIVE_UNIFIED)
if (NOT DEFINED TARGET_PATH)
message(FATAL_ERROR "TARGET_PATH should be defined for every dependency")
endif()
resolve_archive_dependency(RESULT ${NAME_OF_CMAKE_VAR} ${ARCHIVE} ${ARCHIVE_UNIFIED} ${ARCHIVE_WIN} ${ARCHIVE_LIN} ${ARCHIVE_MAC} ${ARCHIVE_ANDROID} ${TARGET_PATH} ${FOLDER} ${ENVIRONMENT})
set(${NAME_OF_CMAKE_VAR} ${RESULT} PARENT_SCOPE)
if (VERSION_REGEX)
GetNameAndUrlToDownload(archive RELATIVE_URL ${ARCHIVE_UNIFIED} ${ARCHIVE_WIN} ${ARCHIVE_LIN} ${ARCHIVE_MAC} ${ARCHIVE_ANDROID})
if (archive)
read_version(${archive} ${VERSION_REGEX} "${NAME_OF_CMAKE_VAR}_VERSION")
endif()
endif()
elseif (DEFINED GITHUB_PULL_REQUEST)
resolve_pull_request(${GITHUB_PULL_REQUEST} ${TARGET_PATH})
else()
message(FATAL_ERROR "Dependency of unknowntype, SHOULD set one of ARCHIVE_WIN, ARCHIVE, ARCHIVE_LIN, ARCHIVE_MAC, ARCHIVE_ANDROID, GITHUB_PULL_REQUEST")
endif()
endfunction(RESOLVE_DEPENDENCY)
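#[[
Usage sketch: how a dependency might be declared through RESOLVE_DEPENDENCY
(the variable, archive names and environment name below are hypothetical):
  RESOLVE_DEPENDENCY(MYDEP
          ENVIRONMENT "MYDEP_DIR"
          ARCHIVE_LIN "mydep_lin_1.0.tgz"
          ARCHIVE_WIN "mydep_win_1.0.zip"
          TARGET_PATH "${TEMP}/mydep"
          VERSION_REGEX ".*_([0-9.]+)")
Afterwards MYDEP holds the unpacked path (or the MYDEP_DIR override) and
MYDEP_VERSION is filled from the archive name when VERSION_REGEX matches.
#]]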
function (resolve_model_dependency network archive network_model_path)
RESOLVE_DEPENDENCY(${network_model_path}
ARCHIVE "models_archives/${archive}"
TARGET_PATH "${MODELS_PATH}/${network}")
string (REPLACE ${MODELS_PATH} "" relative_path ${${network_model_path}})
set(${network_model_path} ".${relative_path}" PARENT_SCOPE)
endfunction()
function(reset_deps_cache)
#
# Reset the dependencies cache if it was set by dependency solver
#
set(need_reset FALSE)
foreach(var_name IN LISTS ARGN)
if(DEFINED ${var_name})
if(${var_name} MATCHES ${TEMP})
set(need_reset TRUE)
endif()
endif()
endforeach()
foreach(var_name IN LISTS ARGN)
if(DEFINED ENV{${var_name}})
if($ENV{${var_name}} MATCHES ${TEMP})
set(need_reset TRUE)
endif()
endif()
endforeach()
if(need_reset)
foreach(var_name IN LISTS ARGN)
unset(${var_name} CACHE)
endforeach()
foreach(var_name IN LISTS ARGN)
unset(ENV{${var_name}})
endforeach()
endif()
endfunction()
function(update_deps_cache VAR_NAME INTERNAL_VALUE DOC_MSG)
#
# Update the variable value if it wasn't provided by the user
#
if(NOT DEFINED ${VAR_NAME} AND NOT DEFINED ENV{${VAR_NAME}})
# User didn't provide its own value, use INTERNAL_VALUE
set(${VAR_NAME} ${INTERNAL_VALUE} CACHE PATH ${DOC_MSG})
else()
# The variable was provided by the user, don't use INTERNAL_VALUE
if(NOT DEFINED ${VAR_NAME} AND DEFINED ENV{${VAR_NAME}})
# User provided the variable via environment, convert it to the CACHE variable
set(${VAR_NAME} $ENV{${VAR_NAME}} CACHE PATH ${DOC_MSG})
endif()
endif()
endfunction()
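#[[
Usage sketch (the variable and paths are hypothetical):
  update_deps_cache(MYDEP_ROOT "${TEMP}/mydep" "Path to MYDEP root directory")
caches the path only when the user has not provided one via the cache or the
environment, while
  reset_deps_cache(MYDEP_ROOT)
clears it again as long as it still points into the solver's TEMP directory.
#]]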

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2019 Intel Corporation
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2019 Intel Corporation
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2019 Intel Corporation
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

View File

@@ -1,53 +1,49 @@
# Copyright (C) 2018-2019 Intel Corporation
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include ("extract")
include ("download_and_check")
function (GetNameAndUrlToDownload name url archive_name_unified archive_name_win archive_name_lin archive_name_mac)
function (GetNameAndUrlToDownload name url archive_name_unified archive_name_win archive_name_lin archive_name_mac archive_name_android)
if (archive_name_unified)
set (${url} "${archive_name_unified}" PARENT_SCOPE)
set (${url} "thirdparty/unified/${archive_name_unified}" PARENT_SCOPE)
set (${name} ${archive_name_unified} PARENT_SCOPE)
else()
if (LINUX OR (APPLE AND NOT archive_name_mac))
if (NOT archive_name_lin)
return()
endif()
if(archive_name_lin)
set (PLATFORM_FOLDER linux)
set (archive_name ${archive_name_lin})
elseif(APPLE)
if (NOT archive_name_mac)
return()
endif()
elseif(archive_name_mac)
set (PLATFORM_FOLDER mac)
set (archive_name ${archive_name_mac})
else()
#if there is no dependency for the target platform, skip it
if (NOT archive_name_win)
return()
endif()
elseif(archive_name_android)
set (PLATFORM_FOLDER android)
set (archive_name ${archive_name_android})
elseif(archive_name_win)
set (PLATFORM_FOLDER windows)
set (archive_name ${archive_name_win})
else()
return()
endif()
set (${name} ${archive_name} PARENT_SCOPE)
set (${url} "${archive_name}" PARENT_SCOPE)
set (${url} "thirdparty/${PLATFORM_FOLDER}/${archive_name}" PARENT_SCOPE)
endif()
endfunction(GetNameAndUrlToDownload)
#download from the platform-specific folder on the share server
function (DownloadAndExtractPlatformSpecific
component
archive_name_unified
archive_name_win
archive_name_lin
archive_name_mac
unpacked_path
function (DownloadAndExtractPlatformSpecific
component
archive_name_unified
archive_name_win
archive_name_lin
archive_name_mac
archive_name_android
unpacked_path
result_path
folder)
GetNameAndUrlToDownload(archive_name RELATIVE_URL ${archive_name_unified} ${archive_name_win} ${archive_name_lin} ${archive_name_mac} )
GetNameAndUrlToDownload(archive_name RELATIVE_URL ${archive_name_unified} ${archive_name_win} ${archive_name_lin} ${archive_name_mac} ${archive_name_android} )
if (NOT archive_name OR NOT RELATIVE_URL)
return()
endif()
@@ -61,35 +57,35 @@ function (DownloadAndExtract component archive_name unpacked_path result_path fo
set (RELATIVE_URL "${archive_name}")
set(fattal TRUE)
CheckOrDownloadAndExtract(${component} ${RELATIVE_URL} ${archive_name} ${unpacked_path} result_path2 ${folder} ${fattal} result TRUE)
if (NOT ${result})
DownloadAndExtractPlatformSpecific(${component} ${archive_name} ${archive_name} ${archive_name} ${unpacked_path} ${result_path2} ${folder})
endif()
endif()
set (${result_path} ${result_path2} PARENT_SCOPE)
endfunction(DownloadAndExtract)
function (DownloadAndExtractInternal URL archive_path unpacked_path folder fattal result123)
function (DownloadAndExtractInternal URL archive_path unpacked_path folder fattal resultExt)
set (status "ON")
DownloadAndCheck(${URL} ${archive_path} ${fattal} result1)
if ("${result1}" STREQUAL "ARCHIVE_DOWNLOAD_FAIL")
#check alternative url as well
set (status "OFF")
file(REMOVE_RECURSE "${archive_path}")
endif()
if ("${result1}" STREQUAL "CHECKSUM_DOWNLOAD_FAIL" OR "${result1}" STREQUAL "HASH_MISMATCH")
set(status FALSE)
file(REMOVE_RECURSE "${archive_path}")
endif()
if("${status}" STREQUAL "ON")
ExtractWithVersion(${URL} ${archive_path} ${unpacked_path} ${folder} result)
endif()
set (result123 ${status} PARENT_SCOPE)
set (${resultExt} ${status} PARENT_SCOPE)
endfunction(DownloadAndExtractInternal)
@@ -98,36 +94,49 @@ function (ExtractWithVersion URL archive_path unpacked_path folder result)
debug_message("ExtractWithVersion : ${archive_path} : ${unpacked_path}")
extract(${archive_path} ${unpacked_path} ${folder} status)
#the archive is not needed after unpacking
file(REMOVE_RECURSE "${archive_path}")
if (${status})
set (version_file ${unpacked_path}/ie_dependency.info)
file(WRITE ${version_file} ${URL})
else()
file(REMOVE_RECURSE "${unpacked_path}")
message(FATAL_ERROR "Failed to extract the archive from ${URL}, archive ${archive_path} to folder ${unpacked_path}")
endif()
set (${result} ${status} PARENT_SCOPE)
endfunction (ExtractWithVersion)
function (DownloadOrExtractInternal URL archive_path unpacked_path folder fattal result123)
function (DownloadOrExtractInternal URL archive_path unpacked_path folder fattal resultExt)
debug_message("checking wether archive downloaded : ${archive_path}")
set (downloadStatus "NOTOK")
if (NOT EXISTS ${archive_path})
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
if (${result})
set (downloadStatus "OK")
endif()
else()
if (ENABLE_UNSAFE_LOCATIONS)
ExtractWithVersion(${URL} ${archive_path} ${unpacked_path} ${folder} result)
if(NOT ${result})
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
if (${result})
set (downloadStatus "OK")
endif()
endif()
else()
debug_message("archive found on FS : ${archive_path}, however we cannot check it's checksum and think that it is invalid")
file(REMOVE_RECURSE "${archive_path}")
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
endif()
if (${result})
set (downloadStatus "OK")
endif()
endif()
endif()
endif()
if (NOT ${downloadStatus} STREQUAL "OK")
message(FATAL_ERROR "Failed to download and extract the archive from ${URL}, archive ${archive_path} to folder ${unpacked_path}")
endif()
if (NOT ${result})
message(FATAL_ERROR "error: extract of '${archive_path}' failed")
@@ -137,15 +146,17 @@ endfunction(DownloadOrExtractInternal)
file(REMOVE ${CMAKE_BINARY_DIR}/dependencies_64.txt)
function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked_path result_path folder fattal result123 use_alternatives)
function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked_path result_path folder fattal resultExt use_alternatives)
set (archive_path ${TEMP}/download/${archive_name})
set (status "ON")
set (on_master FALSE)
if(DEFINED ENV{IE_PATH_TO_DEPS})
if(DEFINED IE_PATH_TO_DEPS)
set(URL "${IE_PATH_TO_DEPS}/${RELATIVE_URL}")
elseif(DEFINED ENV{IE_PATH_TO_DEPS})
set(URL "$ENV{IE_PATH_TO_DEPS}/${RELATIVE_URL}")
else()
set(URL "https://download.01.org/opencv/2019/openvinotoolkit/R3/inference_engine/${RELATIVE_URL}")
set(URL "https://download.01.org/opencv/master/openvinotoolkit/${RELATIVE_URL}")
endif()
#no message on recursive calls
@@ -159,7 +170,7 @@ function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked
if (NOT EXISTS ${unpacked_path})
DownloadOrExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} status)
else(NOT EXISTS ${unpacked_path})
#path exists, so check which version was unpacked
set (version_file ${unpacked_path}/ie_dependency.info)
@@ -176,7 +187,7 @@ function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked
"\trm -rf ${unpacked_path}\n"
"and rerun cmake.\n"
"If your dependency is fine, then execute:\n\techo ${URL} > ${unpacked_path}/ie_dependency.info\n")
# file(REMOVE_RECURSE "${unpacked_path}")
# DownloadOrExtractInternal(${URL} ${archive_path} ${unpacked_path} ${fattal} status)
else()
if (EXISTS ${version_file})
@@ -196,11 +207,11 @@ function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked
string(REPLACE ${TEMP} ${ALTERNATIVE_PATH} archive_path ${archive_path})
debug_message("dependency different: use local path for fetching updated version: ${alternative_path}")
CheckOrDownloadAndExtract(${component} ${RELATIVE_URL} ${archive_name} ${unpacked_path} ${result_path} ${folder} ${fattal} ${result123} FALSE)
CheckOrDownloadAndExtract(${component} ${RELATIVE_URL} ${archive_name} ${unpacked_path} ${result_path} ${folder} ${fattal} ${resultExt} FALSE)
else()
debug_message("dependency updated: download it again")
file(REMOVE_RECURSE "${unpacked_path}")
DownloadOrExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} status)
endif()
endif ()
@@ -208,11 +219,10 @@ function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked
endif()
if (${use_alternatives} OR ${on_master})
set (${result123} "${status}" PARENT_SCOPE)
set (${resultExt} "${status}" PARENT_SCOPE)
set (${result_path} ${unpacked_path} PARENT_SCOPE)
endif()
endfunction(CheckOrDownloadAndExtract)

View File

@@ -1,10 +1,9 @@
# Copyright (C) 2018-2019 Intel Corporation
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
function (extract archive_path unpacked_path folder result)
# Slurped from a generated extract-TARGET.cmake file.
if (NOT EXISTS ${unpacked_path})
get_filename_component(unpacked_dir ${unpacked_path} DIRECTORY)
file(MAKE_DIRECTORY ${unpacked_path})
@@ -40,6 +39,5 @@ function (extract archive_path unpacked_path folder result)
else()
set(${result} 1 PARENT_SCOPE)
endif()
endif()
endfunction (extract)

43
cmake/features.cmake Normal file
View File

@@ -0,0 +1,43 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include (target_flags)
include (options)
# these options are aimed at optimizing build time on a development system
if(X86_64)
set(ENABLE_MKL_DNN_DEFAULT ON)
else()
set(ENABLE_MKL_DNN_DEFAULT OFF)
endif()
ie_option (ENABLE_TESTS "unit, behavior and functional tests" OFF)
ie_option (ENABLE_MKL_DNN "MKL-DNN plugin for inference engine" ${ENABLE_MKL_DNN_DEFAULT})
ie_dependent_option (ENABLE_CLDNN "clDnn based plugin for inference engine" ON "WIN32 OR X86_64;NOT APPLE;NOT MINGW" OFF)
# FIXME: there are compiler failures with LTO and Cross-Compile toolchains. Disabling for now, but
# this must be addressed in a proper way
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF "LINUX OR WIN32;NOT CMAKE_CROSSCOMPILING" OFF)
ie_option (OS_FOLDER "create OS dedicated folder in output" OFF)
# FIXME: ARM cross-compiler generates several "false positive" warnings regarding __builtin_memcpy buffer overflow
ie_dependent_option (TREAT_WARNING_AS_ERROR "Treat build warnings as errors" ON "X86 OR X86_64" OFF)
ie_option (ENABLE_SANITIZER "enable checking memory errors via AddressSanitizer" OFF)
ie_option (ENABLE_THREAD_SANITIZER "enable checking data races via ThreadSanitizer" OFF)
ie_dependent_option (COVERAGE "enable code coverage" OFF "CMAKE_CXX_COMPILER_ID STREQUAL GNU" OFF)
# Define CPU capabilities
ie_dependent_option (ENABLE_SSE42 "Enable SSE4.2 optimizations" ON "X86_64 OR X86" OFF)
ie_dependent_option (ENABLE_AVX2 "Enable AVX2 optimizations" ON "X86_64 OR X86" OFF)
ie_dependent_option (ENABLE_AVX512F "Enable AVX512 optimizations" ON "X86_64 OR X86" OFF)

View File

@@ -1,10 +1,10 @@
# Copyright (C) 2018-2019 Intel Corporation
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
function(enable_fuzzing)
# Enable (libFuzzer)[https://llvm.org/docs/LibFuzzer.html] if supported.
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang" AND NOT WIN32)
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$" AND NOT WIN32)
# Communicate libfuzzer is enabled
set(WITH_LIBFUZZER ON PARENT_SCOPE)
add_compile_definitions(WITH_LIBFUZZER)

27
cmake/options.cmake Normal file
View File

@@ -0,0 +1,27 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Usage: ie_option(<option_variable> "description" <initial value or boolean expression> [IF <condition>])
include (CMakeDependentOption)
include (version)
macro (ie_option variable description value)
option(${variable} "${description}" ${value})
list(APPEND IE_OPTIONS ${variable})
endmacro()
macro (ie_dependent_option variable description def_value condition fallback_value)
cmake_dependent_option(${variable} "${description}" ${def_value} "${condition}" ${fallback_value})
list(APPEND IE_OPTIONS ${variable})
endmacro()
function (print_enabled_features)
message(STATUS "Inference Engine enabled features: ")
message(STATUS "")
message(STATUS " CI_BUILD_NUMBER: ${CI_BUILD_NUMBER}")
foreach(_var ${IE_OPTIONS})
message(STATUS " ${_var} = ${${_var}}")
endforeach()
message(STATUS "")
endfunction()
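#[[
Usage sketch (the option names are hypothetical): ENABLE_MY_SUBFEATURE keeps its
default only while ENABLE_MY_FEATURE is ON and falls back to OFF otherwise; both
options then show up in the print_enabled_features() report.
  ie_option (ENABLE_MY_FEATURE "example feature" ON)
  ie_dependent_option (ENABLE_MY_SUBFEATURE "example sub-feature" ON "ENABLE_MY_FEATURE" OFF)
  print_enabled_features()
#]]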

297
cmake/os_flags.cmake Normal file
View File

@@ -0,0 +1,297 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include(ProcessorCount)
#
# Disables generation of deprecated warnings
# Defines the ie_c_cxx_deprecated variable which contains C / C++ compiler flags
#
macro(disable_deprecated_warnings)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(ie_c_cxx_deprecated "/Qdiag-disable:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(ie_c_cxx_deprecated "-diag-disable=1478,1786")
else()
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
endif()
endif()
if(NOT ie_c_cxx_deprecated)
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${ie_c_cxx_deprecated}")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${ie_c_cxx_deprecated}")
endmacro()
#
# Don't treat deprecated warnings as errors
# Defines the ie_c_cxx_deprecated_no_errors variable which contains C / C++ compiler flags
#
macro(ie_deprecated_no_errors)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(ie_c_cxx_deprecated "/Qdiag-warning:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(ie_c_cxx_deprecated_no_errors "-diag-warning=1478,1786")
else()
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
endif()
if(NOT ie_c_cxx_deprecated_no_errors)
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${ie_c_cxx_deprecated_no_errors}")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${ie_c_cxx_deprecated_no_errors}")
endmacro()
#
# Provides SSE4.2 compilation flags depending on an OS and a compiler
#
function(ie_sse42_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# No such option for MSVC 2019
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} "/arch:SSE4.2 /QxSSE4.2" PARENT_SCOPE)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} "-msse4.2 -xSSE4.2" PARENT_SCOPE)
else()
set(${flags} "-msse4.2" PARENT_SCOPE)
endif()
endif()
endfunction()
#
# Provides AVX2 compilation flags depending on an OS and a compiler
#
function(ie_avx2_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} "/QxCORE-AVX2" PARENT_SCOPE)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} "/arch:AVX2" PARENT_SCOPE)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} "-march=core-avx2 -xCORE-AVX2 -mtune=core-avx2" PARENT_SCOPE)
else()
set(${flags} "-mavx2 -mfma" PARENT_SCOPE)
endif()
endif()
endfunction()
#
# Provides common AVX512 compilation flags for AVX512F instruction set support
# depending on an OS and a compiler
#
function(ie_avx512_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} "/QxCOMMON-AVX512" PARENT_SCOPE)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} "/arch:AVX512" PARENT_SCOPE)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} "-xCOMMON-AVX512" PARENT_SCOPE)
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(${flags} "-mavx512f -mfma" PARENT_SCOPE)
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Clang|AppleClang)$")
set(${flags} "-mavx512f -mfma" PARENT_SCOPE)
endif()
endif()
endfunction()
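#[[
Usage sketch (the source file name is hypothetical): the flags are returned
through the output variable and are typically attached per source file, so only
the AVX2-specific kernels are built with the extended instruction set:
  ie_avx2_optimization_flags(avx2_flags)
  set_source_files_properties(my_kernel_avx2.cpp PROPERTIES COMPILE_FLAGS "${avx2_flags}")
#]]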
#
# Enables Link Time Optimization compilation
#
macro(ie_enable_lto)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel" AND OFF)
ProcessorCount(N)
if(UNIX)
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -ipo")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -ipo")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -ipo-jobs${N}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} -ipo-jobs${N}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} -ipo-jobs${N}")
else()
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /Qipo")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /Qipo")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} /Qipo-jobs:${N}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /Qipo-jobs:${N}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} /Qipo-jobs:${N}")
endif()
elseif(UNIX)
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -flto")
# LTO causes issues with gcc 4.8.5 during cmake pthread check
if(NOT CMAKE_C_COMPILER_VERSION VERSION_LESS 4.9)
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -flto")
endif()
# modify linker and ar
if(LINUX)
set(CMAKE_AR "gcc-ar")
set(CMAKE_RANLIB "gcc-ranlib")
endif()
elseif(MSVC AND OFF)
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /GL")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /GL")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} /LTCG:STATUS")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /LTCG:STATUS")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} /LTCG:STATUS")
endif()
endmacro()
#
# Adds compiler flags to C / C++ sources
#
macro(ie_add_compiler_flags)
foreach(flag ${ARGN})
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${flag}")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${flag}")
endforeach()
endmacro()
#
# Compilation and linker flags
#
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
set(THREADS_PREFER_PTHREAD_FLAG ON)
# allow overriding CMAKE_CXX_STANDARD from the command line
if(NOT DEFINED CMAKE_CXX_STANDARD)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(CMAKE_CXX_STANDARD 14)
else()
set(CMAKE_CXX_STANDARD 11)
endif()
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
endif()
if(ENABLE_COVERAGE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} --coverage")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} --coverage")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --coverage")
endif()
if(NOT MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsigned-char")
endif()
set(CMAKE_POLICY_DEFAULT_CMP0063 NEW)
set(CMAKE_CXX_VISIBILITY_PRESET hidden)
set(CMAKE_C_VISIBILITY_PRESET hidden)
set(CMAKE_VISIBILITY_INLINES_HIDDEN ON)
if(WIN32)
ie_add_compiler_flags(-D_CRT_SECURE_NO_WARNINGS -D_SCL_SECURE_NO_WARNINGS)
ie_add_compiler_flags(/EHsc) # no asynchronous structured exception handling
ie_add_compiler_flags(/Gy) # remove unreferenced functions: function level linking
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /LARGEADDRESSAWARE")
if (TREAT_WARNING_AS_ERROR)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
ie_add_compiler_flags(/WX)
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# ie_add_compiler_flags(/WX) # Too many warnings
endif()
endif()
# Compiler specific flags
ie_add_compiler_flags(/bigobj)
# Disable noisy warnings
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
# 161 unrecognized pragma
# 177 variable was declared but never referenced
# 556 not matched type of assigned function pointer
# 1744: field of class type without a DLL interface used in a class with a DLL interface
# 2586 decorated name length exceeded, name was truncated
# 2651: attribute does not apply to any entity
# 3180 unrecognized OpenMP pragma
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
# 15335 was not vectorized: vectorization possible but seems inefficient. Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:161,177,556,1744,2586,2651,3180,11075,15335)
endif()
# Debug information flags
set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /Z7")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /Z7")
else()
# TODO: enable for C sources as well
# ie_add_compiler_flags(-Werror)
if(TREAT_WARNING_AS_ERROR)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror")
endif()
ie_add_compiler_flags(-ffunction-sections -fdata-sections)
ie_add_compiler_flags(-fdiagnostics-show-option)
ie_add_compiler_flags(-Wundef)
# Disable noisy warnings
if (CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
ie_add_compiler_flags(-Wswitch)
elseif(UNIX)
ie_add_compiler_flags(-Wuninitialized -Winit-self)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
ie_add_compiler_flags(-Wno-error=switch)
else()
ie_add_compiler_flags(-Wmaybe-uninitialized)
endif()
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
ie_add_compiler_flags(-diag-disable=remark)
# noisy warnings from Intel Compiler 19.1.1.217 20200306
ie_add_compiler_flags(-diag-disable=2196)
endif()
# Linker flags
if(APPLE)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,-dead_strip")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} -Wl,-dead_strip")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,-dead_strip")
elseif(LINUX)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,--gc-sections -Wl,--exclude-libs,ALL")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} -Wl,--gc-sections -Wl,--exclude-libs,ALL")
endif()
endif()

43
cmake/sanitizer.cmake Normal file
View File

@@ -0,0 +1,43 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include(CheckCXXCompilerFlag)
if (ENABLE_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "-g -fsanitize=address -fno-omit-frame-pointer")
CHECK_CXX_COMPILER_FLAG("-fsanitize-recover=address" SANITIZE_RECOVER_SUPPORTED)
if (SANITIZE_RECOVER_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=address")
endif()
set(SANITIZER_LINKER_FLAGS "-fsanitize=address")
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=gold")
elseif(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$" AND NOT WIN32)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SANITIZER_COMPILER_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SANITIZER_COMPILER_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${SANITIZER_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} ${SANITIZER_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${SANITIZER_LINKER_FLAGS}")
endif()
if (ENABLE_THREAD_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "-g -fsanitize=thread -fno-omit-frame-pointer")
set(SANITIZER_LINKER_FLAGS "-fsanitize=thread")
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$" AND NOT WIN32)
if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.0)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
else()
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -static-libsan")
endif()
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SANITIZER_COMPILER_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SANITIZER_COMPILER_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${SANITIZER_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} ${SANITIZER_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${SANITIZER_LINKER_FLAGS}")
endif()

45
cmake/sdl.cmake Normal file
View File

@@ -0,0 +1,45 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if (CMAKE_BUILD_TYPE STREQUAL "Release")
if(UNIX)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wformat -Wformat-security")
if (NOT ENABLE_SANITIZER)
# ASan does not support fortification https://github.com/google/sanitizers/issues/247
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
endif()
if(NOT APPLE)
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -pie")
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
else()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
endif()
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -s")
endif()
elseif(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wl,--strip-all")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -z noexecstack -z relro -z now")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /sdl /guard:cf")
endif()
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${IE_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${IE_C_CXX_FLAGS}")
endif()

35
cmake/target_flags.cmake Normal file
View File

@@ -0,0 +1,35 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Target system specific flags
if(CMAKE_CL_64)
set(MSVC64 ON)
endif()
if(WIN32 AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -dumpmachine
OUTPUT_VARIABLE OPENVINO_GCC_TARGET_MACHINE
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(OPENVINO_GCC_TARGET_MACHINE MATCHES "amd64|x86_64|AMD64")
set(MINGW64 ON)
endif()
endif()
if(MSVC64 OR MINGW64)
set(X86_64 ON)
elseif(MINGW OR (MSVC AND NOT CMAKE_CROSSCOMPILING))
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*)")
set(AARCH64 ON)
endif()
if(UNIX AND NOT APPLE)
set(LINUX ON)
endif()

43
cmake/version.cmake Normal file
View File

@@ -0,0 +1,43 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
function (branchName VAR)
execute_process(
COMMAND git rev-parse --abbrev-ref HEAD
WORKING_DIRECTORY ${OpenVINO_MAIN_SOURCE_DIR}
OUTPUT_VARIABLE GIT_BRANCH
OUTPUT_STRIP_TRAILING_WHITESPACE)
set (${VAR} ${GIT_BRANCH} PARENT_SCOPE)
endfunction()
function (commitHash VAR)
execute_process(
COMMAND git rev-parse HEAD
WORKING_DIRECTORY ${OpenVINO_MAIN_SOURCE_DIR}
OUTPUT_VARIABLE GIT_COMMIT_HASH
OUTPUT_STRIP_TRAILING_WHITESPACE)
set (${VAR} ${GIT_COMMIT_HASH} PARENT_SCOPE)
endfunction()
if (DEFINED ENV{CI_BUILD_NUMBER})
set(CI_BUILD_NUMBER $ENV{CI_BUILD_NUMBER})
else()
branchName(GIT_BRANCH)
commitHash(GIT_COMMIT_HASH)
set(custom_build "custom_${GIT_BRANCH}_${GIT_COMMIT_HASH}")
set(CI_BUILD_NUMBER "${custom_build}")
endif()
function (addVersionDefines FILE)
foreach (VAR ${ARGN})
if (DEFINED ${VAR} AND NOT "${${VAR}}" STREQUAL "")
set_property(
SOURCE ${FILE}
APPEND
PROPERTY COMPILE_DEFINITIONS
${VAR}="${${VAR}}")
endif()
endforeach()
endfunction()
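#[[
Usage sketch (the source file name is hypothetical): attach the CI build number
as a compile definition of a single translation unit, so only that file is
recompiled when the value changes:
  addVersionDefines(src/ie_version.cpp CI_BUILD_NUMBER)
#]]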

53
cmake/whole_archive.cmake Normal file
View File

@@ -0,0 +1,53 @@
# Copyright (C) 2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
#[[
Links static libraries into a target without removing any symbols from them.
ieTargetLinkWholeArchive(<target name> <lib1> [<lib2> ...])
Example:
ieTargetLinkWholeArchive("MyriadFunctionalTests" "CommonLib" "AnotherLib")
#]]
function(ieTargetLinkWholeArchive targetName)
set(libs)
foreach(staticLib ${ARGN})
if (MSVC)
# CMake does not support generator expressions in LINK_FLAGS, so we work around it a little bit:
# pass the same static library as a normal link (to get build deps and includes working), then use the WHOLEARCHIVE option.
# It is important here not to use a slash '/' for the option!
if (CMAKE_GENERATOR MATCHES "Visual Studio")
# MSBuild is unhappy when parsing double quotes in combination with WHOLEARCHIVE flag.
# remove quotes from the path - build paths with spaces are not supported, but it's better than nothing.
list(APPEND libs ${staticLib}
"-WHOLEARCHIVE:$<TARGET_FILE:${staticLib}>"
)
if (CMAKE_CURRENT_BINARY_DIR MATCHES " ")
message(WARNING "Visual Studio CMake generator may cause problems if your build directory contains spaces. "
"Remove spaces from path or select different generator.")
endif()
else()
list(APPEND libs ${staticLib}
"-WHOLEARCHIVE:\"$<TARGET_FILE:${staticLib}>\""
)
endif()
elseif(APPLE)
list(APPEND libs
"-Wl,-all_load"
${staticLib}
"-Wl,-noall_load"
)
else()
list(APPEND libs
"-Wl,--whole-archive"
${staticLib}
"-Wl,--no-whole-archive"
)
endif()
endforeach()
if (libs)
target_link_libraries(${targetName} PRIVATE ${libs})
endif()
endfunction()

60
docs/CMakeLists.txt Normal file
View File

@@ -0,0 +1,60 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
add_subdirectory(examples)
# Detect nGraph
find_package(ngraph QUIET)
if(NOT ngraph_FOUND)
set(ngraph_DIR ${CMAKE_BINARY_DIR}/ngraph)
endif()
# Detect InferenceEngine
find_package(InferenceEngine QUIET)
if(NOT InferenceEngine_FOUND)
set(InferenceEngine_DIR ${CMAKE_BINARY_DIR})
endif()
add_subdirectory(template_extension)
set(all_docs_targets
ie_docs_examples
template_extension
templatePlugin TemplateBehaviorTests TemplateFunctionalTests)
foreach(target_name IN LISTS all_docs_targets)
if (TARGET ${target_name})
set_target_properties(${target_name} PROPERTIES FOLDER docs)
endif()
endforeach()
# OpenVINO docs
set(OPENVINO_DOCS_PATH "" CACHE PATH "Path to openvino-documentation local repository")
set(args "")
if(OPENVINO_DOCS_PATH)
set(args "${args} ovinodoc_path:${OPENVINO_DOCS_PATH}")
endif()
file(GLOB_RECURSE docs_files "${OpenVINO_MAIN_SOURCE_DIR}/docs")
file(GLOB_RECURSE include_files "${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/include")
file(GLOB_RECURSE ovino_files "${OPENVINO_DOCS_PATH}")
add_custom_target(ie_docs
COMMAND ./build_docs.sh ${args}
WORKING_DIRECTORY "${OpenVINO_MAIN_SOURCE_DIR}/docs/build_documentation"
COMMENT "Generating OpenVINO documentation"
SOURCES ${docs_files} ${include_files} ${ovino_files}
VERBATIM)
set_target_properties(ie_docs PROPERTIES FOLDER docs)
find_program(browser NAMES xdg-open)
if(browser)
add_custom_target(ie_docs_open
COMMAND ${browser} "${OpenVINO_MAIN_SOURCE_DIR}/doc/html/index.html"
DEPENDS ie_docs
COMMENT "Open OpenVINO documentation"
VERBATIM)
set_target_properties(ie_docs_open PROPERTIES FOLDER docs)
endif()

View File

@@ -0,0 +1,212 @@
# Custom Layers Guide {#openvino_docs_HOWTO_Custom_Layers_Guide}
The Intel® Distribution of OpenVINO™ toolkit supports neural network model layers in multiple frameworks including TensorFlow*, Caffe*, MXNet*, Kaldi* and ONNX*. The list of known layers is different for each of the supported frameworks. To see the layers supported by your framework, refer to [supported frameworks](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
Custom layers are layers that are not included in the list of known layers. If your topology contains any layers that are not in the list of known layers, the Model Optimizer classifies them as custom.
This guide illustrates the workflow for running inference on topologies featuring custom layers, allowing you to plug in your own implementation for existing or completely new layers.
For a step-by-step example of creating and executing a custom layer, see the [Custom Layer Implementation Tutorials for Linux and Windows.](https://github.com/david-drew/OpenVINO-Custom-Layers/tree/master/2019.r2.0)
## Terms used in this guide
- *Layer* — The abstract concept of a math function that is selected for a specific purpose (relu, sigmoid, tanh, convolutional). This is one of a sequential series of building blocks within the neural network.
- *Kernel* — The implementation of a layer function, in this case, the math programmed (in C++ and Python) to perform the layer operation for target hardware (CPU or GPU).
- *Intermediate Representation (IR)* — The neural network format used only by the Inference Engine in OpenVINO; it abstracts away the different frameworks and describes the model topology, layer parameters, and weights.
The original model comes from a supported framework such as TensorFlow, Caffe, or MXNet.
- *Model Extension Generator* — Generates template source code files for each of the extensions needed by the Model Optimizer and the Inference Engine.
- *Inference Engine Extension* — Device-specific module implementing custom layers (a set of kernels).
## Custom Layer Overview
The [Model Optimizer](https://docs.openvinotoolkit.org/2019_R1.1/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) searches the list of known layers for each layer contained in the input model topology before building the model's internal representation, optimizing the model, and producing the Intermediate Representation files.
The [Inference Engine](https://docs.openvinotoolkit.org/2019_R1.1/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) loads the layers from the input model IR files into the specified device plugin, which will search a list of known layer implementations for the device. If your topology contains layers that are not in the list of known layers for the device, the Inference Engine considers the layer to be unsupported and reports an error. To see the layers that are supported by each device plugin for the Inference Engine, refer to the [Supported Devices](https://docs.openvinotoolkit.org/2019_R1.1/_docs_IE_DG_supported_plugins_Supported_Devices.html) documentation.
<br>
**Note:** If a device doesn't support a particular layer, an alternative to creating a new custom layer is to target an additional device using the HETERO plugin. The [Heterogeneous Plugin](https://docs.openvinotoolkit.org/2019_R1.1/_docs_IE_DG_supported_plugins_HETERO.html) may be used to run an inference model on multiple devices allowing the unsupported layers on one device to "fallback" to run on another device (e.g., CPU) that does support those layers.
## Custom Layer Implementation Workflow
When implementing a custom layer for your pre-trained model in the Intel® Distribution of OpenVINO™ toolkit, you will need to add extensions to both the Model Optimizer and the Inference Engine.
## Custom Layer Extensions for the Model Optimizer
The following figure shows the basic processing steps for the Model Optimizer highlighting the two necessary custom layer extensions, the Custom Layer Extractor and the Custom Layer Operation.
![](img/MO_extensions_flow.png)
The Model Optimizer first extracts information from the input model which includes the topology of the model layers along with parameters, input and output format, etc., for each layer. The model is then optimized from the various known characteristics of the layers, interconnects, and data flow which partly comes from the layer operation providing details including the shape of the output for each layer. Finally, the optimized model is output to the model IR files needed by the Inference Engine to run the model.
The Model Optimizer starts with a library of known extractors and operations for each [supported model framework](https://docs.openvinotoolkit.org/2019_R1.1/_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html) which must be extended to use each unknown custom layer. The custom layer extensions needed by the Model Optimizer are:
- Custom Layer Extractor
- Responsible for identifying the custom layer operation and extracting the parameters for each instance of the custom layer. The layer parameters are stored per instance and used by the layer operation before finally appearing in the output IR. Typically the input layer parameters are unchanged, which is the case covered by this tutorial.
- Custom Layer Operation
- Responsible for specifying the attributes that are supported by the custom layer and computing the output shape for each instance of the custom layer from its parameters. <br> The `--mo-op` command-line argument shown in the examples below generates a custom layer operation for the Model Optimizer.
## Custom Layer Extensions for the Inference Engine
The following figure shows the basic flow for the Inference Engine highlighting two custom layer extensions for the CPU and GPU Plugins, the Custom Layer CPU extension and the Custom Layer GPU Extension.
![](img/IE_extensions_flow.png)
Each device plugin includes a library of optimized implementations to execute known layer operations which must be extended to execute a custom layer. The custom layer extension is implemented according to the target device:
- Custom Layer CPU Extension
- A compiled shared library (.so or .dll binary) needed by the CPU Plugin for executing the custom layer on the CPU.
- Custom Layer GPU Extension
- OpenCL source code (.cl) for the custom layer kernel that will be compiled to execute on the GPU along with a layer description file (.xml) needed by the GPU Plugin for the custom layer kernel.
## Model Extension Generator
Using answers to interactive questions or a *.json* configuration file, the Model Extension Generator tool generates template source code files for each of the extensions needed by the Model Optimizer and the Inference Engine. To complete the implementation of each extension, the template functions may need to be edited to fill in details specific to the custom layer or the actual custom layer functionality itself.
### Command-line
The Model Extension Generator is included in the Intel® Distribution of OpenVINO™ toolkit installation and is run using the command (here with the "--help" option):
```bash
python3 /opt/intel/openvino/deployment_tools/tools/extension_generator/extgen.py new --help
```
where the output will appear similar to:
```
usage: You can use any combination of the following arguments:
Arguments to configure extension generation in the interactive mode:
optional arguments:
-h, --help show this help message and exit
--mo-caffe-ext generate a Model Optimizer Caffe* extractor
--mo-mxnet-ext generate a Model Optimizer MXNet* extractor
--mo-tf-ext generate a Model Optimizer TensorFlow* extractor
--mo-op generate a Model Optimizer operation
--ie-cpu-ext generate an Inference Engine CPU extension
--ie-gpu-ext generate an Inference Engine GPU extension
--output_dir OUTPUT_DIR
set an output directory. If not specified, the current
directory is used by default.
```
The available command-line arguments specify which extension template(s) to generate for the Model Optimizer or the Inference Engine. The generated extension files for each argument appear in the output directory as follows:
Command-line Argument | Output Directory Location |
--------------------- | ------------------------------ |
`--mo-caffe-ext` | user_mo_extensions/front/caffe |
`--mo-mxnet-ext` | user_mo_extensions/front/mxnet |
`--mo-tf-ext` | user_mo_extensions/front/tf |
`--mo-op` | user_mo_extensions/ops |
`--ie-cpu-ext` | user_ie_extensions/cpu |
`--ie-gpu-ext` | user_ie_extensions/gpu |
### Extension Workflow
The workflow for each generated extension follows the same basic steps:
![](img/MEG_generic_flow.png)
**Step 1: Generate:** Use the Model Extension Generator to generate the Custom Layer Template Files.
**Step 2: Edit:** Edit the Custom Layer Template Files as necessary to create the specialized Custom Layer Extension Source Code.
**Step 3: Specify:** Specify the custom layer extension locations to be used by the Model Optimizer or Inference Engine.
## Caffe\* Models with Custom Layers <a name="caffe-models-with-custom-layers"></a>
If your Caffe\* model has custom layers:
**Register the custom layers as extensions to the Model Optimizer**. For instructions, see [Extending Model Optimizer with New Primitives](../MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md). When your custom layers are registered as extensions, the Model Optimizer generates a valid and optimized Intermediate Representation. You will need a bit of Python\* code that lets the Model Optimizer:
- Generate a valid Intermediate Representation according to the rules you specified.
- Be independent from the availability of Caffe on your computer.
If your model contains Custom Layers, it is important to understand the internal workflow of the Model Optimizer. Consider the following example.
**Example**:
The network has:
* One input layer (#1)
* One output Layer (#5)
* Three internal layers (#2, 3, 4)
The custom and standard layer types are:
* Layers #2 and #5 are implemented as Model Optimizer extensions.
* Layers #1 and #4 are supported in Model Optimizer out-of-the box.
* Layer #3 is neither in the list of supported layers nor in extensions, but is specified in CustomLayersMapping.xml.
> **NOTE**: If any of the layers are not in one of three categories described above, the Model Optimizer fails with an appropriate message and a link to the corresponding question in [Model Optimizer FAQ](../MO_DG/prepare_model/Model_Optimizer_FAQ.md).
The general process is as shown:
![Example custom layer network](img/mo_caffe_priorities.png)
<br>
**Step 1:** The example model is fed to the Model Optimizer that **loads the model** with the special parser built on top of the `caffe.proto` file. In case of failure, the Model Optimizer asks you to prepare the parser that can read the model. For more information, refer to the Model Optimizer, <a href="MO_FAQ.html#FAQ1">FAQ #1</a>.
**Step 2:** The Model Optimizer **extracts the attributes of all layers** by going through the list of layers and attempting to find the appropriate extractor. In order of priority, the Model Optimizer checks if the layer is:
* A. Registered as a Model Optimizer extension
* B. Registered as a standard Model Optimizer layer
When the Model Optimizer finds a satisfying condition from the list above, it extracts the attributes according to the following rules:
* For A. - takes only the parameters specified in the extension
* For B. - takes only the parameters specified in the standard extractor
<br>
**Step 3:** The Model Optimizer **calculates the output shape of all layers**. The logic is the same as it is for the priorities. **Important:** the Model Optimizer always takes the first available option.
**Step 4:** The Model Optimizer **optimizes the original model and produces the two Intermediate Representation (IR) files in .xml and .bin**.
<br>
## TensorFlow\* Models with Custom Layers <a name="Tensorflow-models-with-custom-layers"></a>
You have two options for TensorFlow\* models with custom layers:
<br>
* **Register those layers as extensions to the Model Optimizer.** In this case, the Model Optimizer generates a valid and optimized Intermediate Representation.
* **If you have sub-graphs that should not be expressed with the analogous sub-graph in the Intermediate Representation, but another sub-graph should appear in the model, the Model Optimizer provides such an option.** This feature is helpful for many TensorFlow models. To read more, see [Sub-graph Replacement in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
## MXNet\* Models with Custom Layers <a name="mxnet-models-with-custom-layers"></a>
There are two options to convert your MXNet* model that contains custom layers:
1. Register the custom layers as extensions to the Model Optimizer. For instructions, see [Extending MXNet Model Optimizer with New Primitives](../MO_DG/prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md). When your custom layers are registered as extensions, the Model Optimizer generates a valid and optimized Intermediate Representation. You can create Model Optimizer extensions for both MXNet layers with op `Custom` and layers which are not standard MXNet layers.
2. If you have sub-graphs that should not be expressed with the analogous sub-graph in the Intermediate Representation, but another sub-graph should appear in the model, the Model Optimizer provides such an option. In MXNet, this feature is actively used for SSD models: it makes it possible to find the necessary sub-graph sequences and replace them. To read more, see [Sub-graph Replacement in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
## Kaldi\* Models with Custom Layers <a name="Kaldi-models-with-custom-layers"></a>
For information on converting your Kaldi* model containing custom layers see [Converting a Kaldi Model in the Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi.html).
## ONNX\* Models with Custom Layers <a name="ONNX-models-with-custom-layers"></a>
For information on converting your ONNX* model containing custom layers see [Converting an ONNX Model in the Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX.html).
## Step-by-Step Custom Layers Tutorial
For a step-by-step walk-through creating and executing a custom layer, see [Custom Layer Implementation Tutorial for Linux and Windows.](https://github.com/david-drew/OpenVINO-Custom-Layers/tree/master/2019.r2.0)
## Additional Resources
- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
- [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
- [Kernel Extensibility in the Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Integrate_your_kernels_into_IE.html)
- [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](https://docs.openvinotoolkit.org/latest/_intel_models_index.html)
- [Inference Engine Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
## Converting Models:
- [Convert Your Caffe* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md)
- [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
- [Convert Your MXNet* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md)
- [Convert Your ONNX* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md)

View File

@@ -0,0 +1,83 @@
# Regression tests howto {#openvino_docs_HOWTO_add_regression_test_vpu}
## Purpose
This document contains instructions for correctly modifying a set of regression tests.
## Common
Regression tests for the Myriad and HDDL plugins are located at:
`inference-engine/tests/functional/vpu/regression_tests/`
The tests are divided into the following groups:
* Classification
* Detection
* Raw-results
* Compilation
* VPU hetero
The testing framework is [Google Test](https://github.com/google/googletest/).
Each group contains [parameterized](https://github.com/google/googletest/blob/master/googletest/docs/advanced.md) tests. The main idea is that, to add a new test, you only need to add a new parameter, except for scenarios that differ from the generalized case.
## Classification and Detection tests
These groups contain two cases:
* For the generalized scenario (`VpuNoClassificationRegression`, `VpuNoDetectionRegression`)
* For specific scenarios (`VpuNoClassificationRegressionSpecific`, `VpuNoDetectionRegressionSpecific`)
### Generalized scenario
If you want to test a new parameter (batch, precision, model, etc.), you need to edit the existing initialization of the parameterized tests or create a new one.
Example of initialization of parameterized tests:
``` c++
INSTANTIATE_TEST_CASE_P(
VPURegTestWithResources_nightly,
VpuNoClassificationRegression,
Combine(ValuesIn(VpuTestParamsContainer::testingPlugin()),
Values(Precision::FP16),
Values(1), // batches
Values(true), //IsHwAdaptiveMode
Values(false), //DoReshape
Values(3, 5, 7), //Resources
Values(false), //IsIgnoreStatistic
Values(ClassificationSrcParam{ModelName::GoogleNetV1, SourceImages::kCat3, 0.01, Regression::EMean::eValues})),
VpuNoClassificationRegression::getTestCaseName);
```
### Specific scenario
If you need a test to perform actions that are not covered by the generalized scenario, add a specific test case. As with the generalized scenario, you can change the parameters for these tests.
Example of specific test case:
``` c++
TEST_P(VpuNoClassificationRegressionSpecific, onAlexNetWithNetworkConfig) {
DISABLE_ON_WINDOWS_IF(HDDL_PLUGIN);
DISABLE_IF(do_reshape_);
if (!hw_adaptive_mode_) {
config_[VPU_CONFIG_KEY(NETWORK_CONFIG)] = "data=data,scale=1";
}
assertThat().classificationResultsForInferRequestAPI()
.on(SourceImages::kDog2)
.withInputPrecision(in_precision_)
.times(batch_)
.withBatch(batch_)
.onModel(ModelName::AlexNet)
.setMean(Regression::EMean::eImage)
.onFP16()
.withTopK(1)
.withPluginConfig(config_)
.equalToReferenceWithDelta(0.04);
}
```
## Raw-results tests
There is no generalized scenario; the recommendations are the same as for the specific test cases of the Classification/Detection groups.
## Compilation tests
The tests are in the `vpu_classification_regression.cpp` file and contain only one scenario, `VpuNoRegressionWithCompilation`. To add a new test, update the parameters just as in the generalized scenario of the Classification/Detection test groups.
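A hypothetical sketch of such an initialization, following the pattern of the classification example above (the exact parameter list of `VpuNoRegressionWithCompilation` is an assumption and may differ in the real suite):
``` c++
// Illustrative only: instantiate the compilation scenario with an assumed
// parameter set (plugin and model name). Check the real test fixture for the
// exact Combine() signature before reusing this.
INSTANTIATE_TEST_CASE_P(
    VPURegTestCompilation_nightly,
    VpuNoRegressionWithCompilation,
    Combine(ValuesIn(VpuTestParamsContainer::testingPlugin()),
            Values(ModelName::GoogleNetV1)),  // model to compile
    VpuNoRegressionWithCompilation::getTestCaseName);
```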

View File

@@ -0,0 +1,94 @@
# Fuzzing howto {#openvino_docs_HOWTO_fuzzing_HOWTO}
## Intended Audience
This document is for a developer who wants to contribute fuzz tests.
## Purpose
This document walks you through creating your first fuzzer, running it and evaluating its quality.
## Prerequisites
- Linux OS or Mac OS.
- [American Fuzzy Lop (AFL)](http://lcamtuf.coredump.cx/afl/) if building with GCC.
## Steps
1. Create a fuzz test in the existing project at `./tests/fuzz`. The fuzz test must
follow the `<test name>-fuzzer.cc` naming scheme and implement an
`LLVMFuzzerTestOneInput` entry point.
``` bash
cat << EOF > ./tests/fuzz/test_name-fuzzer.cc
#include <stdint.h>
#include <cstdlib>
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
// put your fuzzing code here and use data+size as input.
return 0; // always return 0
}
EOF
```
2. Implement test logic under `LLVMFuzzerTestOneInput`.
See example fuzz test at `tests/fuzz/read_network-fuzzer.cc`.
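For illustration only, a fuzz target that feeds the raw fuzzer input into model reading might look like the sketch below. This is not the contents of `read_network-fuzzer.cc`; the included header and the error handling are assumptions.
``` c++
// Hypothetical sketch: pass raw fuzzer input to Core::ReadNetwork as model text.
// Malformed inputs are expected to throw; only crashes and hangs are findings.
#include <stdint.h>
#include <string>
#include <inference_engine.hpp>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    try {
        InferenceEngine::Core core;
        std::string model(reinterpret_cast<const char*>(data), size);
        core.ReadNetwork(model, InferenceEngine::Blob::CPtr());  // no weights blob
    } catch (...) {
        // Exceptions on invalid input are expected; do not report them.
    }
    return 0;  // always return 0
}
```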
3. Build fuzz tests with `-DENABLE_FUZZING=ON` flag for cmake.
``` bash
mkdir -p build && \
(cd build && \
CXX=afl-g++ CC=afl-gcc cmake -DCMAKE_BUILD_TYPE=Debug -DENABLE_FUZZING=ON -DENABLE_TESTS=ON .. && \
make fuzz --jobs=$(getconf _NPROCESSORS_ONLN))
```
4. Prepare sample inputs for your fuzz test to teach the fuzzing engine about the input
structure:
``` bash
(cd bin/intel64/Debug && \
mkdir test_name-corpus && \
echo sample input > test_name-corpus/in1.txt)
```
5. Evaluate the fuzz test with the `afl-fuzz` fuzzing engine.
Run the fuzz test:
``` bash
(cd bin/intel64/Debug && \
afl-fuzz -i test_name-corpus -o test_name-out -- ./test_name-fuzzer @@)
```
While the fuzz test is running, it prints statistics. Besides crashes (`uniq
crashes`) and hangs (`uniq hangs`), you should also track the fuzz test quality:
- The fuzz test should be fast: the execution speed (`exec speed`) should be at least
100 exec/s. A speed below 20 exec/s is not acceptable.
- The fuzz test should keep exploring new code paths (`map coverage` and
`findings in depth`). Confirm these values are increasing while the fuzz test is running.
6. Reproduce fuzz test findings
All issues found by the fuzz test are stored as files in the output folder specified
earlier via the `-o` afl-fuzz option. To reproduce an issue, run the fuzz test executable
with the issue file as an argument.
## Summary
We have created a simple fuzz test, ran it, and assessed its results.
## Extension
Try running parallel fuzzing with the help of
[afl-utils](https://gitlab.com/rc0r/afl-utils).
## Tips or FAQs
GCC 7 in Ubuntu 18.04 LTS has a
[defect](https://bugs.launchpad.net/ubuntu/+source/afl/+bug/1774816). Upgrade
GCC 7 for AFL to work. GCC version `Ubuntu 7.3.0-27ubuntu1~18.04` works OK.

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2f362a39ae6c2af080e4f055b6fdba4954f918f85731545d1df3d687d9213d5
size 421056

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb5c700d003936779455353bfa4ed9432410c0975c46e2dfd30c6a1abccd1727
size 23320

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99d6b5146be85fa408dc5432883c3e2745cffe890133854a97dcf22f5c5962d4
size 47564

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a4de6e502cae7542f1f311bcdbea6bb145f960f0d27d86a03160d1a60133778
size 301310

546
docs/IE_DG/API_Changes.md Normal file
View File

@@ -0,0 +1,546 @@
# Inference Engine API Changes History {#openvino_docs_IE_DG_API_Changes}
The sections below contain a detailed list of changes made to the Inference Engine API in recent releases.
## 2020.4
### New API
**CPU Plugin API:**
* InferenceEngine::PluginConfigParams::KEY_ENFORCE_BF16 config key
**Metrics and values for Query API:**
* METRIC_KEY(OPTIMIZATION_CAPABILITIES)
* METRIC_VALUE(BF16)
### Deprecated API
**Myriad Plugin API:**
* VPU_CONFIG_KEY(IGNORE_IR_STATISTIC)
### Removed API
**Inference Engine NN Builder API:**
* InferenceEngine::Builder::EltwiseLayer
* InferenceEngine::Builder::MemoryLayer
* InferenceEngine::Builder::ROIPoolingLayer
* InferenceEngine::Builder::DeconvolutionLayer
* InferenceEngine::Builder::ReLULayer
* InferenceEngine::Builder::TanHLayer
* InferenceEngine::Builder::InputLayer
* InferenceEngine::Builder::PoolingLayer
* InferenceEngine::Builder::CropLayer
* InferenceEngine::Builder::GRUSequenceLayer
* InferenceEngine::Builder::NormLayer
* InferenceEngine::Builder::LSTMSequenceLayer
* InferenceEngine::Builder::ClampLayer
* InferenceEngine::Builder::PSROIPoolingLayer
* InferenceEngine::Builder::Layer
* InferenceEngine::Builder::RNNSequenceLayer
* InferenceEngine::Builder::ReorgYoloLayer
* InferenceEngine::Builder::NormalizeLayer
* InferenceEngine::Builder::PriorBoxClusteredLayer
* InferenceEngine::Builder::MVNLayer
* InferenceEngine::Builder::PermuteLayer
* InferenceEngine::Builder::SimplerNMSLayer
* InferenceEngine::Builder::ConstLayer
* InferenceEngine::Builder::DeformableConvolutionLayer
* InferenceEngine::Builder::FullyConnectedLayer
* InferenceEngine::Builder::PriorBoxLayer
* InferenceEngine::Builder::SoftMaxLayer
* InferenceEngine::Builder::OutputLayer
* InferenceEngine::Builder::TileLayer
* InferenceEngine::Builder::SplitLayer
* InferenceEngine::Builder::PReLULayer
* InferenceEngine::Builder::RegionYoloLayer
* InferenceEngine::Builder::ReshapeLayer
* InferenceEngine::Builder::ConvolutionLayer
* InferenceEngine::Builder::DetectionOutputLayer
* InferenceEngine::Builder::ConcatLayer
* InferenceEngine::Builder::ELULayer
* InferenceEngine::Builder::GRNLayer
* InferenceEngine::Builder::LRNLayer
* InferenceEngine::Builder::ArgMaxLayer
* InferenceEngine::Builder::ReLU6Layer
* InferenceEngine::Builder::ScaleShiftLayer
* InferenceEngine::Builder::ProposalLayer
* InferenceEngine::Builder::SigmoidLayer
* InferenceEngine::Builder::ResampleLayer
* InferenceEngine::Builder::CTCGreedyDecoderLayer
* InferenceEngine::Builder::BatchNormalizationLayer
* InferenceEngine::Builder::LayerDecorator
* InferenceEngine::Builder::PowerLayer
* InferenceEngine::Builder::Network
* InferenceEngine::Builder::PortInfo
* InferenceEngine::Builder::Connection
* InferenceEngine::Builder::PortData
* InferenceEngine::Builder::Port
* InferenceEngine::Builder::ILayer
* InferenceEngine::Builder::INetworkIterator
* InferenceEngine::Builder::INetwork
* InferenceEngine::Builder::ILayer
## 2020.2
### New API
**Extensibility API:**
* InferenceEngine::IExtension::getImplTypes(const std::shared_ptr<ngraph::Node>& node) method
* InferenceEngine::IExtension::getImplementation(const std::shared_ptr<ngraph::Node>& node, const std::string& implType) method
### Deprecated API
**Extensibility API:**
* InferenceEngine::ILayerImplFactory class
* InferenceEngine::IShapeInferImpl class
* InferenceEngine::IShapeInferImpl class
* InferenceEngine::IShapeInferExtension class
* InferenceEngine::IExtension::getFactoryFor(ILayerImplFactory\*& factory, const CNNLayer\* cnnLayer, ResponseDesc\* resp) noexcept method
* InferenceEngine::IExtension::getPrimitiveTypes(char\*\*& types, unsigned int& size, ResponseDesc\* resp) noexcept method
* InferenceEngine::ShapeInferImpl class
* InferenceEngine::Extension::getFactoryFor(ILayerImplFactory\*& factory, const CNNLayer\* cnnLayer, ResponseDesc\* resp) noexcept method
* InferenceEngine::Extension::getPrimitiveTypes(char\*\*& types, unsigned int& size, ResponseDesc\* resp) noexcept method
**Network API:**
* InferenceEngine::details::CNNNetworkIterator class
* InferenceEngine::CNNNetwork::getPrecision() const method
* InferenceEngine::CNNNetwork::getLayerByName(const char\* layerName) const method
* InferenceEngine::CNNNetwork::size() const method
* InferenceEngine::CNNNetwork::begin() const method
* InferenceEngine::CNNNetwork::end() const method
* InferenceEngine::CNNNetwork::AddExtension(const IShapeInferExtensionPtr& extension) method
* InferenceEngine::ICNNNetwork::getPrecision() const noexcept method
* InferenceEngine::ICNNNetwork::getName(char\* pName, size_t len) const noexcept method
* InferenceEngine::ICNNNetwork::getData(const char\* dname) noexcept method
* InferenceEngine::ICNNNetwork::addLayer(const CNNLayerPtr& layer) noexcept method
* InferenceEngine::ICNNNetwork::getLayerByName(const char\* layerName, CNNLayerPtr& out, ResponseDesc\* resp) const noexcept method
* InferenceEngine::ICNNNetwork::AddExtension(const IShapeInferExtensionPtr& extension, ResponseDesc\* resp) noexcept method
* InferenceEngine::ICNNNetwork::getStats(ICNNNetworkStats\*\* stats, ResponseDesc\* resp) const noexcept method
* InferenceEngine::ICNNNetworkStats class
* InferenceEngine::NetworkNodeStats class
* InferenceEngine::Data::getCreatorLayer() method
* InferenceEngine::Data::getInputTo() method
* InferenceEngine::LayerParams class
**Layer API:**
* InferenceEngine::CNNLayer class
* InferenceEngine::WeightableLayer class
* InferenceEngine::BatchNormalizationLayer class
* InferenceEngine::BatchToSpaceLayer class
* InferenceEngine::BinaryConvolutionLayer class
* InferenceEngine::BroadcastLayer class
* InferenceEngine::BucketizeLayer class
* InferenceEngine::ClampLayer class
* InferenceEngine::ConcatLayer class
* InferenceEngine::ConvolutionLayer class
* InferenceEngine::CropLayer class
* InferenceEngine::DeconvolutionLayer class
* InferenceEngine::DeformableConvolutionLayer class
* InferenceEngine::DepthToSpaceLayer class
* InferenceEngine::EltwiseLayer class
* InferenceEngine::ExperimentalDetectronPriorGridGenerator class
* InferenceEngine::ExperimentalDetectronPriorGridGeneratorLayer class
* InferenceEngine::ExperimentalSparseWeightedReduceLayer class
* InferenceEngine::FillLayer class
* InferenceEngine::FullyConnectedLayer class
* InferenceEngine::GRNLayer class
* InferenceEngine::GRUCell class
* InferenceEngine::GatherLayer class
* InferenceEngine::GemmLayer class
* InferenceEngine::LSTMCell class
* InferenceEngine::MVNLayer class
* InferenceEngine::MathLayer class
* InferenceEngine::NonMaxSuppression class
* InferenceEngine::NormLayer class
* InferenceEngine::OneHotLayer class
* InferenceEngine::PReLULayer class
* InferenceEngine::PadLayer class
* InferenceEngine::PoolingLayer class
* InferenceEngine::PowerLayer class
* InferenceEngine::QuantizeLayer class
* InferenceEngine::RNNCell class
* InferenceEngine::RNNCellBase class
* InferenceEngine::RNNSequenceLayer class
* InferenceEngine::RangeLayer class
* InferenceEngine::ReLU6Layer class
* InferenceEngine::ReLULayer class
* InferenceEngine::ReduceLayer class
* InferenceEngine::ReshapeLayer class
* InferenceEngine::ReverseSequenceLayer class
* InferenceEngine::ScaleShiftLayer class
* InferenceEngine::ScatterLayer class
* InferenceEngine::SelectLayer class
* InferenceEngine::ShuffleChannelsLayer class
* InferenceEngine::SoftMaxLayer class
* InferenceEngine::SpaceToBatchLayer class
* InferenceEngine::SpaceToDepthLayer class
* InferenceEngine::SparseFillEmptyRowsLayer class
* InferenceEngine::SparseSegmentReduceLayer class
* InferenceEngine::SparseToDenseLayer class
* InferenceEngine::SplitLayer class
* InferenceEngine::StridedSliceLayer class
* InferenceEngine::TensorIterator class
* InferenceEngine::TileLayer class
* InferenceEngine::TopKLayer class
* InferenceEngine::UniqueLayer class
## 2020.1
### New API
**Integration with ngraph API:**
* InferenceEngine::CNNNetwork(const std::shared_ptr<ngraph::Function>& network) ctor from ngraph::Function
* InferenceEngine::CNNNetwork::getFunction() const noexcept method
* InferenceEngine::ICNNNetwork::getFunction() const noexcept method
* InferenceEngine::Parameter(const std::shared_ptr<ngraph::Variant>& var) ctor
* InferenceEngine::Parameter::asVariant() const method
* InferenceEngine::Parameter::operator std::shared_ptr<ngraph::Variant>() const operator
* InferenceEngine::Core::ReadNetwork(const std::wstring& modelPath, const std::wstring& binPath) method
* InferenceEngine::Core::ReadNetwork(const std::string& modelPath, const std::string& binPath = "") method
* InferenceEngine::Core::ReadNetwork(const std::string& model, const Blob::CPtr& weights) method
* InferenceEngine::Core::AddExtension(const IExtensionPtr& extension) method
* InferenceEngine::IExtension::getOpSets() method
**Offline compilation: import / export to std::stream:**
* InferenceEngine::ExecutableNetwork::Export(std::ostream& networkModel) method
* InferenceEngine::Core::ImportNetwork(std::istream& networkModel, const std::string& deviceName = {}, const std::map<std::string, std::string>& config = {}) method
* InferenceEngine::IExecutableNetwork::Export(std::ostream& networkModel, ResponseDesc \*resp) noexcept method
**RemoteBlob accelerator memory sharing API:**
* InferenceEngine::RemoteContext class
* InferenceEngine::RemoteBlob class
* InferenceEngine::Core::CreateContext(const std::string& deviceName, const ParamMap& params) method
* InferenceEngine::Core::GetDefaultContext(const std::string& deviceName) method
* InferenceEngine::Core::LoadNetwork(CNNNetwork network, RemoteContext::Ptr context, const std::map<std::string, std::string>& config = std::map<std::string, std::string>()) method
**GNA firmware model image generation:**
* GNA_CONFIG_KEY(FIRMWARE_MODEL_IMAGE_GENERATION) config key
* GNA_CONFIG_VALUE(GEN) value
* GNA_CONFIG_VALUE(GEN_EXACT) value
* GNA_CONFIG_VALUE(SSE) value
* GNA_CONFIG_VALUE(SSE_EXACT) value
* GNA_CONFIG_VALUE(AVX1) value
* GNA_CONFIG_VALUE(AVX1_EXACT) value
* GNA_CONFIG_VALUE(AVX2) value
* GNA_CONFIG_VALUE(AVX2_EXACT) value
**MemoryBlob mapping of memory to the user space:**
* InferenceEngine::MemoryBlob::rwmap() noexcept method
* InferenceEngine::MemoryBlob::rmap() noexcept method
* InferenceEngine::MemoryBlob::wmap() noexcept method
**Memory interoperability on acceleration devices. General classes and GPU helper functions**
* InferenceEngine::RemoteBlob class
* InferenceEngine::RemoteContext class
* InferenceEngine::Core::CreateContext(const std::string& deviceName, const ParamMap& params) method
* InferenceEngine::Core::GetDefaultContext(const std::string& deviceName) method
* InferenceEngine::make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx) function
* InferenceEngine::gpu::make_shared_blob_nv12(size_t height, size_t width, RemoteContext::Ptr ctx, VASurfaceID nv12_surf) function
* InferenceEngine::gpu::make_shared_context(Core& core, std::string deviceName, VADisplay device) function
* InferenceEngine::gpu::make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, VASurfaceID surface, uint32_t plane = 0) function
* InferenceEngine::gpu::make_shared_blob_nv12(RemoteContext::Ptr ctx, cl::Image2D& nv12_image_plane_y, cl::Image2D& nv12_image_plane_uv) function
* InferenceEngine::gpu::make_shared_context(Core& core, std::string deviceName, cl_context ctx) function
* InferenceEngine::gpu::make_shared_blob(const TensorDesc& desc, ClContext::Ptr ctx) function
* InferenceEngine::gpu::make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, cl::Buffer& buffer) function
* InferenceEngine::gpu::make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, cl_mem buffer) function
* InferenceEngine::gpu::make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, cl::Image2D& image) function
### Deprecated API
**Inference Engine NN Builder API:**
* InferenceEngine::Builder::EltwiseLayer
* InferenceEngine::Builder::MemoryLayer
* InferenceEngine::Builder::ROIPoolingLayer
* InferenceEngine::Builder::DeconvolutionLayer
* InferenceEngine::Builder::ReLULayer
* InferenceEngine::Builder::TanHLayer
* InferenceEngine::Builder::InputLayer
* InferenceEngine::Builder::PoolingLayer
* InferenceEngine::Builder::CropLayer
* InferenceEngine::Builder::GRUSequenceLayer
* InferenceEngine::Builder::NormLayer
* InferenceEngine::Builder::LSTMSequenceLayer
* InferenceEngine::Builder::ClampLayer
* InferenceEngine::Builder::PSROIPoolingLayer
* InferenceEngine::Builder::Layer
* InferenceEngine::Builder::RNNSequenceLayer
* InferenceEngine::Builder::ReorgYoloLayer
* InferenceEngine::Builder::NormalizeLayer
* InferenceEngine::Builder::PriorBoxClusteredLayer
* InferenceEngine::Builder::MVNLayer
* InferenceEngine::Builder::PermuteLayer
* InferenceEngine::Builder::SimplerNMSLayer
* InferenceEngine::Builder::ConstLayer
* InferenceEngine::Builder::DeformableConvolutionLayer
* InferenceEngine::Builder::FullyConnectedLayer
* InferenceEngine::Builder::PriorBoxLayer
* InferenceEngine::Builder::SoftMaxLayer
* InferenceEngine::Builder::OutputLayer
* InferenceEngine::Builder::TileLayer
* InferenceEngine::Builder::SplitLayer
* InferenceEngine::Builder::PReLULayer
* InferenceEngine::Builder::RegionYoloLayer
* InferenceEngine::Builder::ReshapeLayer
* InferenceEngine::Builder::ConvolutionLayer
* InferenceEngine::Builder::DetectionOutputLayer
* InferenceEngine::Builder::ConcatLayer
* InferenceEngine::Builder::ELULayer
* InferenceEngine::Builder::GRNLayer
* InferenceEngine::Builder::LRNLayer
* InferenceEngine::Builder::ArgMaxLayer
* InferenceEngine::Builder::ReLU6Layer
* InferenceEngine::Builder::ScaleShiftLayer
* InferenceEngine::Builder::ProposalLayer
* InferenceEngine::Builder::SigmoidLayer
* InferenceEngine::Builder::ResampleLayer
* InferenceEngine::Builder::CTCGreedyDecoderLayer
* InferenceEngine::Builder::BatchNormalizationLayer
* InferenceEngine::Builder::LayerDecorator
* InferenceEngine::Builder::PowerLayer
* InferenceEngine::Builder::Network
* InferenceEngine::Builder::PortInfo
* InferenceEngine::Builder::Connection
* InferenceEngine::Builder::PortData
* InferenceEngine::Builder::Port
* InferenceEngine::Builder::ILayer
* InferenceEngine::Builder::INetworkIterator
* InferenceEngine::Builder::INetwork
* InferenceEngine::Builder::ILayer
**Plugin API:**
* InferenceEngine::InferencePlugin C++ plugin wrapper class
* InferenceEngine::IInferencePlugin plugin interface
* InferenceEngine::PluginDispatcher class
* InferenceEngine::InferenceEnginePluginPtr typedef
* InferenceEngine::ICNNNetReader reader interface
* InferenceEngine::CNNNetReader class
**Blob API:**
* Blob::element_size() const noexcept method
* Blob::buffer() noexcept method
* Blob::cbuffer() noexcept method
* MemoryBlob::buffer() noexcept method
* MemoryBlob::cbuffer() noexcept method
### Removed API
Removed all [Inference Engine API that was deprecated in 2019 R2](https://docs.openvinotoolkit.org/2019_R3/_docs_IE_DG_API_Changes.html#deprecated_api)
## 2019 R3
### New API
**New supported layers:**
* InferenceEngine::SparseFillEmptyRowsLayer new class
* InferenceEngine::UniqueLayer new class
* InferenceEngine::NonMaxSuppressionLayer new class
* InferenceEngine::ScatterLayer new class
**FPGA plugin streaming support:**
* DLIA_METRIC_VALUE(INPUT_STREAMING) value to METRIC_KEY(OPTIMIZATION_CAPABILITIES)
* DLIA_CONFIG_KEY(ENABLE_STREAMING) config key
### Removed API
* InferenceEngine::EltwiseLayer::Select from InferenceEngine::EltwiseLayer::eOperation enumeration
## 2019 R2
### New API
**Inference Engine Core API:**
* Introduced InferenceEngine::Core high level class to manage devices
**Query API extensions to InferenceEngine::ExecutableNetwork and InferenceEngine::IExecutableNetwork:**
* InferenceEngine::ExecutableNetwork::SetConfig method
* InferenceEngine::ExecutableNetwork::GetConfig method
* InferenceEngine::ExecutableNetwork::GetMetric method
* InferenceEngine::IExecutableNetwork::SetConfig method
* InferenceEngine::IExecutableNetwork::GetConfig method
* InferenceEngine::IExecutableNetwork::GetMetric method
**Metrics and values for Query API:**
* METRIC_KEY(AVAILABLE_DEVICES)
* METRIC_KEY(SUPPORTED_METRICS)
* METRIC_KEY(SUPPORTED_CONFIG_KEYS)
* METRIC_KEY(FULL_DEVICE_NAME)
* METRIC_KEY(OPTIMIZATION_CAPABILITIES)
* METRIC_VALUE(FP32)
* METRIC_VALUE(FP16)
* METRIC_VALUE(INT8)
* METRIC_VALUE(BIN)
* METRIC_VALUE(WINOGRAD)
* DLIA_METRIC_VALUE(FP11)
* METRIC_KEY(RANGE_FOR_STREAMS)
* METRIC_KEY(NUMBER_OF_WAITING_INFER_REQUESTS)
* METRIC_KEY(NUMBER_OF_EXEC_INFER_REQUESTS)
* METRIC_KEY(DEVICE_THERMAL)
* METRIC_KEY(RANGE_FOR_ASYNC_INFER_REQUESTS)
* EXEC_NETWORK_METRIC_KEY(NETWORK_NAME)
* EXEC_NETWORK_METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS)
**Common API:**
* CLDNN_CONFIG_KEY(INT8_ENABLED) config key
* CONFIG_KEY(GPU_THROUGHPUT_AUTO)
* CONFIG_KEY(GPU_THROUGHPUT_STREAMS)
* DLIA_CONFIG_KEY(IO_TRANSFORMATIONS_NATIVE) config key
* DLIA_CONFIG_KEY(DUMP_SUPPORTED_LAYERS_INFORMATION) config key
* GNA_CONFIG_VALUE(SW_FP32) config value for GNA_CONFIG_KEY(DEVICE_MODE) key
* MULTI_CONFIG_KEY(DEVICE_PRIORITIES) config key for `MULTI` device
* InferenceEngine::CNNNetReader::ReadNetwork(const std::wstring &filepath) new method
* InferenceEngine::CNNNetReader::ReadWeights(const std::wstring &filepath) new method
* InferenceEngine::ExecutableNetwork::ExecutableNetwork(IExecutableNetwork::Ptr actual, InferenceEnginePluginPtr plg) constructor with additional `plg` parameter
* InferenceEngine::InferRequest::InferRequest(IInferRequest::Ptr request, InferenceEnginePluginPtr plg) constructor with additional `plg` parameter
* InferenceEngine::Data::setName method
* InferenceEngine::QueryNetworkResult::supportedLayersMap
* InferenceEngine::Precision::I64 extension to InferenceEngine::Precision::ePrecision enumeration
**New supported primitives:**
* InferenceEngine::Builder::DeformableConvolutionLayer new class
* InferenceEngine::DeformableConvolutionLayer new class
* InferenceEngine::EltwiseLayer::Logical_NOT, InferenceEngine::EltwiseLayer::Mean, InferenceEngine::EltwiseLayer::Select extensions to InferenceEngine::EltwiseLayer::eOperation enumeration
* InferenceEngine::OneHotLayer new class
* InferenceEngine::SelectLayer new class
* InferenceEngine::BroadcastLayer new class
* InferenceEngine::MathLayer new class
* InferenceEngine::ReduceLayer new class
* InferenceEngine::TopKLayer new class
**Extensions to Blob creation API:**
* InferenceEngine::Blob::is method
* InferenceEngine::Blob::is const method
* InferenceEngine::Blob::as method
* InferenceEngine::Blob::as const method
* InferenceEngine::Blob::getAllocator abstract method
* InferenceEngine::Blob::getHandle abstract method
* InferenceEngine::MemoryBlob class
* InferenceEngine::ColorFormat enumeration
* InferenceEngine::PreProcessInfo::setColorFormat method
* InferenceEngine::PreProcessInfo::getColorFormat method
* InferenceEngine::CompoundBlob class to work with blobs consisting of several planes
* InferenceEngine::NV12Blob class representing NV12 blob with two planes
### Deprecated API
The methods listed below are deprecated and will be removed in the 2019 R4 release:
**Common API:**
* InferenceEngine::InputInfo::getInputPrecision method
* InferenceEngine::InputInfo::setInputPrecision method
* InferenceEngine::InputInfo::getDims method
* InferenceEngine::CNNLayer::GetParamsAsBool method
* InferenceEngine::CNNNetwork::CNNNetwork(ICNNNetwork* actual) constructor
* InferenceEngine::CNNNetwork::setTargetDevice method
* HETERO_CONFIG_KEY(DUMP_DLA_MESSAGES) config key
* InferenceEngine::ILayerImplFactory::getShapes method
* InferenceEngine::IShapeInferImpl::inferShapes(const std::vector<SizeVector>&, const std::map<std::string, std::string>& , const std::map<std::string, Blob::Ptr>&, std::vector<SizeVector>&, ResponseDesc\*) method
* InferenceEngine::Data::setBatchSize method
* InferenceEngine::QueryNetworkResult::supportedLayers field
* InferenceEngine::ICNNNetwork::setBatchSize(const size_t size) method
* InferenceEngine::Blob::Resize method
* InferenceEngine::Blob::Reshape method
* InferenceEngine::TBlob::set method
**InferenceEngine::IInferencePlugin and InferenceEngine:InferencePlugin obsolete methods:**
* InferenceEngine::InferencePlugin::LoadNetwork(ICNNNetwork &network) method
* InferenceEngine::InferencePlugin::Infer method
* InferenceEngine::InferencePlugin::GetPerformanceCounts method
* InferenceEngine::InferencePlugin::QueryNetwork(const ICNNNetwork &network, QueryNetworkResult &res) const method
* InferenceEngine::IInferencePlugin::LoadNetwork(ICNNNetwork &network, ResponseDesc \*resp) method
* InferenceEngine::IInferencePlugin::Infer(const Blob &input, Blob &result, ResponseDesc \*resp) method
* InferenceEngine::IInferencePlugin::Infer(const BlobMap &input, BlobMap &result, ResponseDesc \*resp) method
* InferenceEngine::IInferencePlugin::GetPerformanceCounts method
* InferenceEngine::IInferencePlugin::QueryNetwork(const ICNNNetwork& network, QueryNetworkResult& res) const method
**Fields in InferenceEngine::Data class are replaced with appropriate methods:**
* InferenceEngine::Data::precision field
* InferenceEngine::Data::layout field
* InferenceEngine::Data::dims field
* InferenceEngine::Data::creatorLayer field
* InferenceEngine::Data::name field
* InferenceEngine::Data::inputTo field
* InferenceEngine::Data::userObject field
**Heterogeneous plugin:**
* InferenceEngine::IHeteroDeviceLoader class
* InferenceEngine::IHeteroInferencePlugin class
* InferenceEngine::HeteroPluginPtr class
* operator InferenceEngine::InferencePlugin::HeteroPluginPtr operator
**Blob creation API with dimensions in reverse order:**
* InferenceEngine::Blob::Blob(Precision p) constructor
* InferenceEngine::Blob::Blob(Precision p, Layout l) constructor
* InferenceEngine::Blob::Blob(Precision p, const SizeVector &dims) constructor
* InferenceEngine::Blob::Blob(Precision p, Layout l, const SizeVector &dims) constructor
* InferenceEngine::TBlob::TBlob(Precision p, Layout l) constructor
* InferenceEngine::TBlob::TBlob(Precision p, Layout l, const SizeVector& dims) constructor
* InferenceEngine::TBlob::TBlob(Precision p, Layout l, const SizeVector& dims, T* ptr, size_t data_size) constructor
* InferenceEngine::TBlob::TBlob(Precision p, Layout l, const SizeVector &dims, std::shared_ptr<IAllocator> alloc) constructor
* InferenceEngine::Blob::type() method
* InferenceEngine::Blob::precision() method
* InferenceEngine::Blob::layout() method
* InferenceEngine::Blob::dims() method
* InferenceEngine::make_shared_blob(Precision p, Layout l, const SizeVector &dims) function
* InferenceEngine::make_shared_blob(Precision p, const SizeVector &dims) function
* InferenceEngine::make_shared_blob(Precision p, Layout l, const TArg &arg) function
* InferenceEngine::make_shared_blob(Precision p, const TArg &arg) function
* InferenceEngine::make_shared_blob(TBlob<TypeTo> &&arg) function
* InferenceEngine::make_shared_blob(Precision p, Layout l) function
* InferenceEngine::make_shared_blob(Precision p, Layout l, SizeVector dims, const std::vector<TypeTo> &arg) function
* InferenceEngine::make_shared_blob(Precision p, Layout l, const std::vector<TypeTo> &arg) function
* InferenceEngine::make_shared_blob(Precision p, const std::vector<TypeTo> &arg) function
* InferenceEngine::make_shared_blob(Precision p, Layout l, const SizeVector &dims, TypeTo * ptr, size_t size) function
* InferenceEngine::make_shared_blob(Precision p, const SizeVector &dims, TypeTo * ptr, size_t size) function
* InferenceEngine::I_N variable
* InferenceEngine::I_C variable
* InferenceEngine::I_H variable
* InferenceEngine::I_W variable
* InferenceEngine::LayoutOffsetCounter class
* InferenceEngine::ConvertLayout function
**API working with device enumeration:**
* InferenceEngine::TargetDevice enumeration
* InferenceEngine::TargetDeviceInfo class
* InferenceEngine::getDeviceName function
* InferenceEngine::FindPluginRequest class
* InferenceEngine::FindPluginResponse class
* InferenceEngine::findPlugin(const FindPluginRequest &req, FindPluginResponse &result, ResponseDesc *resp) function
* InferenceEngine::ICNNNetwork::setTargetDevice method
* InferenceEngine::ICNNNetwork::getTargetDevice method
* InferenceEngine::PluginDispatcher::getPluginByDevice method
* InferenceEngine::PluginDispatcher::getSuitablePlugin method

View File

@@ -0,0 +1,90 @@
# Bfloat16 Inference {#openvino_docs_IE_DG_Bfloat16Inference}
## Disclaimer
Bfloat16 inference in the Inference Engine is implemented on the CPU, and the CPU must support the `avx512_bf16` instruction and therefore the bfloat16 data format.
## Introduction
Bfloat16 computation (referred to as BF16) uses the Brain Floating-Point format with 16 bits. It is a truncated 16-bit version of the 32-bit IEEE 754 single-precision floating-point format FP32. BF16 preserves the 8 exponent bits of FP32 but reduces the precision of the significand (mantissa) from 24 bits to 8 bits.
![bf16_format]
Preserving the exponent bits keeps BF16 in the same dynamic range as FP32 (~1e-38 to ~3e38). This simplifies conversion between the two data types: converting from FP32 to BF16 just drops the 16 low-order bits, and converting back just fills them with zeros.
The truncated mantissa gives somewhat less precision, but according to [investigations](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus), neural networks are more sensitive to the size of the exponent than to the size of the mantissa. Also, in many models, precision is needed close to zero but not as much at the maximum range.
Another useful feature of BF16 is the ability to encode INT8 values in BF16 without loss of accuracy, because the INT8 range fits completely into the BF16 mantissa field. This reduces data movement when converting INT8 input image data directly to BF16 without an intermediate FP32 representation, or when combining [INT8 inference](Int8Inference.md) with BF16 layers.
See [Intel's white paper](https://software.intel.com/sites/default/files/managed/40/8b/bf16-hardware-numerics-definition-white-paper.pdf) for more details on the bfloat16 format.
There are two ways to check whether the CPU device supports bfloat16 computations for models:
1. Query the instruction set via system `lscpu | grep avx512_bf16` or `cat /proc/cpuinfo | grep avx512_bf16`.
2. Use [Query API](InferenceEngine_QueryAPI.md) with `METRIC_KEY(OPTIMIZATION_CAPABILITIES)`, which should return `BF16` in the list of CPU optimization options:
```cpp
InferenceEngine::Core core;
auto cpuOptimizationCapabilities = core.GetMetric("CPU", METRIC_KEY(OPTIMIZATION_CAPABILITIES)).as<std::vector<std::string>>();
```
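Building on the query above, a short illustrative check (not part of the original sample) for `BF16` among the returned capabilities could look like this:
```cpp
// Requires <algorithm>; uses cpuOptimizationCapabilities from the previous snippet.
bool bf16Supported = std::find(cpuOptimizationCapabilities.begin(),
                               cpuOptimizationCapabilities.end(),
                               METRIC_VALUE(BF16)) != cpuOptimizationCapabilities.end();
```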
Current Inference Engine solution for bfloat16 inference uses Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of the following layers in BF16 computation mode:
* Convolution
* FullyConnected
* InnerProduct
* LRN
* Pooling
This means that BF16 inference can only be performed with the CPU plugin on the layers listed above. All other layers are executed in FP32.
## Lowering Inference Precision
Lowering precision to increase performance is a [widely used](https://software.intel.com/content/www/us/en/develop/articles/lower-numerical-precision-deep-learning-inference-and-training.html) optimization technique for inference. Using the bfloat16 data type on CPU opens, for the first time, the possibility of a default optimization approach.
The idea of this approach is to use the optimization capabilities of the current platform to achieve maximum performance while keeping the accuracy of calculations within an acceptable range.
Bfloat16 data usage provides the following benefits that increase performance:
1. Faster multiplication of two BF16 numbers because of the shorter mantissa of bfloat16 data.
2. No need to support denormals or handle exceptions, which is itself a performance optimization.
3. Fast conversion of float32 to bfloat16 and vice versa.
4. Reduced data size in memory; as a result, larger models fit within the same memory bounds.
5. Reduced amount of data to transfer and, as a result, reduced data transfer time.
For default optimization on CPU, the source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In that case, `KEY_ENFORCE_BF16` is set to `YES`.
The code below demonstrates how to check if the key is set:
```cpp
InferenceEngine::Core core;
auto exeNetwork = core.LoadNetwork(network, "CPU");
auto enforceBF16 = exeNetwork.GetConfig(PluginConfigParams::KEY_ENFORCE_BF16).as<std::string>();
```
To disable BF16 internal transformations, set `KEY_ENFORCE_BF16` to `NO`. In this case, the model is inferred as is, without modifications, with the precisions that were set on each layer edge.
```cpp
InferenceEngine::Core core;
core.SetConfig({ { CONFIG_KEY(ENFORCE_BF16), CONFIG_VALUE(NO) } }, "CPU");
```
An exception with the message `Platform doesn't support BF16 format` is thrown if `KEY_ENFORCE_BF16` is set to `YES` on a CPU without native BF16 support.
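As a minimal sketch mirroring the disable example above, BF16 transformations can also be enforced explicitly; on hardware without native BF16 support this call is expected to raise the exception mentioned above:
```cpp
// Explicitly enforce BF16 transformations on the CPU plugin.
InferenceEngine::Core core;
core.SetConfig({ { CONFIG_KEY(ENFORCE_BF16), CONFIG_VALUE(YES) } }, "CPU");
```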
Low-precision 8-bit integer models are not converted to BF16, even if bfloat16 optimization is enabled by default.
## Performance Counters
Information about layer precision is stored in the performance counters that are
available from the Inference Engine API. The layers have the following marks:
* Suffix `BF16` for layers that had bfloat16 data type input and were computed in BF16 precision
* Suffix `FP32` for layers computed in 32-bit precision
For example, the performance counters table for the Inception model can look as follows:
```
pool5 EXECUTED layerType: Pooling realTime: 143 cpu: 143 execType: jit_avx512_BF16
fc6 EXECUTED layerType: FullyConnected realTime: 47723 cpu: 47723 execType: jit_gemm_BF16
relu6 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc7 EXECUTED layerType: FullyConnected realTime: 7558 cpu: 7558 execType: jit_gemm_BF16
relu7 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc8 EXECUTED layerType: FullyConnected realTime: 2193 cpu: 2193 execType: jit_gemm_BF16
prob EXECUTED layerType: SoftMax realTime: 68 cpu: 68 execType: jit_avx512_FP32
```
The `execType` column of the table includes inference primitives with specific suffixes.
[bf16_format]: img/bf16_format.png

View File

@@ -0,0 +1,298 @@
Cross Check Tool {#openvino_docs_IE_DG_Cross_Check_Tool}
================
Cross Check Tool is a console application that enables comparing accuracy and performance metrics for two successive
model inferences that are performed
on two different supported Intel&reg; devices or with different precisions.
The Cross Check Tool can compare metrics per layer or all over the model.
On Linux* OS, before running the Cross Check Tool binary, make sure your application can find the
Deep Learning Inference Engine libraries.
Navigate to the `<INSTALL_DIR>/deployment_tools/inference_engine/bin` folder and run the `setvars.sh` script to
set all necessary environment variables:
```sh
source setvars.sh
```
## Running the Cross Check Tool
Cross Check Tool is distributed as a binary file and there is no need to build it. To run the Cross Check Tool,
execute the tool's binary file with necessary parameters. Please note that the Inference Engine assumes that weights
are in the same folder as the _.xml_ file.
You can get the list of all available options using the `-h` option:
```sh
$./cross_check_tool -h
InferenceEngine:
API version ............ 1.0
Build .................. ###
[ INFO ] Parsing input parameters
./cross_check_tool [OPTION]
Options:
-h Prints a usage message.
-i "<path>" Optional. Path to an input image file or multi-input file to infer. Generates input(s) from normal distribution if empty
-m "<path>" Required. Path to an .xml file that represents the first IR of the trained model to infer.
-l "<absolute_path>" Required for MKLDNN (CPU)-targeted custom layers. Absolute path to a shared library with the kernels implementation.
Or
-c "<absolute_path>" Required for clDNN (GPU)-targeted custom kernels. Absolute path to the xml file with the kernels description.
-conf "<path>" Optional. Path to config file for -d device plugin
-ref_conf "<path>" Optional. Path to config file for -ref_d device plugin
-pp "<path>" Optional. Path to a plugin folder.
-d "<device>" Required. The first target device to infer the model specified with the -m option. CPU, GPU, HDDL or MYRIAD is acceptable.
-ref_m "<path>" Optional. Path to an .xml file that represents the second IR in different precision to compare the metrics.
-ref_d "<device>" Required. The second target device to infer the model and compare the metrics. CPU, GPU, HDDL or MYRIAD is acceptable.
-layers "<options>" Defines layers to check. Options: all, None - for output layers check, list of comma-separated layer names to check. Default value is None.
-eps "<float>" Optional. Threshold for filtering out those blob statistics that do not satisfy the condition: max_abs_diff < eps.
-dump Enables blobs statistics dumping
-load "<path>" Path to a file to load blobs from
```
### Examples
1. To check per-layer accuracy and performance of inference in FP32 precision on the CPU against the GPU, run:
```sh
./cross_check_tool -i <path_to_input_image_or_multi_input_file> \
-m <path_to_FP32_xml> \
-d CPU \
-ref_d GPU \
-layers all
```
The output looks as follows:
```
InferenceEngine:
API version ............ 1.0
Build .................. ###
[ INFO ] Parsing input parameters
The same IR on both devices: <path_to_IR>
[ INFO ] No extensions provided
API version ............ 1.0
Build .................. lnx_20180510
Description ....... MKLDNNPlugin
API version ............ 0.1
Build .................. ci-main-03659
Description ....... clDNNPlugin
[ INFO ] Inputs detected: Placeholder
[ INFO ] Statistics will be dumped for X layers: <layer_1_name>, <layer_2_name>, ... , <layer_X_name>
[ INFO ] Layer <layer_1_name> statistics
Max absolute difference: 1.52588e-05
Min absolute difference: 0
Max relative difference: 0.000288028%
Min relative difference: 0%
Blob size: 1000
Devices: CPU_FP32 GPU_FP32
Status: EXECUTED EXECUTED
Layer type: Reshape Reshape
Real time, microsec: 20 154
Execution type: unknown GPU
Number of NAN: 0 0
Number of INF: 0 0
Number of ZERO: 0 0
...
<list_of_layer_statistics>
...
[ INFO ] Overall max absolute difference 2.81334e-05 was reached by <layer_name> layer
[ INFO ] Overall min absolute difference 0 was reached by <layer_name> layer
[ INFO ] Overall max relative difference 0.744893% was reached by <layer_name> layer
[ INFO ] Overall min relative difference -2.47948% was reached by <layer_name> layer
[ INFO ] Execution successful
```
2. To check the overall accuracy and performance of inference on the CPU in FP32 precision against the
Intel&reg; Movidius&trade; Myriad&trade; device in FP16 precision, run:
```sh
./cross_check_tool -i <path_to_input_image_or_multi_input_file> \
-m <path_to_FP16_xml> \
-ref_d CPU \
-ref_m <path_to_FP32_xml> \
-d MYRIAD
```
The output looks as follows:
```
InferenceEngine:
API version ............ 1.0
Build .................. ###
[ INFO ] Parsing input parameters
[ INFO ] MYRIAD vs CPU
IR for MYRIAD : <path_to_FP16_xml>
IR for CPU : <path_to_FP32_xml>
[ INFO ] No extensions provided
[ INFO ] Loading plugins
API version ............ 0.1
Build .................. ###
Description ....... myriadPlugin
API version ............ 1.0
Build .................. ###
Description ....... MKLDNNPlugin
[ INFO ] Inputs detected: <list_of_input_layers>
[ INFO ] Statistics will be dumped for 1 layers: <output_layer_name(s)>
[ INFO ] Layer <output_layer_name> statistics
Max absolute difference: 0.003889
Min absolute difference: 2.49778e-12
Max relative difference: 290.98%
Min relative difference: 0.0327804%
Devices: MYRIAD_FP16 CPU_FP32
Real time, microsec: 69213.978946 4149.904940
[ INFO ] Execution successful
```
3. To dump layer statistics from specific list of layers, run:
```sh
./cross_check_tool -i <path_to_input_image_or_multi_input_file> \
-m <path_to_FP16_xml> \
-d MYRIAD \
-dump \
-layers <comma_separated_list_of_layers>
```
The output looks as follows:
```
InferenceEngine:
API version ............ 1.0
Build .................. ###
[ INFO ] Blob and statistics dumping enabled
[ INFO ] No extensions provided
API version ............ 0.1
Build .................. custom_releases/cvsdk-2018-r2_e28ec0278fb749d6b999c688a8e90a8a25c0f2b5
Description ....... myriadPlugin
[ INFO ] Inputs detected: <list_of_input_layers>
[ INFO ] Statistics will be dumped for X layers: <comma_separated_list_of_layers>
[ INFO ] Dump path: <path_where_dump_will_be_saved>
[ INFO ] <layer_1_name> layer processing
...
[ INFO ] <layer_X_name> layer processing
[ INFO ] Execution successful
```
If you do not provide the `-i` key, the Cross Check Tool generates an input from normally distributed noise and saves
it in a multi-input file format with the filename `<path_to_xml>_input_layers_dump.txt` in the same folder as the IR.
4. To check the overall accuracy and performance of inference on the CPU in FP32 precision against dumped results, run:
```sh
./cross_check_tool -i <path_to_input_image_or_multi_input_file> \
-m <path_to_FP32_xml> \
-d CPU \
-load <path_to_dump> \
-layers all
```
The output looks as follows:
```
InferenceEngine:
API version ............ 1.0
Build .................. ###
[ INFO ] Blob and statistics loading enabled. File /localdisk/models/FP16/icv_squeezenet_v1.0_MYRIAD_FP16_dump.txt
The same IR on both devices: <path_to_FP32_xml>
[ INFO ] No extensions provided
API version ............ 0.1
Build .................. ###
Description ....... myriadPlugin
[ INFO ] Inputs detected: <list_of_input_layers>
[ INFO ] Statistics will be dumped for X layers: <layer_1_name>, <layer_2_name>, ... , <layer_X_name>
[ INFO ] <layer_1_name> layer processing
[ INFO ] Layer <layer_1_name> statistics
Max absolute difference: 0
Min absolute difference: 0
Max relative difference: 0%
Min relative difference: 0%
Blob size: 1000
Devices: MYRIAD_FP16 MYRIAD_FP16_loaded
Status: EXECUTED EXECUTED
Layer type: SoftMax SoftMax
Real time, microsec: 43 43
Execution type: SoftMax SoftMax
Number of NAN: 0 0
Number of INF: 0 0
Number of ZERO: 0 0
...
<list_of_layer_statistics>
...
[ INFO ] Overall max absolute difference 0
[ INFO ] Overall min absolute difference 0 was reached by <layer_1_name> layer
[ INFO ] Overall max relative difference 0%
[ INFO ] Overall min relative difference 0% was reached by <layer_1_name> layer
[ INFO ] Execution successful
```
### Multi-input and dump file experimental format
The text file contains a description of each layer in a structure like this:
* 1<sup>st</sup> line is the layer name (required)
* 2<sup>nd</sup> line is the shape, for example "(1,224,224,3)" (required)
* 3<sup>rd</sup> line is the device and precision information, for example "CPU_FP32" (optional for multi-input file)
* 4<sup>th</sup> line is the execution status. Options are: EXECUTED, OPTIMIZED_OUT (optional for multi-input file)
* 5<sup>th</sup> line is the layer type (optional for multi-input file)
* 6<sup>th</sup> line is the execution time in microseconds (optional for multi-input file)
* 7<sup>th</sup> line is the execution type (optional for multi-input file)
* 8<sup>th</sup> line is the word "CONTENT", which means that the next line or lines consist of blob elements
* The next line or lines contain the blob elements. They may be separated by one or more spaces, tabs, and new lines.
#### Multi-input file example
```
Input_1
(1,10)
CONTENT
0 0.000628471375 0.00185108185
0.000580787659
0.00137138367
0.000561237335 0.0040473938 0 0 0
Input_2
(1,8)
CONTENT
0 0 0.00194549561 0.0017490387 7.73072243e-05 0.000135779381 0.000186920166 0 7.52806664e-05
```
#### Dump file example
```
Softmax
(1,10)
MYRIAD_FP16
EXECUTED
SoftMax
43
SoftMax
CONTENT
7.44462013e-05
0
0.000810623169
0.000361680984
0
9.14335251e-05
0
0
8.15987587e-05
0
```
### Configuration file
There is an option to pass a configuration file to a plugin by providing the
`-conf` and/or `-ref_conf` keys.
A configuration file is a text file containing key-value pairs.
Structure of the configuration file:
```sh
KEY VALUE
ANOTHER_KEY ANOTHER_VALUE,VALUE_1
```

View File

@@ -0,0 +1,93 @@
# Inference Engine Developer Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide}
## Introduction to the OpenVINO™ Toolkit
The OpenVINO™ toolkit is a comprehensive toolkit that you can use to develop and deploy vision-oriented solutions on
Intel® platforms. Vision-oriented means the solutions use images or videos to perform specific tasks.
A few of the solution use cases include autonomous navigation, digital surveillance cameras, robotics,
and mixed-reality headsets.
The OpenVINO™ toolkit:
* Enables CNN-based deep learning inference on the edge
* Supports heterogeneous execution across an Intel&reg; CPU, Intel&reg; Integrated Graphics, Intel&reg; Movidius&trade; Neural Compute Stick and Intel&reg; Neural Compute Stick 2
* Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
* Includes optimized calls for computer vision standards including OpenCV\*, OpenCL&trade;, and OpenVX\*
The OpenVINO™ toolkit includes the following components:
* Intel® Deep Learning Deployment Toolkit (Intel® DLDT)
- [Deep Learning Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) — A cross-platform command-line tool for importing models and
preparing them for optimal execution with the Deep Learning Inference Engine. The Model Optimizer supports converting Caffe*,
TensorFlow*, MXNet*, Kaldi*, and ONNX* models.
- [Deep Learning Inference Engine](inference_engine_intro.md) — A unified API to allow high performance inference on many hardware types
including Intel® CPU, Intel® Processor Graphics, Intel® FPGA, Intel® Neural Compute Stick 2.
- [nGraph](nGraph_Flow.md) — a graph representation and manipulation engine that is used to represent a model inside the Inference Engine and allows run-time model construction without using the Model Optimizer.
* [OpenCV](https://docs.opencv.org/) — OpenCV* community version compiled for Intel® hardware.
Includes PVL libraries for computer vision.
* Drivers and runtimes for OpenCL™ version 2.1
* [Intel® Media SDK](https://software.intel.com/en-us/media-sdk)
* [OpenVX*](https://software.intel.com/en-us/cvsdk-ovx-guide) — Intel's implementation of OpenVX*
optimized for running on Intel® hardware (CPU, GPU, IPU).
* [Demos and samples](Samples_Overview.md).
This guide provides an overview of the Inference Engine, describes the typical workflow for performing
inference of a pre-trained and optimized deep learning model, and describes a set of sample applications.
> **NOTES:**
> - Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in run-time using nGraph API. To learn about how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_intel_index).
> - [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
## Table of Contents
* [Introduction to Intel® Deep Learning Deployment Toolkit](Introduction.md)
* [Inference Engine API Changes History](API_Changes.md)
* [Introduction to Inference Engine](inference_engine_intro.md)
* [Introduction to nGraph Flow](nGraph_Flow.md)
* [Understanding Inference Engine Memory Primitives](Memory_primitives.md)
* [Introduction to Inference Engine Device Query API](InferenceEngine_QueryAPI.md)
* [Adding Your Own Layers to the Inference Engine](Extensibility_DG/Intro.md)
* [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md)
* [Migration from Inference Engine Plugin API to Core API](Migration_CoreAPI.md)
* [Introduction to Performance Topics](Intro_to_Performance.md)
* [Inference Engine Python API Overview](../../inference-engine/ie_bridges/python/docs/api_overview.md)
* [Using Dynamic Batching feature](DynamicBatching.md)
* [Using Static Shape Infer feature](ShapeInference.md)
* [Using Low-Precision 8-bit Integer Inference](Int8Inference.md)
* [Using Bfloat16 Inference](Bfloat16Inference.md)
* Utilities to Validate Your Converted Model
* [Using Cross Check Tool for Per-Layer Comparison Between Plugins](../../inference-engine/tools/cross_check_tool/README.md)
* [Supported Devices](supported_plugins/Supported_Devices.md)
* [GPU](supported_plugins/CL_DNN.md)
* [CPU](supported_plugins/CPU.md)
* [FPGA](supported_plugins/FPGA.md)
* [VPU](supported_plugins/VPU.md)
* [MYRIAD](supported_plugins/MYRIAD.md)
* [HDDL](supported_plugins/HDDL.md)
* [Heterogeneous execution](supported_plugins/HETERO.md)
* [GNA](supported_plugins/GNA.md)
* **NEW!** [MULTI](supported_plugins/MULTI.md)
* [Pre-Trained Models](@ref omz_models_intel_index)
* [Known Issues](Known_Issues_Limitations.md)
**Typical Next Step:** [Introduction to Intel® Deep Learning Deployment Toolkit](Introduction.md)

View File

@@ -0,0 +1,83 @@
Using Dynamic Batching {#openvino_docs_IE_DG_DynamicBatching}
======================
The Dynamic Batching feature allows you to dynamically change the batch size for inference calls
within a preset batch size limit.
This feature might be useful when the batch size is unknown beforehand and using an extra-large batch size is
undesirable or impossible due to resource limitations.
For example, face detection with person age, gender, or mood recognition is a typical usage scenario.
## Usage
You can activate Dynamic Batching by setting the <code>KEY_DYN_BATCH_ENABLED</code> flag to <code>YES</code> in a configuration map that is
passed to the plugin while loading a network.
This configuration creates an <code>ExecutableNetwork</code> object that allows setting the batch size
dynamically in all of its infer requests using the <code>SetBatch()</code> method.
The batch size that was set in the passed <code>CNNNetwork</code> object will be used as the maximum batch size limit.
Here is a code example:
```cpp
int dynBatchLimit = FLAGS_bl; //take dynamic batch limit from command line option
// Read network model
Core core;
CNNNetwork network = core.ReadNetwork(modelFileName, weightFileName);
// enable dynamic batching and prepare for setting max batch limit
const std::map<std::string, std::string> dyn_config =
{ { PluginConfigParams::KEY_DYN_BATCH_ENABLED, PluginConfigParams::YES } };
network.setBatchSize(dynBatchLimit);
// create executable network and infer request
auto executable_network = core.LoadNetwork(network, "CPU", dyn_config);
auto infer_request = executable_network.CreateInferRequest();
...
// process a set of images
// dynamically set batch size for subsequent Infer() calls of this request
size_t batchSize = imagesData.size();
infer_request.SetBatch(batchSize);
infer_request.Infer();
...
// process another set of images
batchSize = imagesData2.size();
infer_request.SetBatch(batchSize);
infer_request.Infer();
```
## Limitations
Currently, certain limitations for using Dynamic Batching exist:
* Use Dynamic Batching with CPU and GPU plugins only.
* Use Dynamic Batching on topologies that consist of certain layers only:
* Convolution
* Deconvolution
* Activation
* LRN
* Pooling
* FullyConnected
* SoftMax
* Split
* Concatenation
* Power
* Eltwise
* Crop
* BatchNormalization
* Copy
Do not use layers that might arbitrarily change tensor shape (such as Flatten, Permute, Reshape),
layers specific to object detection topologies (ROIPooling, PriorBox, DetectionOutput), and
custom layers.
Topology analysis is performed while a network is being loaded into the plugin, and if the topology is
not applicable, an exception is generated.

View File

@@ -0,0 +1,72 @@
# Add Custom nGraph Operations {#openvino_docs_IE_DG_Extensibility_DG_AddingNGraphOps}
The Inference Engine Extension API allows you to register operation sets (opsets) with custom nGraph operations, which makes it possible to support networks that contain operations unknown to the Inference Engine.
## Operation Class
To add your custom nGraph operation, create a new class that extends `ngraph::Op`, which is in turn derived from `ngraph::Node`, the base class for all graph operations in nGraph. Follow the steps below:
1. Define a `NodeTypeInfo` object that identifies the type of the operation to the graph users and helps with dynamic type resolution. The type info of an nGraph operation currently consists of a string identifier and a version number, but this may change in the future.
2. Implement constructors that can optionally take the operation inputs and attributes as parameters.
3. Override the shape inference method `validate_and_infer_types`. This method is called multiple times during graph manipulations to determine the shapes and element types of the outputs of the operations. You can access the input shapes through the `get_input_partial_shape()` method and input element types through the `get_input_element_type()` method of `ngraph::Node`. Set the inferred shape and element type of the output using `set_output_type`.
4. Override the `copy_with_new_args` method, which allows graph manipulation routines to create copies of this operation and connect it to different nodes during optimization.
5. Override the `visit_attributes` method, which allows serialization and deserialization of attributes. An `AttributeVisitor` is passed to the method, and the implementation is expected to walk over all the attributes in the op using the type-aware `on_attribute` helper. Helpers are already implemented for standard C++ types like `int64_t`, `float`, `bool`, `vector` and for existing nGraph defined types.
Based on that, the declaration of an operation class can look as follows:
@snippet op.hpp op:header
### Class Fields
The provided implementation has several fields:
* `add` of type `int64_t` is an attribute of the custom operation
* `type_info` of type `ngraph::NodeTypeInfo` defines the type and version of the operation
### Operation Constructors
An nGraph operation has two constructors: a default constructor, which allows creating an operation without attributes, and a constructor that creates and validates an operation with the specified inputs and attributes.
@snippet op.cpp op:ctor
### `validate_and_infer_types()`
The `ngraph::Node::validate_and_infer_types` method validates operation attributes and calculates output shapes using the attributes of the operation.
@snippet op.cpp op:validate
### `copy_with_new_args()`
The `ngraph::Node::copy_with_new_args` method creates a copy of the nGraph operation with new inputs.
@snippet op.cpp op:copy
### `visit_attributes()`
The `ngraph::Node::visit_attributes` method allows visiting all operation attributes.
@snippet op.cpp op:visit_attributes
## Register Custom Operations in Extension Class
To add custom operations to the [Extension](Extension.md) class, create an operation set with custom operations and implement the `InferenceEngine::IExtension::getOpSets` method:
@snippet extension.cpp extension:getOpSets
This method returns a map of opsets that exist in the extension library.
nGraph provides an opset mechanism for operation versioning. Different opsets distinguish between different versions of one operation.
When specifying opset names, follow the rules below:
* Use unique opset names.
* Do not use the following built-in opset names: `extension`, `experimental`, `opset1`, `opset2`.
* Make sure that the Model Optimizer and your extension use the same opset names.
* IR v10 layers have the mandatory `version` attribute specifying the opset.
* `opset1` is the name of the default operation set.
Operations from the default opset cannot be redefined.
Use a custom opset to create a new operation or to extend the functionality of an existing operation from another opset.

View File

@@ -0,0 +1,19 @@
# Build Extension Library Using CMake* {#openvino_docs_IE_DG_Extensibility_DG_Building}
Inference Engine build infrastructure provides the Inference Engine Package for application development.
To build an extension library, use the following CMake script:
@snippet CMakeLists.txt cmake:extension
This CMake script finds the Inference Engine and nGraph using the `find_package` CMake command.
To build an extension library, run the commands below:
```sh
$ cd template_extension
$ mkdir build
$ cd build
$ cmake -DInferenceEngine_DIR=[IE_DIR] -Dngraph_DIR=[NGRAPH_DIR] ../
$ cmake --build .
```

View File

@@ -0,0 +1,74 @@
# How to Implement Custom CPU Layers {#openvino_docs_IE_DG_Extensibility_DG_CPU_Kernel}
The primary vehicle for the performance of the CPU codepath in the Inference Engine is the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), and new CPU kernels extend the Inference Engine plugin for Intel MKL-DNN. Implementing the InferenceEngine::ILayerExecImpl interface defines a general CPU-side extension. There are no Intel MKL-DNN specifics in the way you need to implement a kernel.
## Implementation Class
All custom kernels for the CPU plugin should inherit from the InferenceEngine::ILayerExecImpl interface.
Based on that, the declaration of a kernel implementation class can look as follows:
@snippet cpu_kernel.hpp cpu_implementation:header
### Class Fields
The provided implementation has several fields:
* `add` of the type `int64_t` is an attribute of a custom operation
* `inShape` of the type `ngraph::Shape` is an input shape
* `outShape` of the type `ngraph::Shape` is an output shape
* `error` of the type `std::string` is a field to handle errors from a constructor
### Constructor of Implementation
The implementation constructor checks the parameters of the nGraph operation, stores the needed attributes, and stores an error message if an error occurs.
@snippet cpu_kernel.cpp cpu_implementation:ctor
### `getSupportedConfigurations`
InferenceEngine::ILayerExecImpl::getSupportedConfigurations method returns all supported configuration formats (input/output tensor layouts) for your implementation. To specify formats of data, use InferenceEngine::TensorDesc. Refer to the [Memory Primitives](../Memory_primitives.md) section for instructions on how to do it.
@snippet cpu_kernel.cpp cpu_implementation:getSupportedConfigurations
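For orientation, a single-configuration implementation might report one FP32 input and one FP32 output built from the stored shapes. This is a hedged sketch; the class name `OpImplementation` and the field names follow the description above and are not the exact template code:
```cpp
InferenceEngine::StatusCode OpImplementation::getSupportedConfigurations(
        std::vector<InferenceEngine::LayerConfig>& conf, InferenceEngine::ResponseDesc* resp) noexcept {
    // Describe one FP32 input and one FP32 output in the default layout for the stored shapes.
    InferenceEngine::DataConfig inData;
    inData.desc = InferenceEngine::TensorDesc(InferenceEngine::Precision::FP32, inShape,
                                              InferenceEngine::TensorDesc::getLayoutByDims(inShape));
    InferenceEngine::DataConfig outData;
    outData.desc = InferenceEngine::TensorDesc(InferenceEngine::Precision::FP32, outShape,
                                               InferenceEngine::TensorDesc::getLayoutByDims(outShape));
    InferenceEngine::LayerConfig layerConfig;
    layerConfig.inConfs = {inData};
    layerConfig.outConfs = {outData};
    conf.push_back(layerConfig);
    return InferenceEngine::OK;
}
```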
### `init`
InferenceEngine::ILayerExecImpl::init method gets a runtime-selected configuration from a vector that is populated from the `getSupportedConfigurations` method and checks the parameters:
@snippet cpu_kernel.cpp cpu_implementation:init
### `execute`
The InferenceEngine::ILayerExecImpl::execute method accepts and processes the actual tensors as input/output blobs:
@snippet cpu_kernel.cpp cpu_implementation:execute
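As an illustrative sketch (assuming a single FP32 input and output and the `add` attribute described earlier; not the exact template code), an element-wise `execute` might look like this:
```cpp
InferenceEngine::StatusCode OpImplementation::execute(
        std::vector<InferenceEngine::Blob::Ptr>& inputs, std::vector<InferenceEngine::Blob::Ptr>& outputs,
        InferenceEngine::ResponseDesc* resp) noexcept {
    // Illustrative element-wise kernel: out[i] = in[i] + add.
    const float* src = inputs[0]->cbuffer().as<const float*>();
    float* dst = outputs[0]->buffer().as<float*>();
    for (size_t i = 0; i < inputs[0]->size(); ++i) {
        dst[i] = src[i] + static_cast<float>(add);
    }
    return InferenceEngine::OK;
}
```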
## Register Implementation in `Extension` Class
To register a custom kernel implementation in the [Extension](Extension.md) class, implement the following methods:
* <a href="#getImpTypes">getImplTypes</a>
* <a href="#getImplementation">getImplementation</a>
### <a name="getImpTypes"><code>getImplTypes</code></a>
InferenceEngine::IExtension::getImplTypes returns a vector of implementation types for an operation.
@snippet extension.cpp extension:getImplTypes
### <a name="getImplementation"><code>getImplementation</code></a>
InferenceEngine::IExtension::getImplementation returns the kernel implementation with a specified type for an operation.
@snippet extension.cpp extension:getImplementation
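A possible implementation of both methods, assuming the custom operation class `Operation` and the CPU kernel class `OpImplementation` from the previous sections (a sketch, not the exact template code):
```cpp
std::vector<std::string> Extension::getImplTypes(const std::shared_ptr<ngraph::Node>& node) {
    // The custom operation has a CPU implementation only.
    if (std::dynamic_pointer_cast<Operation>(node)) {
        return {"CPU"};
    }
    return {};
}

InferenceEngine::ILayerImpl::Ptr Extension::getImplementation(const std::shared_ptr<ngraph::Node>& node,
                                                              const std::string& implType) {
    if (std::dynamic_pointer_cast<Operation>(node) && implType == "CPU") {
        return std::make_shared<OpImplementation>(node);
    }
    return nullptr;
}
```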
## Load Extension with Executable Kernels to Plugin
Use the `AddExtension` method of the general plugin interface to load your primitives:
```cpp
InferenceEngine::Core core;
// Load CPU extension as a shared library
auto extension_ptr = InferenceEngine::make_so_pointer<InferenceEngine::IExtension>("<shared lib path>");
// Add extension to the CPU device
core.AddExtension(extension_ptr, "CPU");
```


@@ -0,0 +1,25 @@
# Extension Library {#openvino_docs_IE_DG_Extensibility_DG_Extension}
Inference Engine provides an InferenceEngine::IExtension interface, which defines the interface for Inference Engine Extension libraries.
All extension classes should inherit from this interface.
Based on that, the declaration of an extension class can look as follows:
@snippet extension.hpp extension:header
The extension library should contain and export the InferenceEngine::CreateExtension method, which creates an `Extension` object:
@snippet extension.cpp extension:CreateExtension
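For reference, the exported factory typically looks roughly like the sketch below (error handling trimmed to the essentials; treat it as an illustration rather than the exact template code):
```cpp
INFERENCE_EXTENSION_API(InferenceEngine::StatusCode) InferenceEngine::CreateExtension(
        InferenceEngine::IExtension*& ext, InferenceEngine::ResponseDesc* resp) noexcept {
    try {
        ext = new Extension();  // ownership is passed to the caller
        return InferenceEngine::OK;
    } catch (const std::exception& ex) {
        if (resp) {
            std::string err = std::string("Couldn't create extension: ") + ex.what();
            err.copy(resp->msg, 255);
        }
        return InferenceEngine::GENERAL_ERROR;
    }
}
```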
Also, an `Extension` object should implement the following methods:
* InferenceEngine::IExtension::Release deletes an extension object
* InferenceEngine::IExtension::GetVersion returns information about the version of the library
@snippet extension.cpp extension:GetVersion
Implement the InferenceEngine::IExtension::getOpSets method if the extension contains custom layers.
Read the [guide about custom operations](AddingNGraphOps.md) for more information.
To understand how to integrate execution kernels into the extension library, read the [guide about development of custom CPU kernels](CPU_Kernel.md).


@@ -0,0 +1,250 @@
# How to Implement Custom GPU Layers {#openvino_docs_IE_DG_Extensibility_DG_GPU_Kernel}
The GPU codepath abstracts many details about OpenCL&trade;. You need to provide the kernel code in OpenCL C and the configuration file that connects the kernel and its parameters to the parameters of the layer.
There are two options for using the custom layer configuration file:
* Include a section with your kernels into the global automatically-loaded `cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file, which is hosted in the `<INSTALL_DIR>/deployment_tools/inference_engine/bin/intel64/{Debug/Release}` folder
* Call the `InferenceEngine::Core::SetConfig()` method from your application with the `InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE` key and the configuration file name as a value before loading the network that uses custom layers to the plugin:
```cpp
InferenceEngine::Core core;
// Load GPU Extensions
core.SetConfig({ { InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE, "<path_to_the_xml_file>" } }, "GPU");
```
All Inference Engine samples, except trivial `hello_classification`,
feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom layers for the classification sample, run the command below:
```sh
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU \
 -c <absolute_path_to_config>/custom_layer_example.xml
```
## Configuration File Format <a name="config-file-format"></a>
The configuration file is expected to follow the `.xml` file structure
with a node of the type `CustomLayer` for every custom layer you provide.
The definitions described in the sections below use the following notations:
Notation | Description
---|---
(0/1) | Can have 0 or 1 instances of this node/attribute
(1) | Must have only 1 instance of this node/attribute
(0+) | Can have any number of instances of this node/attribute
(1+) | Can have 1 or more instances of this node/attribute
### CustomLayer Node and Sub-node Structure
`CustomLayer` node contains the entire configuration for a single custom
layer.
| Attribute Name |\# | Description |
|-----|-----|-----|
| `name` | (1) | The name of the layer type to be used. This name should be identical to the type used in the IR.|
| `type` | (1) | Must be `SimpleGPU`. |
| `version` | (1) | Must be `1`. |
**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+),
`WorkSizes` (0/1)
### Kernel Node and Sub-node Structure
The `Kernel` node contains all kernel source code configuration. No additional node
structure exists.
**Sub-nodes**: `Source` (1+), `Define` (0+)
### Source Node and Sub-node Structure
`Source` node points to a single OpenCL source file.
| Attribute Name | \# | Description |
|-----|-----|-----|
| `filename` | (1) | Name of the file containing OpenCL source code. Notice that path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
**Sub-nodes**: None
### Define Node and Sub-node Structure
`Define` node configures a single `#&zwj;define` instruction to be added to
the sources during compilation (JIT).
| Attribute Name | \# | Description |
|------|-------|------|
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well (taken as a string). |
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
| `default` | (0/1) | The default value to be used if the specified parameter is missing from the layer in the IR. |
**Sub-nodes:** None
The resulting JIT has the following form:
`#&zwj;define [name] [type] [value/default]`.
### Buffers Node and Sub-node Structure
The `Buffers` node configures all input/output buffers for the OpenCL entry
function. No additional node structure exists.
**Sub-nodes:** `Data` (0+), `Tensor` (1+)
### Data Node and Sub-node Structure
`Data` node configures a single input with static data (for example,
weights or biases).
| Attribute Name | \# | Description |
|----|-----|------|
| `name` | (1) | Name of a blob attached to a layer in the IR |
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to |
**Sub-nodes**: None
### Tensor Node and Sub-node Structure
`Tensor` node configures a single input or output tensor.
| Attribute Name | \# | Description |
|------|-------|-------|
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
| `type` | (1) | `input` or `output` |
| `port-index` | (1) | 0-based index in the layers input/output ports in the IR |
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB` (also in all lowercase). Default value: `BFYX` |
### CompilerOptions Node and Sub-node Structure
`CompilerOptions` node configures the compilation flags for the OpenCL
sources.
| Attribute Name | \# | Description |
|--------|-----|------|
| `options` | (1) | Options string to be passed to the OpenCL compiler |
**Sub-nodes**: None
### WorkSizes Node and Sub-node Structure
`WorkSizes` node configures the global/local work sizes to be used when
queuing the OpenCL program for execution.
| Attribute Name | \# | Description |
|-----|------|-----|
| `global`<br>`local` | (0/1)<br>(0/1) | An array of up to 3 integers (or formulas) for defining the OpenCL work-sizes to be used during execution.<br> The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,% (all evaluated in integer arithmetic). <br>Default value: `global="B*F*Y*X" local=""` |
| `dim` | (0/1) | A tensor to take the work size from. Accepted values: `input N`, `output`, where `N` is an index of input tensor starting with 0. Default value: `output` |
**Sub-nodes**: None
## Example Configuration File
The following code sample provides an example configuration file (in the
`.xml` format). For information on configuration file structure, see
[Configuration File Format](#config-file-format).
```xml
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
<Kernel entry="example_relu_kernel">
<Source filename="custom_layer_kernel.cl"/>
<Define name="neg_slope" type="float" param="negative_slope" default="0.0"/>
</Kernel>
<Buffers>
<Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
<Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
</Buffers>
<CompilerOptions options="-cl-mad-enable"/>
<WorkSizes global="X,Y,B*F"/>
</CustomLayer>
```
## Built-In Defines for Custom Layers
The following table includes definitions that are attached before
the user sources, where `<TENSOR>` is the actual input and output, for
example, `INPUT0` or `OUTPUT0`.
For an example, see [Example Kernel](#example-kernel).
| Name | Value |
|---|---|
| `NUM_INPUTS` | Number of the input tensors bound to this kernel |
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel |
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array |
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel |
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array |
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX` |
| `<TENSOR>_DIMS_SIZE`| The size of the `<TENSOR>_DIMS` array.|
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`|
| `<TENSOR>_FORMAT_` | The format of the tensor: BFYX, BYXF, YXFB, FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with `#&zwj;ifdef/#&zwj;endif`. |
| `<TENSOR>_LOWER_PADDING` | An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.|
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array |
| `<TENSOR>_UPPER_PADDING` | An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX. |
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array |
| `<TENSOR>_PITCHES` | The number of elements between adjacent elements in each dimension. Always ordered as BFYX.|
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array |
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element (bypassing the lower padding) |
All `<TENSOR>` values are automatically defined for every tensor
bound to this layer (`INPUT0`, `INPUT1`, `OUTPUT0`, and so on), as shown
in the following example:
```c
#define INPUT0_DIMS_SIZE 4
#define INPUT0_DIMS (int []){ 1,96,55,55, }
```
## Example Kernel<a name="example-kernel"></a>
```c
#pragma OPENCL EXTENSION cl_khr_fp16 : enable
__kernel void example_relu_kernel(
const __global INPUT0_TYPE* input0,
__global OUTPUT0_TYPE* output)
{
const uint idx = get_global_id(0);
const uint idy = get_global_id(1);
const uint idbf = get_global_id(2);//batches*features, as OpenCL supports 3D nd-ranges only
const uint feature = idbf%OUTPUT0_DIMS[1];
const uint batch = idbf/OUTPUT0_DIMS[1];
//notice that pitches are in elements, not in bytes!
const uint in_id = batch*INPUT0_PITCHES[0] + feature*INPUT0_PITCHES[1] + idy*INPUT0_PITCHES[2] + idx*INPUT0_PITCHES[3] + INPUT0_OFFSET;
const uint out_id = batch*OUTPUT0_PITCHES[0] + feature*OUTPUT0_PITCHES[1] + idy*OUTPUT0_PITCHES[2] + idx*OUTPUT0_PITCHES[3] + OUTPUT0_OFFSET;
INPUT0_TYPE value = input0[in_id];
//neg_slope (which is non-zero for leaky ReLU) is put automatically as #define, refer to the config xml
output[out_id] = value < 0 ? value * neg_slope : value;
}
```
> **NOTE:** As described in the previous section, all the things like
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
> the Inference Engine for efficiency reasons. See [Debugging
> Tips](#debugging-tips) for information on debugging the results.
> **NOTE**: Several GPU-targeted kernels are also added to the binaries upon samples compilation
> so that the sample application can easily load them.
> Refer to the `cldnn_global_custom_kernels` folder in the GPU plugin installation directory.
## Debugging Tips<a name="debugging-tips"></a>
* **Dumping the Resulting Kernels**.
It is recommended to get a dump of the kernel with all of
the values set by the Inference Engine, such as tensor sizes,
floating-point, and integer kernel parameters. To get the dump, add the
following line to your code that configures the GPU plugin to output the
custom kernels:
```cpp
core.SetConfig({ { PluginConfigParams::KEY_DUMP_KERNELS, PluginConfigParams::YES } }, "GPU");
```
When the Inference Engine compiles the kernels for the specific network,
it also outputs the resulting code for the custom kernels. In the
directory of your executable, find files like
`clDNN_program0.cl`, `clDNN_program1.cl`. There are as many files as
distinct sets of parameters for your custom kernel: different input
tensor sizes and kernel parameters.
* **Using `printf` in the OpenCL™ Kernels**.
To debug specific values, you can use `printf` in your kernels.
However, be careful not to output excessively, as that would generate too much
data. The `printf` output can be truncated to fit the buffer, and because of
buffering, you actually get the entire buffer of output only when the
execution ends.<br>
For more information, refer to the [printf
Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).
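For example, a `printf` guarded so that only one work item prints keeps the output small. The kernel below is an illustrative sketch; it relies on the `INPUT0_DIMS` and `INPUT0_TYPE`/`OUTPUT0_TYPE` defines described above:
```cpp
__kernel void debug_relu_kernel(const __global INPUT0_TYPE* input0,
                                __global OUTPUT0_TYPE* output)
{
    // Print the input dimensions from a single work item only.
    if (get_global_id(0) == 0 && get_global_id(1) == 0 && get_global_id(2) == 0) {
        printf("INPUT0_DIMS: %d %d %d %d\n",
               INPUT0_DIMS[0], INPUT0_DIMS[1], INPUT0_DIMS[2], INPUT0_DIMS[3]);
    }
    const uint id = get_global_id(0);
    INPUT0_TYPE value = input0[id];
    output[id] = value < 0 ? 0 : value;
}
```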


@@ -0,0 +1,56 @@
# Inference Engine Extensibility Mechanism {#openvino_docs_IE_DG_Extensibility_DG_Intro}
The Inference Engine Extensibility API allows you to add support for custom operations to the Inference Engine.
An extension should contain operation sets with custom operations and execution kernels for those operations.
Physically, an extension library can be represented as a dynamic library exporting the single `CreateExtension` function that creates a new extension instance.
An extension library can be loaded into the InferenceEngine::Core object using the InferenceEngine::Core::AddExtension method.
## Inference Engine Extension Library
Inference Engine Extension dynamic library contains several main components:
* [Extension class](Extension.md):
- Contains custom operation sets
- Provides CPU implementations for custom operations
* [Custom operations](Intro.md):
- Allows using InferenceEngine::Core::ReadNetwork to read an Intermediate Representation (IR) with unsupported operations
- Allows creating `ngraph::Function` with unsupported operations
- Provides a shape inference mechanism for custom operations
> **NOTE**: This documentation is written based on the `Template extension`, which demonstrates extension
development details. Find the complete code of the `Template extension`, which is fully compilable and up-to-date,
at `<dldt source tree>/docs/template_extension`.
## Execution Kernels
The Inference Engine workflow involves the creation of custom kernels and either custom or existing operations.
An _Operation_ is a Network building block implemented in the training framework, for example, `Convolution` in Caffe*.
A _Kernel_ is defined as the corresponding implementation in the Inference Engine.
Refer to the [Custom Layers in the Model Optimizer](../../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) section for details on how
mapping between framework layers and Inference Engine kernels is registered.
In short, you can plug your own kernel implementations into the Inference Engine and map them to the layers in the original framework.
The following pages describe how to integrate custom _kernels_ into the Inference Engine:
* [Introduction to development of custom CPU kernels](CPU_Kernel.md)
* [Introduction to development of custom GPU kernels](GPU_Kernel.md)
* [Introduction to development of custom VPU kernels](VPU_Kernel.md)
## Deprecated Extensibility API
The Shape Inference API and some methods of the extensibility mechanism were deprecated and will be removed soon.
The old extensibility mechanism contains two parts: shape inference and execution kernels.
* [Shape Inference](deprecated/ShapeInfer.md)
* [Execution Kernel](deprecated/Factory.md)
## Additional Resources
* [Build an extension library using CMake*](Building.md)
## See Also
* [Using Inference Engine Samples](../Samples_Overview.md)
* [Hello Shape Infer SSD sample](../../../inference-engine/samples/hello_reshape_ssd/README.md)


@@ -0,0 +1,679 @@
# How to Implement Custom Layers for VPU (Intel® Neural Compute Stick 2) {#openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel}
> **NOTE:** OpenCL™ custom layer support is available in the preview mode.
> **NOTE:** This section assumes you are familiar with developing kernels using OpenCL™.
To customize your topology with an OpenCL™ layer, follow the steps below:
1. Write and compile your OpenCL™ code with the standalone offline OpenCL™ compiler (`clc`).
2. Write a configuration file to bind the OpenCL™ kernel to the topology file (`.xml`) of the model IR.
3. Pass the configuration file to the Inference Engine together with the model IR.
## Compile OpenCL™ code for VPU (Intel® Neural Compute Stick 2)
> **NOTE:** OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE* processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta*, and is distributed under a license agreement between Intel® and Codeplay* Software Ltd.
The OpenCL™ toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only, so first compile OpenCL C code using the standalone `clc` compiler. You can find the compiler binary at `<INSTALL_DIR>/deployment_tools/tools/cl_compiler`.
> **NOTE:** By design, custom OpenCL layers support any OpenCL kernels written against the OpenCL 1.2 specification. The half float
extension is also supported, and kernels are optimized for this type, because it is a native type for Intel® Movidius™ VPUs.
1. Prior to running a compilation, make sure that the following variables are set:
* `SHAVE_MA2X8XLIBS_DIR=<INSTALL_DIR>/deployment_tools/tools/cl_compiler/lib/`
* `SHAVE_LDSCRIPT_DIR=<INSTALL_DIR>/deployment_tools/tools/cl_compiler/ldscripts/`
* `SHAVE_MYRIAD_LD_DIR=<INSTALL_DIR>/deployment_tools/tools/cl_compiler/bin/`
* `SHAVE_MOVIASM_DIR=<INSTALL_DIR>/deployment_tools/tools/cl_compiler/bin/`
2. Run the compilation with the command below. You should use `--strip-binary-header` to make an OpenCL runtime-agnostic binary runnable with the Inference Engine.
```bash
cd <INSTALL_DIR>/deployment_tools/tools/cl_compiler/bin
./clc --strip-binary-header custom_layer.cl -o custom_layer.bin
```
## Write a Configuration File
To bind the layer you customize to the topology IR, prepare a configuration file so that the Inference Engine can find the parameters for your kernel and the description of the execution work grid.
For example, given the following OpenCL kernel signature:
```cpp
__kernel void reorg_nhwc(__global const half *src, __global half *out, int w, int h, int c, int stride);
```
The configuration file for this kernel might look as follows:
```xml
<CustomLayer name="ReorgYolo" type="MVCL" version="1">
<Kernel entry="reorg_nhwc">
<Source filename="reorg.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src" type="input" port-index="0" format="BYXF"/>
<Tensor arg-name="out" type="output" port-index="0" format="BYXF"/>
<Scalar arg-name="w" type="int" port-index="0" source="I.X" />
<Scalar arg-name="h" type="int" port-index="0" source="I.Y" />
<Scalar arg-name="c" type="int" port-index="0" source="I.F" />
<Scalar arg-name="stride" type="int" source="stride" />
</Parameters>
<WorkSizes dim="input,0" global="(Y+7)/8*8,1,1" local="8,1,1"/>
</CustomLayer>
```
Each custom layer is described with the `CustomLayer` node. It has the following nodes and attributes:
- Root node `CustomLayer` contains the following attributes:
- `name` (Required) A name of the Inference Engine layer to bind the kernel with.
- `type` and `version` (Required) Reserved for future use. Set them to `MVCL` and `1` respectively.
- `max-shaves` (Optional) The maximum number of SHAVE cores that should be dedicated to the layer. It is useful for debugging concurrency issues or for saving resources if a memory-bound kernel does not scale well with the number of cores, so more resources can be left for the rest of the topology.
- Sub-node `Kernel` must contain the following attributes:
- `entry` A name of your kernel function as you defined it in a source file (in the example above, it is `reorg_nhwc`).
- Node `Source` must contain the following attributes:
- `filename` A path to a compiled binary relative to the `.xml` binding file.
- Sub-node `Parameters` Describes parameters bindings. For more information, see the description below.
- Sub-node `WorkSizes` Describes local and global work group sizes and the source for dimension deduction as a pair `direction,port`. In the example above, the work group is described relatively to the dimension of the input tensor that comes through port 0 in the IR. `global` and `local` work group configurations support any simple math expressions with +,-,\*,/, and () from `B`(batch), `Y`(height), `X`(width) and `F`(channels).
- Sub-node `Where` Allows customizing bindings with the `key="value"` attribute. For example, to substitute only 3x3 convolutions, write `<Where kernel="3,3"/>` in the binding XML.
The parameter description supports `Tensor` nodes of one of the tensor types (`input`, `output`, `input_buffer`, `output_buffer`, or `data`), `Scalar` nodes, or `Data` nodes, and has the following format:
- Each `Tensor` node of `input` or `output` type must contain the following attributes:
- `arg-name` A name of a kernel parameter in the kernel signature.
- `type` Node type: `input` or `output` as in the IR.
- `port-index` A number of input/output ports as in the IR.
- `format` The channel order in the tensor. Optional conversion layers are generated if the custom layer format is not compatible with formats of neighboring layers. `BFXY`, `BYXF`, and `ANY` formats are supported currently.
- Each `Tensor` node of `input_buffer` or `output_buffer` type must contain the following attributes:
- `arg-name` A name of a kernel parameter in the kernel signature.
- `type` Node type: `input_buffer` or `output_buffer`. Use the appropriate type to bind multiple kernels that correspond to different stages of the same layer.
- `port-index` The unique identifier to bind by.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` Amount of bytes needed. The current expression syntax supports only expressions over dimensions of the selected input/output tensor or constants and might be extended in the future.
Here is an example of multi-stage MVN layer binding:
```xml
<CustomLayer name="MVN" stage="0" type="MVCL" version="1">
<Kernel entry="reduction_mean">
<Source filename="mvn.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="mean" type="output_buffer" port-index="0" dim="output,0" size="Y*F*4"/>
<Tensor arg-name="variance" type="output_buffer" port-index="1" dim="output,0" size="Y*F*4"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
<CustomLayer name="MVN" stage="1" type="MVCL" version="1">
<Kernel entry="mvn_scale">
<Source filename="mvn_scale_changed_orded.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<Tensor arg-name="mean_part" type="input_buffer" port-index="0" dim="output,0" size="Y*F*4"/>
<Tensor arg-name="power_mean" type="input_buffer" port-index="1" dim="output,0" size="Y*F*4"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
```
- Each `Tensor` node that has the type `data` must contain the following attributes:
- `source` A name of the blob as it is in the IR (a typical example is `weights` for convolution).
- `format` Specifies the channel order in the tensor. Optional conversion layers are generated if the custom layer format is not compatible with the formats of neighboring layers.
```xml
<CustomLayer name="BinaryConvolution" type="MVCL" version="1">
<Kernel entry="binary_convolution">
<Source filename="binary_layers.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Data arg-name="weights_data" type="data" source="weights" format="ANY"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<!--other parameters -->
</Parameters>
<WorkSizes dim="output,0" global="X,Y,F" local="1,1,1"/>
</CustomLayer>
```
- Each `Scalar` node must contain the following attributes:
- `arg-name` A name of a kernel parameter in the kernel signature.
- `type` `int` or `float` value. It is used for correct argument extraction from IR parameters.
- `source` Contains the name of the parameter in the IR file or input/output (`I`/`O`, `In`/`On`, where `n` is a port number)
followed by dimension `B`(batch), `Y`(height), `X`(width), or `F`(channels).
- Each `Data` node must contain the following attributes:
- `arg-name` A name of a kernel parameter in the kernel signature.
- `type` Node type. Currently, `local_data` is the only supported value, which defines a buffer allocated in fast local on-chip memory. It is limited to 100 KB for all `__local` and
`__private` arrays defined inside the kernel as well as all `__local` parameters passed to the kernel. Keep in mind that the manual-DMA extension requires double buffering.
If the custom layer is detected to run out of local memory, the inference fails.
- `dim` The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` Amount of bytes needed. The current expression syntax supports only expressions over dimensions of the selected input/output tensor or constants and may be extended in the future.
The example binding below illustrates a kernel with two local buffers passed to the kernel.
```xml
<CustomLayer name="GRN" type="MVCL" version="1">
<Kernel entry="grn_NCHW">
<Source filename="grn.bin"/>
</Kernel>
<Parameters>
<Tensor arg-name="src_data" type="input" port-index="0" format="BFYX"/>
<Tensor arg-name="dst_data" type="output" port-index="0" format="BFYX"/>
<Data arg-name="src" type="local_data" dim="input,0" size="X*F*2" />
<Data arg-name="dst" type="local_data" dim="input,0" size="X*F*2" />
<Scalar arg-name="C" type="int" port-index="0" source="I.F" />
<Scalar arg-name="bias" type="float" source="bias" />
</Parameters>
<WorkSizes dim="input,0" global="X,Y,1" local="X,1,1"/>
</CustomLayer>
```
## Pass Configuration File to Inference Runtime
> **NOTE**: If both native and custom layer implementations are present, the custom kernel has a priority over the native one.
Before loading the network that features the custom layers, provide a separate configuration file and load it using the InferenceEngine::Core::SetConfig() method with the PluginConfigParams::KEY_CONFIG_FILE key and the configuration file name as a value:
```cpp
InferenceEngine::Core core;
// Load custom layers
core.SetConfig({ { InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE, "<path to the xml file>" } }, "MYRIAD");
```
Optionally, set a path to a custom layers description with a pair of `VPU_CUSTOM_LAYERS` and `/path/to/your/customLayers.xml`
as a network configuration:
```cpp
InferenceEngine::Core core;
std::map<std::string, std::string> networkConfig;
config["VPU_CUSTOM_LAYERS"] = "/path/to/your/customLayers.xml";
// Load custom layers in network config
auto exeNetwork = core.LoadNetwork(cnnNetwork, "MYRIAD", networkConfig);
```
## Optimizing Kernels with OpenCL™ for VPU (Intel® Neural Compute Stick 2)
This section provides optimization guidelines for writing custom layers with OpenCL for VPU devices. Knowledge of the general OpenCL
programming model and the OpenCL kernel language is assumed and is not a subject of this section. The mapping of the OpenCL model to VPU is described in the table below.
| OpenCL Model | VPU Mapping|
|-----|----|
| Device code | Executed on SHAVE cores |
| Private memory | Mapped to CMX internal memory, limited to 100KB per work group, valid only while the work group is executed |
| Local memory | Mapped to CMX internal memory, limited to 100KB per work group, valid only while the work group is executed |
| Global memory | Mapped to DDR, used to pass execution preserved parameters for inputs, outputs, and blobs |
| Work group | Executed on a single SHAVE core iterating over multiple work items |
Note that by the OpenCL specification, the work group execution order is not specified. This means that it is your
responsibility to ensure that race conditions among work groups are not introduced. The custom layer runtime splits the
work grid evenly among the available compute resources and executes the work groups in an arbitrary order. This static scheduling approach works best if the load is evenly spread out across work groups, which is a typical case for Deep Learning kernels. The following guidelines are recommended for work group partitioning:
1. Split work evenly across work groups.
2. Adjust work group granularity to maintain an equal workload for all compute cores.
3. Set the maximum number of cores (using the `max-shaves` attribute for the `CustomLayer` node). This keeps more resources for the rest of the topology. It is also useful if the kernel scalability has reached its limits, which may happen while optimizing memory-bound kernels or kernels with poor parallelization.
4. Try an alternate data layout (`BFXY`/`BYXF`) for the kernel if it improves work group partitioning or data access patterns.
Consider full topology performance (not just specific layer boost) since data conversion layers would be automatically inserted
as appropriate.
Offline OpenCL compiler (`clc`) features automatic vectorization over `get_global_id(0)` usage, if uniform access is detected.
For example, the kernel below could be automatically vectorized:
```cpp
__kernel void cvtf32f16(__global float* restrict inImage, __global half* restrict outImage,
float scale, float bias)
{
int idx = get_global_id(0) + get_global_id(1) * get_global_size(0) + get_global_id(2) * get_global_size(0) * get_global_size(1);
outImage[idx] = convert_half(inImage[idx]*scale+bias);
}
```
However, this work-group based vectorizer (WGV) conflicts with the default LLVM vectorizer based on superword level parallelism
(SLP) for the current compiler version. Manual vectorization is recommended to provide the best performance for non-uniform code
patterns. WGV works if and only if vector types are not used in the code.
Here is a short list of optimization tips:
1. Help auto-vectorizer ensure non-aliasing pointers for kernel parameters by putting `restrict` where possible.
- This may give a performance boost, especially for kernels with unrolling, like `ocl_grn` from the example below.
- Place `restrict` markers for kernels with manually vectorized codes. In the `ocl_grn` kernel below, the unrolled version without `restrict` is up to 20% slower than the most optimal one, which combines unrolling and `restrict`.
2. Put `#&zwj;pragma unroll N` in your loop header. Since the compiler does not trigger unrolling by default, it is your responsibility to
annotate the code with pragmas as appropriate. The `ocl_grn` version with `#&zwj;pragma unroll 4` is up to 50% faster, most of which comes from unrolling the first loop, because LLVM, in general, is better at scheduling 3-stage loops (load-compute-store), while the first loop
`variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);` is only 2-stage (load-compute). Pay
attention to unrolling such cases first. The unrolling factor is loop-dependent. Choose the smallest number that
still improves performance as an optimum between the kernel size and execution speed. For this specific kernel, changing the unroll factor from `4` to `6` results in the same performance, so an unroll factor of 4 is optimal. For Intel® Neural Compute Stick 2, unrolling is conjugated with the automatic software pipelining for load, store, and compute stages:
```cpp
__kernel void ocl_grn(__global const half* restrict src_data, __global half* restrict dst_data, int C, float bias)
{
int x = get_global_id(0);
int W = get_global_size(0);
int y = get_global_id(1);
int H = get_global_size(1);
float variance = bias + 1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);
variance = 1.f / native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
dst_data[c*H*W + y*W + x] = (half)((float)src_data[c*H*W + y*W + x] * variance);
}
```
To check the efficiency of WGV, you can compare performance of the kernel above with the kernel below, which is manually vectorized over width:
```cpp
__kernel void ocl_grn_line(__global const half* restrict src_data, __global half* restrict dst_data, int C, int W, float bias)
{
int y = get_global_id(1);
int H = get_global_size(1);
for (int x = 0; x < W/8; x++)
{
float8 variance = (float8)(bias+1e-9f);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
__global const half8* restrict src_line = ((__global const half8 * restrict)(src_data + c*H*W + y*W));
half8 sh = src_line[x];
variance += convert_float8(sh*sh);
}
variance = 1.f/native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
__global const half8* restrict src_line = ((__global const half8 * restrict)(src_data + c*H*W + y*W));
__global half8* restrict dst_line = ((__global half8 * restrict)(dst_data + c*H*W + y*W));
dst_line[x] = convert_half8(convert_float8(src_line[x])*variance);
}
}
for (int x = W/8*8; x < W; x++)
{
float variance = bias+1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
variance += (float)(src_data[c*H*W + y*W + x]*src_data[c*H*W + y*W + x]);
variance = 1.f/native_sqrt(variance);
#pragma unroll 4
for (int c = 0; c < C; c++)
dst_data[c*H*W + y*W + x] = (float)src_data[c*H*W + y*W + x]*variance;
}
}
```
Both versions perform the same, but the second one has more complex code.
3. If it is easy to predict the work group size, you can also use the `reqd_work_group_size` kernel attribute to ask the compiler
to unroll the code up to the local size of the work group (see the sketch after this item). Note that if the kernel is actually executed with a
different work group configuration, the result is undefined.
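A minimal sketch of such an annotation (the kernel body is illustrative only):
```cpp
// Promise the compiler a fixed 8x1x1 work group so it can unroll the code accordingly.
__attribute__((reqd_work_group_size(8, 1, 1)))
__kernel void scale_x2(__global const half* restrict src, __global half* restrict dst)
{
    const int idx = get_global_id(0) + get_global_id(1) * get_global_size(0);
    dst[idx] = src[idx] * (half)2.0f;
}
```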
4. Prefer to use the `half` compute if it keeps reasonable accuracy. The 16-bit float is a native type for Intel® Neural Compute Stick 2, and most of the `half_*` functions are mapped to a single hardware instruction.
Use the standard `native_*` functions for the rest of the types.
5. Prefer to use the `convert_half` function over `vstore_half` if conversion to 32-bit float is required. `convert_half` is mapped to a single hardware instruction. For the `cvtf32f16` kernel above, the line `outImage[idx] = convert_half(inImage[idx]*scale+bias);` is 8 times slower than the code with `vstore_half`.
6. Mind early exits. Early exit may be extremely costly for the current version of the `clc` compiler due to conflicts with the
auto-vectorizer. The generic advice is to set the local size in the `x` dimension equal to the input and/or output width.
If it is impossible to define a work grid that exactly matches the inputs and/or outputs to eliminate checks, for example,
`if (get_global_id(0) >= width) return`, use a line-wise kernel variant with manual vectorization.
The kernel example below demonstrates the impact of early exits on kernel performance.
```cpp
// Initial version
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int stride)
{
int w = get_global_id(0);
int W = get_global_size(0);
int h = get_global_id(1);
int H = get_global_size(1);
int c = get_global_id(2);
int C = get_global_size(2);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This `reorg` kernel is auto-vectorizable, but the input for the YOLO v2 topology is `NCHW=<1,64,26,26>`, which is not a multiple of the vector width (which is `8` for the `half` data type). As a result, the Inference Engine does not select the auto-vectorized kernel.
To compare the performance of the auto-vectorized and scalar versions of the kernel, change the input size to `NCHW=<1,64,26,32>`. This allows the auto-vectorized version to be selected by the Inference Engine and can give you about a 30% uplift.
Since the auto-vectorized version is faster, it makes sense to enable it for the YOLO v2 topology input size by setting the local size to a multiple of the vector width (for example, 32) and adjusting the global sizes accordingly. As a result, the execution work grid exceeds the actual input dimensions, so out-of-bound checks should be inserted. See the updated kernel version below:
```cpp
// Version with out-of-bound checks added
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int W, int stride)
{
int w = get_global_id(0);
w = min(w, W-1);
int h = get_global_id(1);
int H = get_global_size(1);
int c = get_global_id(2);
int C = get_global_size(2);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This code performs the same as the initial kernel above (scalar) due to branching overhead. If you replace the min/max expression `w = min(w, W-1);` with `if (w >= W) return;`, the runtime increases up to 2x compared to the code without branching (initial version).<br>
If branching is inevitable for your element-based kernel, it is recommended to change the scheme to line-based. See the kernel variant below:
```cpp
// Line-wise version
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int H, int W, int stride)
{
int h = min((int)get_global_id(0), H-1);
int c = get_global_id(1);
int C = get_global_size(1);
int C2 = C/(stride*stride);
int offset = c / C2;
int c2 = c - C2 * offset;
int H2 = H*stride;
int W2 = W*stride;
for (int w = 0; w < W; ++w)
{
int h2 = h*stride + offset / stride;
int w2 = w*stride + offset - stride * (offset / stride);
out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
}
```
This decreases the execution time up to 40% against the best performing vectorized kernel without early exits (initial version).
7. Reuse computations among work items by using line-based kernels or sharing values through `__local` memory.
8. Improve data access locality. Most custom kernels are memory bound, while convolution and fully connected layers are hardware-implemented. The code below demonstrates a further optimized version of the `reorg` kernel unrolled by `stride`:
```cpp
// Unrolled line-wise version
__kernel void reorg_unrolled_by_stride(const __global half* restrict src, __global half* restrict dst,
int H, int W, int stride)
{
int h = min((int)get_global_id(0), H-1);
int c2 = get_global_id(1);
int C2 = get_global_size(1);
int C = C2*stride*stride;
int H2 = H*stride;
int W2 = W*stride;
for (int stride_y = 0; stride_y < stride; stride_y++)
for (int stride_x = 0; stride_x < stride; stride_x++)
for (int w2 = 0, w = 0; w < W; w2 += stride, w++)
dst[W*H*C2*(stride_y*stride+stride_x) + W*H*c2 + W*h + w] = src[W2*H2*c2 + W2*h*stride + W2*stride_y + w2 + stride_x];
}
```
The `src` data in this case is loaded only once. As a result, the cycle count drops up to 45% against the line-wise version.
9. Copy data from `__global` to `__local` or `__private` memory if the data is accessed more than once. Access to
`__global` memory is orders of magnitude slower than access to `__local`/`__private` memory due to the statically scheduled pipeline, which
stalls completely on memory access without any prefetch. The same recommendation applies to scalar load/store
from/to a `__global` pointer, since work-group copying could be done in a vector fashion.
10. Use the manual DMA extension. Local (on-chip) memory throughput is up to 24x higher than DDR throughput. Starting from OpenVINO™ 2020.1, VPU OpenCL features a manual-DMA kernel extension to copy the sub-tensor used by a work group into local memory and to perform the computation without DDR involved. Here is a simple GRN kernel implementation that runs over DDR. The local size is equal to (width of the input tensor, 1, 1) to define a large enough work group to get the code automatically vectorized and unrolled, while the global size is (width of the input tensor, height of the input tensor, 1):
```cpp
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
int C,
float bias)
{
float variance = bias + 1e-9f;
#pragma unroll 4
for (int c = 0; c < C; c++)
{
float val = (float) src_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)];
variance += val*val;
}
half hvariance = (half)(native_rsqrt((half)(variance/16.f))*0.25f);
#pragma unroll 4
for (int c = 0; c < C; c++)
{
dst_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)]
= src_data[c*get_global_size(1)*get_global_size(0) + get_global_id(1)*get_global_size(0) + get_global_id(0)] * hvariance;
}
}
```
This kernel can be rewritten to introduce the special data binding `__dma_preload` and `__dma_postwrite` intrinsics. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName`, and `__dma_postwrite_kernelName`. `__dma_preload_kernelName` for a particular work group `n` is guaranteed to be executed before the `n`-th work group itself, while `__dma_postwrite_kernelName` is guaranteed to be executed after the corresponding work group. You can define these functions to copy data between `__global` and `__local` memory. The syntax requires an exact function signature match. The example below illustrates how to prepare your kernel for manual DMA.
```cpp
__kernel void __dma_preload_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
// TODO: copy the required piece of the src tensor into local_src
}
__kernel void __dma_postwrite_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local const half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
// TODO: copy the computed piece of local_dst back into dst
}
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
__local half* restrict src,
__local half* restrict dst,
int C,
float bias)
{
// same as the example above
}
```
The GRN kernel operates on channel-major tensors to compute the average over the full channel range and then normalizes input elements to produce the output.
As a part of the manual DMA extension, a group of work-group copy functions is introduced in addition to `async_work_group_copy`, which is also mapped to a DMA call.
Here is the list of supported functions:
```cpp
// 2D sub-tensor copy
event_t WorkGroupDmaCreateStrideTransaction(
const local T *src,
global T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t size, // total number of bytes loaded for all lines from source to destination
event_t event) __OVERLOAD;
event_t WorkGroupDmaCreateStrideTransaction(
const global T *src,
local T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t size, // total number of bytes loaded for all lines from source to destination
event_t event) __OVERLOAD;
// 3D sub-tensor copy
event_t WorkGroupDmaCreate3DTransaction(
const local T *src,
global T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t num_planes, // number of planes to be copied
size_t src_plane_stride, // stride between corresponding 2 consecutive planes of source in bytes
size_t dst_plane_stride, // stride between corresponding 2 consecutive planes of destination in bytes
size_t size, // size of the loaded plane in bytes, analogous to the size in the 2D case
event_t event) __OVERLOAD;
event_t WorkGroupDmaCreate3DTransaction(
const global T *src,
local T *dst,
size_t src_width, // width of the line of source in bytes
size_t dst_width, // width of the line of destination in bytes
size_t src_stride, // stride between corresponding 2 consecutive lines of source in bytes
size_t dst_stride, // stride between corresponding 2 consecutive lines of destination in bytes
size_t num_planes, // number of planes to be copied
size_t src_plane_stride, // stride between corresponding 2 consecutive planes of source in bytes
size_t dst_plane_stride, // stride between corresponding 2 consecutive planes of destination in bytes
size_t size, // size of the loaded plane in bytes, analogous to the size in the 2D case
event_t event) __OVERLOAD;
```
where `T` can be `uchar`, `char`, `short`, `ushort`, `int`, `uint`, `long`, `ulong`, `half` or `float`.
Modified version of the GRN kernel could be the following:
```cpp
__kernel void __dma_preload_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
WorkGroupDmaCreate3DTransaction(
src + get_group_id(0)*get_local_size(0)
+ get_group_id(1)*get_local_size(1)*get_global_size(0), // src
local_src, // dst
get_local_size(0) * sizeof(half), // src width
get_local_size(0) * sizeof(half), // dst width
get_global_size(0) * sizeof(half), // src stride
get_local_size(0) * sizeof(half), // dst stride
C, // num planes
get_global_size(0) * get_global_size(1) * sizeof(half), // src plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // dst plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // plane size
0);
}
__kernel void __dma_postwrite_grn_NCHW(
__global const half* restrict src,
__global half* restrict dst,
__local const half* restrict local_src,
__local half* restrict local_dst,
int C,
float bias)
{
WorkGroupDmaCreate3DTransaction(
local_dst, // src
dst + get_group_id(0)*get_local_size(0)
+ get_group_id(1)*get_local_size(1)*get_global_size(0), // dst
get_local_size(0) * sizeof(half), // src width
get_local_size(0) * sizeof(half), // dst width
get_local_size(0) * sizeof(half), // src stride
get_global_size(0) * sizeof(half), // dst stride
C, // num planes
get_local_size(0) * get_local_size(1) * sizeof(half), // src plane stride
get_global_size(0) * get_global_size(1) * sizeof(half), // dst plane stride
get_local_size(0) * get_local_size(1) * sizeof(half), // plane size
0);
}
__kernel void grn_NCHW(
__global const half* restrict src_data,
__global half* restrict dst_data,
__local half* restrict src,
__local half* restrict dst,
int C,
float bias)
{
float variance = bias + 1e-9f;
#pragma unroll 8
for (int c = 0; c < C; c++)
{
float val = (float) src[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)];
variance += val*val;
}
half hvariance = (half)(native_rsqrt((half)(variance/16.f))*0.25f);
#pragma unroll 8
for (int c = 0; c < C; c++)
{
dst[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)]
= src[c*get_local_size(1)*get_local_size(0) + get_local_id(1)*get_local_size(0) + get_local_id(0)] * hvariance;
}
}
```
Note the `get_local_size` and `get_local_id` usage inside the kernel. A 21x speedup is expected for this kernel on the enet-curbs setup, since it was completely limited by memory usage.
An alternative method of using DMA is the work item copy extension. These functions are executed inside a kernel and require the work group to be equal to a single work item.
Here is the list of supported work item functions:
```cpp
item_dma_event_t WorkItemDmaCreateTransaction(
const global T *src,
private T *dst,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateTransaction(
const private T *src,
global T *dst,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateStrideTransaction(
const global T *src,
private T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreateStrideTransaction(
const private T *src,
global T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreate3DTransaction(
const global T *src,
private T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t num_planes,
size_t src_plane_stride,
size_t dst_plane_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
item_dma_event_t WorkItemDmaCreate3DTransaction(
const private T *src,
global T *dst,
size_t src_width,
size_t dst_width,
size_t src_stride,
size_t dst_stride,
size_t num_planes,
size_t src_plane_stride,
size_t dst_plane_stride,
size_t size,
item_dma_event_t event) __OVERLOAD;
```
where `T` can be `uchar`, `char`, `short`, `ushort`, `int`, `uint`, `long`, `ulong`, `half` or `float`.


@@ -0,0 +1,96 @@
# Deprecated API for CPU kernels creation {#openvino_docs_IE_DG_Extensibility_DG_deprecated_Factory}
List of deprecated API for kernels development:
* `InferenceEngine::IExtension::getPrimitiveTypes(char**& types, unsigned int& size, ResponseDesc* resp)` method
* `InferenceEngine::IExtension::getFactoryFor(ILayerImplFactory *&factory, const CNNLayer *cnnLayer, ResponseDesc *resp)` method
* `InferenceEngine::ILayerImplFactory` class
>**NOTE**: This guide demonstrates how to use the deprecated API for kernel creation. However, keep in mind that this API will be deleted soon.
1. Create your custom layer factory `CustomLayerFactory` class:
```cpp
// custom_layer.h
// CustomLayerFactory is an example factory for a layer that raises the input to the power of 2 and does not change dimensions
class CustomLayerFactory {
};
```
2. Inherit it from the abstract `InferenceEngine::ILayerImplFactory` class:
```cpp
// custom_layer.h
class CustomLayerFactory: public InferenceEngine::ILayerImplFactory {
};
```
3. Create a constructor, a virtual destructor, and a data member to keep the layer info:
```cpp
// custom_layer.h
class CustomLayerFactory: public InferenceEngine::ILayerImplFactory {
public:
explicit CustomLayerFactory(const CNNLayer *layer): cnnLayer(*layer) {}
private:
CNNLayer cnnLayer;
};
```
4. Overload and implement the abstract methods `getShapes` and `getImplementations` of the `InferenceEngine::ILayerImplFactory` class:
```cpp
// custom_layer.h
class CustomLayerFactory: public InferenceEngine::ILayerImplFactory {
public:
// ... constructor and destructor
StatusCode getShapes(const std::vector<TensorDesc>& inShapes, std::vector<TensorDesc>& outShapes, ResponseDesc *resp) noexcept override {
// cnnLayer is stored by value in this factory, so a null check is not needed here
if (inShapes.size() != 1) {
std::string errorMsg = "Incorrect input shapes!";
errorMsg.copy(resp->msg, sizeof(resp->msg) - 1);
return GENERAL_ERROR;
}
outShapes.clear();
outShapes.emplace_back(inShapes[0]);
return OK;
}
StatusCode getImplementations(std::vector<ILayerImpl::Ptr>& impls, ResponseDesc *resp) noexcept override {
// You can add cnnLayer to implementation if it is necessary
impls.push_back(ILayerImpl::Ptr(new CustomLayerImpl()));
return OK;
}
};
```
5. Create your custom layer implementation `CustomLayerImpl` class using the [instruction](../CPU_Kernel.md).
6. Implement methods in the `Extension` class:
```cpp
// custom_extension.h
class CustomExtension : public InferenceEngine::IExtension {
public:
// ... utility methods
// Returns the list of supported kernels/layers
StatusCode getPrimitiveTypes(char**& types, unsigned int& size, ResponseDesc* resp) noexcept override {
std::string type_name = "CustomLayer";
types = new char *[1];
size = 1;
types[0] = new char[type_name.size() + 1];
std::copy(type_name.begin(), type_name.end(), types[0]);
types[0][type_name.size()] = '\0';
return OK;
}
// Main function
StatusCode getFactoryFor(ILayerImplFactory *&factory, const CNNLayer *cnnLayer, ResponseDesc *resp) noexcept override {
if (cnnLayer->type != "CustomLayer") {
std::string errorMsg = std::string("Factory for ") + cnnLayer->type + " wasn't found!";
errorMsg.copy(resp->msg, sizeof(resp->msg) - 1);
return NOT_FOUND;
}
factory = new CustomLayerFactory(cnnLayer);
return OK;
}
};
```


@@ -0,0 +1,18 @@
# Old ShapeInference Extensibility API {#openvino_docs_IE_DG_Extensibility_DG_deprecated_ShapeInfer}
The new approach to shape inference suggests creating a custom nGraph operation that contains a special method for shape inference.
The following classes and methods were deprecated:
* `InferenceEngine::IShapeInferExtension` class
* `InferenceEngine::IShapeInferExtension::getShapeInferTypes(char**&, unsigned int&, ResponseDesc*)` method
* `InferenceEngine::IShapeInferExtension::getShapeInferImpl(IShapeInferImpl::Ptr&, const char*, ResponseDesc*)` method
However, the old approach with the `InferenceEngine::IShapeInferExtension` method still works for already existing custom layers.
Custom shape inference functions are registered by calling `InferenceEngine::ICNNNetwork::AddExtension` with the implemented `InferenceEngine::IShapeInferExtension`, which is a holder of custom implementations.
The holder must implement two key methods:
* `InferenceEngine::IShapeInferExtension::getShapeInferImpl` - Returns custom shape inference implementation for the given type.
* `InferenceEngine::IShapeInferExtension::getShapeInferTypes` - Provides all custom types.
Custom shape inference implementation is represented by the `InferenceEngine::IShapeInferImpl::inferShapes` method.
It is impossible to overwrite built-in shape inference functions. A custom type must be different from the supported ones.


@@ -0,0 +1,43 @@
Using GPU Kernels Tuning {#openvino_docs_IE_DG_GPU_Kernels_Tuning}
======================
GPU Kernels Tuning allows you to tune models so that heavy computational layers are configured to better fit the
hardware the tuning was done on. It is required to achieve the best performance on GPU.
> **NOTE** Currently, only convolution and fully connected layers undergo the tuning process. This means that the performance boost depends on the number of such layers in the model.
OpenVINO™ releases include the `<INSTALL_DIR>/inference_engine/bin/intel64/Release/cache.json` file with pretuned data for current state of the art models. It is highly recommended to do the
tuning for new kind of models, hardwares or drivers.
## Tuned data
GPU tuning data is saved in JSON format.
File's content is composed of 2 types of attributes and 1 type of value:
1. Execution units number - this attribute splits the content into different EU sections.
2. Hash - hashed tuned kernel data.
Key: Array with kernel name and kernel's mode index.
## Usage
---
You can activate the Kernels Tuning process by setting the `KEY_TUNING_MODE` flag to `TUNING_CREATE` and `KEY_TUNING_FILE` to `<"filename">` in a configuration map that is
passed to the plugin while loading a network.
This configuration modifies the behavior of the `ExecutableNetwork` object. Instead of standard network compilation, it runs the tuning process.
Please keep in mind that tuning can be very time consuming. The bigger the network, the longer it takes.
A file with tuned data is the result of this step.
> **NOTE** If a filename passed to `KEY_TUNING_FILE` points to existing tuned data and you are tuning a new model, then this file will be extended by new data. This allows you to extend existing `cache.json` provided in the OpenVINO™ release package.
The example below shows how to set and use the key files:
```cpp
Core ie;
ie.SetConfig({{ CONFIG_KEY(TUNING_MODE), CONFIG_VALUE(TUNING_CREATE) }}, "GPU");
ie.SetConfig({{ CONFIG_KEY(TUNING_FILE), "/path/to/tuning/file.json" }}, "GPU");
// Further LoadNetwork calls will use the specified tuning parameters
```
---
You can activate inference with tuned data by setting the `KEY_TUNING_MODE` flag to `TUNING_USE_EXISTING` and
the `KEY_TUNING_FILE` flag to `<"filename">`.
The GPU backend will process the content of the file during network compilation to configure the OpenCL kernels for the best performance.
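Below is a minimal sketch of enabling the use of existing tuned data, assuming `network` is a `CNNNetwork` read earlier and the file path is illustrative:
```cpp
Core ie;
ie.SetConfig({{ CONFIG_KEY(TUNING_MODE), CONFIG_VALUE(TUNING_USE_EXISTING) }}, "GPU");
ie.SetConfig({{ CONFIG_KEY(TUNING_FILE), "/path/to/tuning/file.json" }}, "GPU");
// Network compilation now reads the tuned data instead of running the tuning process
auto executable_network = ie.LoadNetwork(network, "GPU");
```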

89
docs/IE_DG/Glossary.md Normal file
View File

@@ -0,0 +1,89 @@
Glossary {#openvino_docs_IE_DG_Glossary}
=======
## Acronyms and Abbreviations
| Abbreviation | Description |
| :--- | :--- |
| API | Application Programming Interface |
| AVX | Advanced Vector Extensions |
| clDNN | Compute Library for Deep Neural Networks |
| CLI | Command Line Interface |
| CNN | Convolutional Neural Network |
| CPU | Central Processing Unit |
| CV | Computer Vision |
| DL | Deep Learning |
| DLDT | Intel(R) Deep Learning Deployment Toolkit |
| DLL | Dynamic Link Library |
| DNN | Deep Neural Networks |
| ELU | Exponential Linear rectification Unit |
| FCN | Fully Convolutional Network |
| FP | Floating Point |
| FPGA | Field-Programmable Gate Array |
| GCC | GNU Compiler Collection |
| GPU | Graphics Processing Unit |
| HD | High Definition |
| IE | Inference Engine |
| IR | Intermediate Representation |
| JIT | Just In Time |
| JTAG | Joint Test Action Group |
| LPR | License-Plate Recognition |
| LRN | Local Response Normalization |
| mAP | Mean Average Precision |
| Intel(R) MKL-DNN | Intel(R) Math Kernel Library Deep Neural Networks |
| MO | Model Optimizer |
| MVN | Mean Variance Normalization |
| NCDHW | Number of images, Channels, Depth, Height, Width |
| NCHW | Number of images, Channels, Height, Width |
| NHWC | Number of images, Height, Width, Channels |
| NMS | Non-Maximum Suppression |
| NN | Neural Network |
| NST | Neural Style Transfer |
| OD | Object Detection |
| OS | Operating System |
| PCI | Peripheral Component Interconnect |
| PReLU | Parametric Rectified Linear Unit |
| PSROI | Position Sensitive Region Of Interest |
| RCNN, R-CNN | Region-based Convolutional Neural Network |
| ReLU | Rectified Linear Unit |
| ROI | Region Of Interest |
| SDK | Software Development Kit |
| SSD | Single Shot multibox Detector |
| SSE | Streaming SIMD Extensions |
| USB | Universal Serial Bus |
| VGG | Visual Geometry Group |
| VOC | Visual Object Classes |
| WINAPI | Windows Application Programming Interface |
## Terms
Glossary of terms used in the Inference Engine
| Term | Description |
| :--- | :--- |
| Batch | Number of images to analyze during one call of infer. Maximum batch size is a property of the network and it is set before loading of the network to the plugin. In NHWC, NCHW and NCDHW image data layout representation, the N refers to the number of images in the batch |
| Blob | Memory container used for storing inputs, outputs of the network, weights and biases of the layers |
| Device (Affinity) | A preferred Intel(R) hardware device to run inference on (CPU, GPU, FPGA, etc.) |
| Extensibility mechanism, Custom layers | The mechanism that provides you with capabilities to extend the Inference Engine and Model Optimizer so that they can work with topologies containing layers that are not yet supported |
| <code>ICNNNetwork</code> | An Interface of the Convolutional Neural Network that Inference Engine reads from IR. Consists of topology, weights and biases |
| <code>IExecutableNetwork</code> | An instance of the loaded network which allows the Inference Engine to request (several) infer requests and perform inference synchronously or asynchronously |
| <code>IHeteroInferencePlugin</code> | Interface that is implemented by the heterogeneity plugin to allow the Inference Engine to set the default affinities for layers by devices before loading the network to the heterogeneous plugin. You can modify affinities manually before loading to the plugin. |
| <code>IInferencePlugin</code> | Interface provided by each plugin to allow the Inference Engine to load <code>ICNNNetwork</code> to the plugin, create Executable network and set special dedicated options for the plugin |
| <code>IInferRequest</code> | Interface that represents the end point of inference on the model loaded to the plugin and represented by executable network. Inputs are set here, outputs should be requested from this interface as well |
| <code>InferenceEngineProfileInfo</code> | Represents basic inference profiling information per layer |
| Inference Engine | A C++ library with a set of classes that you can use in your application to infer input data (images) and get the result |
| Inference Engine API | The basic default API for all supported devices, which allows you to load a model from Intermediate Representation, set input and output formats and execute the model on various devices |
| Inference Engine Plugin | Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel(R) hardware device: CPU, GPU, VPU, FPGA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs. |
| Layer catalog or Operations specification | A list of supported layers or operations and their parameters. Sets of supported layers differ between plugins; check the plugin documentation to verify whether the Inference Engine supports a certain layer on the dedicated hardware |
| <code>Layout</code> | Image data layout refers to the representation of an image batch. Layout shows a sequence of 4D or 5D tensor data in memory. A typical NCHW format represents pixels in the horizontal direction, rows in the vertical dimension, planes by channel, and images in the batch |
| <code>OutputsDataMap</code> | Structure which contains information about output precisions and layouts |
| Precision | Represents data precision. For example, FP32 is 32-bit floating point, FP16 is 16-bit floating point. Precision can be changed before loading the network to the plugin |
| <code>PreProcessInfo</code> | Class that represents input data for the network. It contains information about input precision, its layout, and pre-processing |
| <code>ResponseDesc</code> | Represents debug information for an error |
## See Also
* [Deep Learning Model Optimizer IR Operations Catalog](../ops/opset.md)
* [Inference Engine Memory primitives](Memory_primitives.md)
* [Terminology](supported_plugins/Supported_Devices.md)

View File

@@ -0,0 +1,47 @@
# Graph Debug Capabilities {#openvino_docs_IE_DG_Graph_debug_capabilities}
Inference Engine supports two different objects for a graph representation: the nGraph function and
CNNNetwork. Both representations provide an API to get detailed information about the graph structure.
## nGraph Function
To receive additional messages about applied graph modifications, rebuild the nGraph library with
the `-DNGRAPH_DEBUG_ENABLE=ON` option.
To enable serialization and deserialization of the nGraph function to a JSON file, rebuild the
nGraph library with the `-DNGRAPH_JSON_ENABLE=ON` option. To serialize or deserialize the nGraph
function, use the following calls:
```cpp
#include <ngraph/serializer.hpp>
std::shared_ptr<ngraph::Function> nGraph;
...
ngraph::serialize("test_json.json", nGraph); // For graph serialization
std::ifstream file("test_json.json"); // Open a JSON file
nGraph = ngraph::deserialize(file); // For graph deserialization
```
To visualize the nGraph function to the xDot format or to an image file, use the
`ngraph::pass::VisualizeTree` graph transformation pass:
```cpp
#include <ngraph/pass/visualize_tree.hpp>
std::shared_ptr<ngraph::Function> nGraph;
...
std::vector<std::shared_ptr<ngraph::Function>> g2{nGraph};
ngraph::pass::VisualizeTree("after.png").run_on_module(g2); // Visualize the nGraph function to an image
```
## CNNNetwork
To serialize the CNNNetwork to the Inference Engine Intermediate Representation (IR) format, use the
`CNNNetwork::serialize(...)` method:
```cpp
std::shared_ptr<ngraph::Function> nGraph;
...
CNNNetwork network(nGraph);
network.serialize("test_ir.xml", "test_ir.bin");
```
> **NOTE**: CNNNetwork created from the nGraph function might differ from the original nGraph
> function because the Inference Engine applies some graph transformations.

View File

@@ -0,0 +1,102 @@
Introduction to Inference Engine Device Query API {#openvino_docs_IE_DG_InferenceEngine_QueryAPI}
===============================
This section provides a high-level description of the process of querying different device properties and configuration values.
Refer to the [Hello Query Device Sample](../../inference-engine/samples/hello_query_device/README.md) sources and the [Multi-Device Plugin guide](supported_plugins/MULTI.md) for examples of using the Inference Engine Query API in user applications.
## Using the Inference Engine Query API in Your Code
The Inference Engine `Core` class provides the following API to query device information, set or get different device configuration properties:
* <code>InferenceEngine::Core::GetAvailableDevices</code> - Provides a list of available devices. If there are more than one instance of a specific device, the devices are enumerated with `.suffix` where `suffix` is a unique string identifier. The device name can be passed to all methods of the `InferenceEngine::Core` class that work with devices, for example `InferenceEngine::Core::LoadNetwork`.
* <code>InferenceEngine::Core::GetMetric</code> - Provides information about specific device.
* <code>InferenceEngine::Core::GetConfig</code> - Gets the current value of a specific configuration key.
* <code>InferenceEngine::Core::SetConfig</code> - Sets a new value for the configuration key.
The `InferenceEngine::ExecutableNetwork` class is also extended to support the Query API:
* <code>InferenceEngine::ExecutableNetwork::GetMetric</code>
* <code>InferenceEngine::ExecutableNetwork::GetConfig</code>
* <code>InferenceEngine::ExecutableNetwork::SetConfig</code>
## Query API in the Core Class
### GetAvailableDevices
```cpp
InferenceEngine::Core core;
std::vector<std::string> availableDevices = core.GetAvailableDevices();
```
The function returns a list of available devices, for example:
```
MYRIAD.1.2-ma2480
MYRIAD.1.4-ma2480
FPGA.0
FPGA.1
CPU
GPU
...
```
Each device name can then be passed to:
* `InferenceEngine::Core::LoadNetwork` to load the network to a specific device.
* `InferenceEngine::Core::GetMetric` to get common or device specific metrics.
* All other methods of the `Core` class that accept `deviceName`.
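For example, below is a minimal sketch of loading a network to one of the enumerated devices, assuming `network` is a `CNNNetwork` read earlier:
```cpp
InferenceEngine::Core core;
std::vector<std::string> availableDevices = core.GetAvailableDevices();
// Pick the first enumerated device, e.g. "MYRIAD.1.2-ma2480" from the list above,
// and load the network to it
if (!availableDevices.empty()) {
    auto executable_network = core.LoadNetwork(network, availableDevices[0]);
}
```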
### GetConfig()
The code below demonstrates how to understand whether `HETERO` device dumps `.dot` files with split graphs during the split stage:
```cpp
InferenceEngine::Core core;
bool dumpDotFile = core.GetConfig("HETERO", HETERO_CONFIG_KEY(DUMP_GRAPH_DOT)).as<bool>();
```
For documentation about common configuration keys, refer to `ie_plugin_config.hpp`. Device specific configuration keys can be found in corresponding plugin folders.
### GetMetric()
* To extract device properties such as available device, device name, supported configuration keys, and others, use the `InferenceEngine::Core::GetMetric` method:
```cpp
InferenceEngine::Core core;
std::string cpuDeviceName = core.GetMetric("CPU", METRIC_KEY(FULL_DEVICE_NAME)).as<std::string>();
```
A returned value looks as follows: `Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz`.
> **NOTE**: All metrics have a specific type, which is specified during metric instantiation. The list of common device-agnostic metrics can be found in `ie_plugin_config.hpp`. Device-specific metrics (for example, for `HDDL` or `MYRIAD` devices) can be found in the corresponding plugin folders.
## Query API in the ExecutableNetwork Class
### GetMetric()
The method is used to get executable network specific metric such as `METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS)`:
```cpp
InferenceEngine::Core core;
auto exeNetwork = core.LoadNetwork(network, "CPU");
auto nireq = exeNetwork.GetMetric(METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS)).as<unsigned int>();
```
Or the current temperature of the `MYRIAD` device:
```cpp
InferenceEngine::Core core;
auto exeNetwork = core.LoadNetwork(network, "MYRIAD");
float temperature = exeNetwork.GetMetric(METRIC_KEY(DEVICE_THERMAL)).as<float>();
```
### GetConfig()
The method is used to get information about configuration values the executable network has been created with:
```cpp
InferenceEngine::Core core;
auto exeNetwork = core.LoadNetwork(network, "CPU");
auto ncores = exeNetwork.GetConfig(PluginConfigParams::KEY_CPU_THREADS_NUM).as<std::string>();
```
### SetConfig()
The only device that supports this method is [Multi-Device](supported_plugins/MULTI.md).
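Below is a minimal sketch of changing the device priorities of a Multi-Device executable network on the fly. It assumes `network` is a `CNNNetwork` read earlier; the `"MULTI_DEVICE_PRIORITIES"` key and the device list are illustrative and should be checked against the Multi-Device plugin documentation:
```cpp
InferenceEngine::Core core;
auto exeNetwork = core.LoadNetwork(network, "MULTI:CPU,GPU");
// Re-order the device priorities without reloading the network
exeNetwork.SetConfig({{ "MULTI_DEVICE_PRIORITIES", "GPU,CPU" }});
```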

127
docs/IE_DG/Int8Inference.md Normal file
View File

@@ -0,0 +1,127 @@
# Low-Precision 8-bit Integer Inference {#openvino_docs_IE_DG_Int8Inference}
## Disclaimer
Inference Engine with low-precision 8-bit integer inference requires the following prerequisites to be satisfied:
- Inference Engine [CPU Plugin](supported_plugins/CPU.md) must be built with the Intel® Math Kernel Library (Intel® MKL) dependency. In the Intel® Distribution of OpenVINO™ this is
satisfied by default. This requirement mostly applies if you are using the [open source version of OpenVINO™](https://github.com/openvinotoolkit/openvino), because it can be built with OpenBLAS*, which is unacceptable if you want to use 8-bit integer inference.
- Intel® platforms that support at least one extension to x86 instruction set from the following list:
- Intel® Advanced Vector Extensions 512 (Intel® AVX-512)
- Intel® Advanced Vector Extensions 2.0 (Intel® AVX2)
- Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2)
- A model must be quantized. To quantize the model, you can use the [Post-Training Optimization Tool](@ref pot_README) delivered with the Intel® Distribution of OpenVINO™ toolkit release package.
The 8-bit inference feature was validated on the following topologies:
* **Classification models:**
* Caffe\* DenseNet-121, DenseNet-161, DenseNet-169, DenseNet-201
* Caffe Inception v1, Inception v2, Inception v3, Inception v4
* Caffe YOLO v1 tiny, YOLO v3
* Caffe ResNet-50 v1, ResNet-101 v1, ResNet-152 v1, ResNet-269 v1
* Caffe ResNet-18
* Caffe MobileNet, MobileNet v2
* Caffe SE ResNeXt-50
* Caffe SqueezeNet v1.0, SqueezeNet v1.1
* Caffe VGG16, VGG19
* TensorFlow\* DenseNet-121, DenseNet-169
* TensorFlow Inception v1, Inception v2, Inception v3, Inception v4, Inception ResNet v2
* TensorFlow Lite Inception v1, Inception v2, Inception v3, Inception v4, Inception ResNet v2
* TensorFlow Lite MobileNet v1, MobileNet v2
* TensorFlow MobileNet v1, MobileNet v2
* TensorFlow ResNet-50 v1.5, ResNet-50 v1, ResNet-101 v1, ResNet-152 v1, ResNet-50 v2, ResNet-101 v2, ResNet-152 v2
* TensorFlow VGG16, VGG19
* TensorFlow YOLO v3
* MXNet\* CaffeNet
* MXNet DenseNet-121, DenseNet-161, DenseNet-169, DenseNet-201
* MXNet Inception v3, inception_v4
* MXNet Mobilenet, Mobilenet v2
* MXNet ResNet-101 v1, ResNet-152 v1, ResNet-101 v2, ResNet-152 v2
* MXNet ResNeXt-101
* MXNet SqueezeNet v1.1
* MXNet VGG16, VGG19
* **Object detection models:**
* Caffe SSD GoogLeNet
* Caffe SSD MobileNet
* Caffe SSD SqueezeNet
* Caffe SSD VGG16 300, SSD VGG16 512
* TensorFlow SSD MobileNet v1, SSD MobileNet v2
* MXNet SSD Inception v3 512
* MXNet SSD MobileNet 512
* MXNet SSD ResNet-50 512
* MXNet SSD VGG16 300
* ONNX\* SSD ResNet 34
* **Semantic segmentation models:**
* Unet2D
* **Recommendation system models:**
* NCF
## Introduction
A lot of investigation has been done in the field of deep learning to use low-precision computations during inference in order to boost deep learning pipelines and achieve higher performance. For example, one popular approach is to shrink the precision of activation and weight values from `fp32` to smaller types, for example, `fp11` or `int8`. For more information about this approach, refer to the
**Brief History of Lower Precision in Deep Learning** section in [this whitepaper](https://software.intel.com/en-us/articles/lower-numerical-precision-deep-learning-inference-and-training).
8-bit computations (referred to as `int8`) offer better performance compared to inference in higher precision (for example, `fp32`), because they allow loading more data into a single processor instruction. Usually the cost of a significant boost is reduced accuracy. However, it has been proven that the accuracy drop can be negligible and depends on task requirements, so the application engineer can set the maximum accuracy drop that is acceptable.
Current Inference Engine solution for low-precision inference uses Intel MKL-DNN and supports inference of the following layers in 8-bit integer computation mode:
* Convolution
* FullyConnected
* ReLU
* ReLU6
* Reshape
* Permute
* Pooling
* Squeeze
* Eltwise
* Concat
* Resample
* MVN
This means that 8-bit inference can only be performed with the CPU plugin on the layers listed above. All other layers are executed in the format supported by the CPU plugin: 32-bit floating point format (`fp32`).
## Low-Precision 8-bit Integer Inference Workflow
For 8-bit integer computations, a model must be quantized. If the model is not quantized then you can use the [Post-Training Optimization Tool](@ref pot_README) to quantize the model. The quantization process adds `FakeQuantize` layers on activations and weights for most layers. Read more about mathematical computations under the hood in the [white paper](https://intel.github.io/mkl-dnn/ex_int8_simplenet.html).
8-bit inference pipeline includes two stages (also refer to the figure below):
1. *Offline stage*, or *model quantization*. During this stage, `FakeQuantize` layers are added before most layers to produce quantized tensors before those layers in a way that the low-precision accuracy drop for 8-bit integer inference satisfies the specified threshold. The output of this stage is a quantized model. The quantized model precision is not changed, and quantized tensors stay in the original precision range (`fp32`). The `FakeQuantize` layer has a `Quantization Levels` attribute which defines the quants count. The quants count defines the precision which is used during inference. For the `int8` range, the `Quantization Levels` attribute value has to be 255 or 256.
2. *Run-time stage*. This stage is an internal procedure of the [CPU Plugin](supported_plugins/CPU.md). During this stage, the quantized model is loaded to the plugin. The plugin updates each `FakeQuantize` layer on activations and weights to have `FakeQuantize` output tensor values in low precision range.
![int8_flow]
### Offline Stage: Model Quantization
To infer a layer in low precision and get maximum performance, the input tensor for the layer has to be quantized and each value has to be in the target low precision range. For this purpose, `FakeQuantize` layer is used in the OpenVINO™ intermediate representation file (IR). To quantize the model, you can use the [Post-Training Optimization Tool](@ref pot_README) delivered with the Intel® Distribution of OpenVINO™ toolkit release package.
When you pass the calibrated IR to the [CPU plugin](supported_plugins/CPU.md), the plugin automatically recognizes it as a quantized model and performs 8-bit inference. Note that if you pass a quantized model to another plugin that does not support 8-bit inference, the model is inferred in the precision that this plugin supports.
### Run-Time Stage: Quantization
This is the second stage of the 8-bit integer inference. After you load the quantized model IR to a plugin, the plugin uses the `Low Precision Transformation` component to update the model to infer it in low precision:
* Updates `FakeQuantize` layers to have quantized output tensors in the low-precision range and adds dequantization layers to compensate for the update. Dequantization layers are pushed through as many layers as possible to have more layers in low precision. After that, most layers have quantized input tensors in the low-precision range and can be inferred in low precision. Ideally, dequantization layers should be fused into the next `FakeQuantize` or `ScaleShift` layers.
* Weights are quantized and stored in `Const` layers.
* Biases are updated to avoid shifts in dequantization layers.
## Performance Counters
Information about layer precision is stored in the performance counters that are
available from the Inference Engine API. The layers have the following marks:
* Suffix `I8` for layers that had 8-bit data type input and were computed in 8-bit precision
* Suffix `FP32` for layers computed in 32-bit precision
For example, the performance counters table for the Inception model can look as follows:
```
inception_5b/5x5_reduce EXECUTED layerType: Convolution realTime: 417 cpu: 417 execType: gemm_blas_I8
inception_5b/output EXECUTED layerType: Concat realTime: 34 cpu: 34 execType: ref_I8
inception_5b/output_U8_nhw... EXECUTED layerType: Reorder realTime: 33092 cpu: 33092 execType: reorder_I8
inception_5b/output_oScale... EXECUTED layerType: ScaleShift realTime: 1390 cpu: 1390 execType: jit_avx2_FP32
inception_5b/output_oScale... EXECUTED layerType: Reorder realTime: 143 cpu: 143 execType: reorder_FP32
inception_5b/pool EXECUTED layerType: Pooling realTime: 59301 cpu: 59301 execType: ref_any_I8
```
The `execType` column of the table includes inference primitives with specific suffixes.
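A minimal sketch of reading these counters from your own code is shown below. It assumes `infer_request` is an `InferenceEngine::InferRequest` that has already run inference and that performance counting was enabled via the `KEY_PERF_COUNT` configuration when the network was loaded:
```cpp
auto perfCounts = infer_request.GetPerformanceCounts();
for (const auto &entry : perfCounts) {
    const InferenceEngine::InferenceEngineProfileInfo &info = entry.second;
    // exec_type carries the primitive name with the I8/FP32 suffix shown above
    std::cout << entry.first << " execType: " << info.exec_type
              << " realTime: " << info.realTime_uSec << " us" << std::endl;
}
```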
[int8_flow]: img/cpu_int8_flow.png

View File

@@ -0,0 +1,303 @@
Integrate the Inference Engine with Your Application {#openvino_docs_IE_DG_Integrate_with_customer_application_new_API}
===============================
This section provides a high-level description of the process of integrating the Inference Engine into your application.
Refer to the [Hello Classification Sample](../../inference-engine/samples/hello_classification/README.md) sources
for example of using the Inference Engine in applications.
> **NOTE**: Starting with the 2019 R2 release, the new Inference Engine Core API was introduced. This guide has been updated to reflect the new API approach.
> The Inference Engine Plugin API is still supported, but is going to be deprecated in future releases. Please refer to the [Migration from Inference Engine Plugin API to Core API](Migration_CoreAPI.md) guide to update your application.
## Use the Inference Engine API in Your Code
The core `libinference_engine.so` library implements loading and parsing a model Intermediate Representation (IR), and triggers inference using a specified device. The core library has the following API:
* `InferenceEngine::Core`
* `InferenceEngine::Blob`, `InferenceEngine::TBlob`,
`InferenceEngine::NV12Blob`
* `InferenceEngine::BlobMap`
* `InferenceEngine::InputsDataMap`, `InferenceEngine::InputInfo`,
* `InferenceEngine::OutputsDataMap`
C++ Inference Engine API wraps the capabilities of core library:
* `InferenceEngine::CNNNetwork`
* `InferenceEngine::ExecutableNetwork`
* `InferenceEngine::InferRequest`
## Integration Steps
Integration process includes the following steps:
![integration_process]
1) **Create Inference Engine Core** to manage available devices and read network objects:
```cpp
InferenceEngine::Core core;
```
2) **Read a model IR** created by the Model Optimizer (.xml is the supported format):
```cpp
auto network = core.ReadNetwork("Model.xml");
```
**Or read the model from ONNX format** (.onnx and .prototxt are supported formats)
```cpp
auto network = core.ReadNetwork("model.onnx");
```
3) **Configure input and output**. Request input and output information using `InferenceEngine::CNNNetwork::getInputsInfo()`, and `InferenceEngine::CNNNetwork::getOutputsInfo()`
methods:
```cpp
/** Take information about all topology inputs **/
InferenceEngine::InputsDataMap input_info = network.getInputsInfo();
/** Take information about all topology outputs **/
InferenceEngine::OutputsDataMap output_info = network.getOutputsInfo();
```
Optionally, set the number format (precision) and memory layout for inputs and outputs. Refer to the
[Supported configurations](supported_plugins/Supported_Devices.md) chapter to choose the relevant configuration.
You can also allow input of any size. To do this, mark each input as resizable by setting a desired resize algorithm (e.g. `BILINEAR`) inside of the appropriate input info.
Basic color format conversions are supported as well. By default, the Inference Engine assumes
that the input color format is `BGR` and color format conversions are disabled. The Inference
Engine supports the following color format conversions:
* `RGB->BGR`
* `RGBX->BGR`
* `BGRX->BGR`
* `NV12->BGR`
where `X` is a channel that will be ignored during inference. To enable the conversions, set a
desired color format (for example, `RGB`) for each input inside of the appropriate input info.
If you want to run inference for multiple images at once, you can use the built-in batch
pre-processing functionality.
> **NOTE**: Batch pre-processing is not supported if input color format is set to `ColorFormat::NV12`.
You can use the following code snippet to configure input and output:
```cpp
/** Iterate over all input info**/
for (auto &item : input_info) {
auto input_data = item.second;
input_data->setPrecision(Precision::U8);
input_data->setLayout(Layout::NCHW);
input_data->getPreProcess().setResizeAlgorithm(RESIZE_BILINEAR);
input_data->getPreProcess().setColorFormat(ColorFormat::RGB);
}
/** Iterate over all output info**/
for (auto &item : output_info) {
auto output_data = item.second;
output_data->setPrecision(Precision::FP32);
output_data->setLayout(Layout::NC);
}
```
> **NOTE**: NV12 input color format pre-processing differs from other color conversions. In case of NV12,
> Inference Engine expects two separate image planes (Y and UV). You must use a specific
> `InferenceEngine::NV12Blob` object instead of default blob object and set this blob to
> the Inference Engine Infer Request using `InferenceEngine::InferRequest::SetBlob()`.
> Refer to [Hello NV12 Input Classification C++ Sample](../../inference-engine/samples/hello_nv12_input_classification/README.md)
> for more details.
If you skip this step, the default values are set:
* no resize algorithm is set for inputs
* input color format - `ColorFormat::RAW` meaning that input does not need color
conversions
* input and output precision - `Precision::FP32`
* input layout - `Layout::NCHW`
* output layout depends on number of its dimensions:
|Number of dimensions | 5 | 4 | 3 | 2 | 1 |
|:--------------------|-------|------|-----|----|----|
|Layout | NCDHW | NCHW | CHW | NC | C |
4) **Load the model** to the device using `InferenceEngine::Core::LoadNetwork()`:
```cpp
auto executable_network = core.LoadNetwork(network, "CPU");
```
It creates an executable network from a network object. The executable network is associated with a single hardware device.
It is possible to create as many networks as needed and to use them simultaneously (up to the limitation of the hardware resources).
The third parameter is a configuration for the plugin. It is a map of pairs: (parameter name, parameter value). See the
[Supported Devices](supported_plugins/Supported_Devices.md) page for more details about supported configuration parameters for the chosen device.
```cpp
/** Optional config. E.g. this enables profiling of performance counters. **/
std::map<std::string, std::string> config = {{ PluginConfigParams::KEY_PERF_COUNT, PluginConfigParams::YES }};
auto executable_network = core.LoadNetwork(network, "CPU", config);
```
5) **Create an infer request**:
```cpp
auto infer_request = executable_network.CreateInferRequest();
```
6) **Prepare input**. You can use one of the following options to prepare input:
* **Optimal way for a single network.** Get blobs allocated by an infer request using `InferenceEngine::InferRequest::GetBlob()`
and feed an image and the input data to the blobs. In this case, input data must be aligned (resized manually) with a
given blob size and have a correct color format.
```cpp
/** Iterate over all input blobs **/
for (auto & item : input_info) {
    auto input_name = item.first;
    /** Get input blob **/
    auto input = infer_request.GetBlob(input_name);
    /** Fill input tensor with planes. First b channel, then g and r channels **/
    ...
}
```
* **Optimal way for a cascade of networks (output of one network is input for another).** Get output blob from the first
request using `InferenceEngine::InferRequest::GetBlob()` and set it as input for the second request using
`InferenceEngine::InferRequest::SetBlob()`.
```cpp
auto output = infer_request1->GetBlob(output_name);
infer_request2->SetBlob(input_name, output);
```
* **Optimal way to handle ROI (a ROI object located inside of input of one network is input for another).** It is
possible to re-use shared input by several networks. You do not need to allocate separate input blob for a network if
it processes a ROI object located inside of already allocated input of a previous network. For instance, when first
network detects objects on a video frame (stored as input blob) and second network accepts detected bounding boxes
(ROI inside of the frame) as input.
In this case, it is allowed to re-use pre-allocated input blob (used by first network) by second network and just crop
ROI without allocation of new memory using `InferenceEngine::make_shared_blob()` with passing of
`InferenceEngine::Blob::Ptr` and `InferenceEngine::ROI` as parameters.
```cpp
/** inputBlob points to input of a previous network and
cropROI contains coordinates of output bounding box **/
InferenceEngine::Blob::Ptr inputBlob;
InferenceEngine::ROI cropRoi;
...
/** roiBlob uses shared memory of inputBlob and describes cropROI
according to its coordinates **/
auto roiBlob = InferenceEngine::make_shared_blob(inputBlob, cropRoi);
infer_request2->SetBlob(input_name, roiBlob);
```
Make sure that shared input is kept valid during execution of each network. Otherwise, ROI blob may be corrupted if the
original input blob (that ROI is cropped from) has already been rewritten.
* Allocate input blobs of the appropriate types and sizes, feed an image and the input data to the blobs, and call
`InferenceEngine::InferRequest::SetBlob()` to set these blobs for an infer request:
```cpp
/** Iterate over all input blobs **/
for (auto & item : input_info) {
    auto input_data = item.second;
    /** Create input blob **/
    InferenceEngine::TBlob<unsigned char>::Ptr input;
    // assuming input precision was asked to be U8 in prev step
    input = InferenceEngine::make_shared_blob<unsigned char, InferenceEngine::SizeVector>(InferenceEngine::Precision::U8, input_data->getDims());
    input->allocate();
    infer_request.SetBlob(item.first, input);
    /** Fill input tensor with planes. First b channel, then g and r channels **/
    ...
}
```
A blob can be filled before and after `SetBlob()`.
> **NOTE:**
>
> * `SetBlob()` method compares precision and layout of an input blob with ones defined on step 3 and
> throws an exception if they do not match. It also compares a size of the input blob with input
> size of the read network. But if input was configured as resizable, you can set an input blob of
> any size (for example, any ROI blob). Input resize will be invoked automatically using resize
> algorithm configured on step 3. Similarly to the resize, color format conversions allow the color
> format of an input blob to differ from the color format of the read network. Color format
> conversion will be invoked automatically using color format configured on step 3.
>
> * `GetBlob()` logic is the same for pre-processable and not pre-processable input. Even if it is
> called with input configured as resizable or as having specific color format, a blob allocated by
> an infer request is returned. Its size and color format are already consistent with the
> corresponding values of the read network. No pre-processing will happen for this blob. If you
> call `GetBlob()` after `SetBlob()`, you will get the blob you set in `SetBlob()`.
7) **Do inference** by calling the `InferenceEngine::InferRequest::StartAsync` and `InferenceEngine::InferRequest::Wait`
methods for asynchronous request:
```cpp
infer_request.StartAsync();
infer_request.Wait(IInferRequest::WaitMode::RESULT_READY);
```
or by calling the `InferenceEngine::InferRequest::Infer` method for synchronous request:
```cpp
infer_request.Infer();
```
`StartAsync` returns immediately and starts inference without blocking the main thread, while `Infer` blocks the
main thread and returns when inference is completed.
Call `Wait` to wait for the result of an asynchronous request to become available.
There are three ways to use it:
* specify maximum duration in milliseconds to block for. The method is blocked until the specified timeout has elapsed,
or the result becomes available, whichever comes first.
* `InferenceEngine::IInferRequest::WaitMode::RESULT_READY` - waits until inference result becomes available
* `InferenceEngine::IInferRequest::WaitMode::STATUS_ONLY` - immediately returns the request status. It does not
block or interrupt the current thread.
Both request types are thread-safe: they can be called from different threads without fear of corruption or failures.
Multiple requests for a single `ExecutableNetwork` are executed sequentially, one by one, in FIFO order.
While a request is ongoing, all its methods except `InferenceEngine::InferRequest::Wait` throw an
exception.
8) Go over the output blobs and **process the results**.
Note that casting `Blob` to `TBlob` via `std::dynamic_pointer_cast` is not the recommended way.
It is better to access the data via the `buffer()` and `as()` methods as follows:
```cpp
for (auto &item : output_info) {
    auto output_name = item.first;
    auto output = infer_request.GetBlob(output_name);
    {
        auto const memLocker = output->cbuffer(); // use const memory locker
        // output_buffer is valid as long as memLocker is alive
        const float *output_buffer = memLocker.as<const float *>();
        /** output_buffer[] - accessing output blob data **/
    }
}
```
## Build Your Application
For details about building your application, refer to the CMake files for the sample applications.
All samples source code is located in the `<INSTALL_DIR>/openvino/inference_engine/samples` directory, where `INSTALL_DIR` is the OpenVINO™ installation directory.
### CMake project creation
1. **Create a structure** for the project:
``` sh
project/
├── CMakeLists.txt - CMake file to build
├── ... - Additional folders like includes/
└── src/ - source folder
└── main.cpp
build/ - build directory
...
```
2. **Include Inference Engine, nGraph and OpenCV libraries** in `project/CMakeLists.txt`
[OpenCV](https://docs.opencv.org/master/db/df5/tutorial_linux_gcc_cmake.html) integration is needed mostly for pre-processing input data, and nGraph for more complex applications using the [nGraph API](nGraph_Flow.md).
``` cmake
cmake_minimum_required(VERSION 3.0.0)
project(project_name)
find_package(ngraph REQUIRED)
find_package(InferenceEngine REQUIRED)
find_package(OpenCV REQUIRED)
add_executable(${PROJECT_NAME} src/main.cpp)
target_link_libraries(${PROJECT_NAME} PRIVATE ${InferenceEngine_LIBRARIES} ${OpenCV_LIBS} ${NGRAPH_LIBRARIES})
```
3. **To build your project** using CMake with the default build tools currently available on your machine, execute the following commands:
> **NOTE**: Make sure **Set the Environment Variables** step in [OpenVINO Installation](../../inference-engine/samples/hello_nv12_input_classification/README.md) document is applied to your terminal, otherwise `InferenceEngine_DIR` and `OpenCV_DIR` variables won't be configured properly to pass `find_package` calls.
```sh
cd build/
cmake ../project
cmake --build .
```
You can specify additional build options (e.g. to build a CMake project on Windows with specific build tools). Please refer to the [CMake page](https://cmake.org/cmake/help/latest/manual/cmake.1.html#manual:cmake(1)) for details.
### Run Your Application
> **NOTE**: Before running, make sure you completed **Set the Environment Variables** section in [OpenVINO Installation](../../inference-engine/samples/hello_nv12_input_classification/README.md) document so that the application can find the libraries.
To run compiled applications on Microsoft* Windows* OS, make sure that Microsoft* Visual C++ 2017
Redistributable and Intel® C++ Compiler 2017 Redistributable packages are installed and
`<INSTALL_DIR>/bin/intel64/Release/*.dll` files are placed in the
application folder or accessible via the `%PATH%` environment variable.
[integration_process]: img/integration_process.png

View File

@@ -0,0 +1,99 @@
# Introduction to the Performance Topics {#openvino_docs_IE_DG_Intro_to_Performance}
This section is a shorter version of the
[Optimization Guide](supported_plugins/MULTI.md) for the Intel Deep Learning Deployment Toolkit.
## Precision
Inference precision directly affects the performance.
Model Optimizer can produce an IR with different precisions. For example, a float16 IR initially targets VPU and GPU devices, while a device such as the CPU can also execute regular float32.
Also, further device-specific inference precision settings are available, for example, [8-bit integer](Int8Inference.md) or [bfloat16](Bfloat16Inference.md) inference on the CPU.
Note that for [MULTI device](supported_plugins/MULTI.md) that supports automatic inference on multiple devices in parallel, you can use the FP16 IR.
You can find more information, including preferred data types for specific devices, in the
[Supported Devices](supported_plugins/Supported_Devices.md) section.
## Lowering Inference Precision
Default optimization is used for the CPU and implies that inference is performed with lower precision if it is possible on a given platform to reach better performance within an acceptable range of accuracy.
This approach is used for the CPU device if the platform supports the AVX512_BF16 instruction. In this case, a regular float32 model is converted to the [bfloat16](Bfloat16Inference.md) internal representation and inference is performed using bfloat16 layers.
Below is an example command line to disable this feature on a CPU device with the AVX512_BF16 instruction and execute regular float32:
```
$ benchmark_app -m <model.xml> -enforcebf16=false
```
## Latency vs. Throughput
One way to increase computational efficiency is batching, which combines many (potentially tens) of
input images to achieve optimal throughput. However, high batch size also comes with a
latency penalty. So, for more real-time oriented usages, lower batch sizes (as low as a single input) are used.
Refer to the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample, which allows latency vs. throughput measuring.
## Using Async API
To gain better performance on accelerators, such as VPU or FPGA, the Inference Engine uses the asynchronous approach (see
[Integrating Inference Engine in Your Application (current API)](Integrate_with_customer_application_new_API.md)).
The point is amortizing the costs of data transfers, by pipe-lining, see [Async API explained](@ref omz_demos_object_detection_demo_ssd_async_README).
Since the pipe-lining relies on the availability of the parallel slack, running multiple inference requests in parallel is essential.
Refer to the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample, which enables running a number of inference requests in parallel. Specifying different numbers of requests produces different throughput measurements.
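Below is a minimal sketch of this approach, assuming `network` is a `CNNNetwork` read earlier and that inputs have already been set on each request; the number of requests is illustrative:
```cpp
InferenceEngine::Core core;
auto exeNetwork = core.LoadNetwork(network, "CPU");
// Create several infer requests so that data transfers and computation can overlap
std::vector<InferenceEngine::InferRequest> requests;
for (int i = 0; i < 4; i++) {
    requests.push_back(exeNetwork.CreateInferRequest());
}
for (auto &request : requests) {
    request.StartAsync();  // returns immediately, inference runs in the background
}
for (auto &request : requests) {
    request.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
}
```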
## Best Latency on the Multi-Socket CPUs
Note that when latency is of concern, there are additional tips for multi-socket systems.
When input is limited to a single image, the only way to achieve the best latency is to limit execution to a single socket.
The reason is that a single image is simply not enough
to saturate more than one socket. Also, NUMA overheads might dominate the execution time.
Below is an example command line that limits execution to a single socket using numactl for the best *latency* value
(assuming a machine with 28 physical cores per socket):
```
$ numactl -m 0 --physcpubind 0-27 benchmark_app -m <model.xml> -api sync -nthreads 28
```
Note that if you have more than one input, running as many inference requests as you have NUMA nodes (or sockets)
usually gives the same best latency as a single request on a single socket, but much higher throughput. Assuming a machine with two NUMA nodes:
```
$ benchmark_app -m <model.xml> -nstreams 2
```
The number of NUMA nodes on the machine can be queried via `lscpu`.
Please see more on NUMA support in the [Optimization Guide](supported_plugins/MULTI.md).
## Throughput Mode for CPU
Unlike most accelerators, the CPU is perceived as an inherently latency-oriented device.
Since the 2018 R5 release, the Inference Engine has introduced the "throughput" mode, which allows it to efficiently run multiple inference requests on the CPU simultaneously, greatly improving the throughput.
Internally, the execution resources are split/pinned into execution "streams".
Using this feature yields much better performance for networks that originally do not scale well with the number of threads (for example, lightweight topologies). This is especially pronounced on many-core server machines.
Run the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) and play with the number of infer requests running in parallel, as described in the next section.
Try different values of the `-nstreams` argument from `1` to the number of CPU cores and find one that provides the best performance.
In addition to the number of streams, it is also possible to play with the batch size to find the throughput sweet-spot.
The throughput mode relaxes the requirement to saturate the CPU by using a large batch: running multiple independent inference requests in parallel often gives much better performance than using a batch only.
This allows you to simplify the app-logic, as you don't need to combine multiple inputs into a batch to achieve good CPU performance.
Instead, it is possible to keep a separate infer request per camera or another source of input and process the requests in parallel using the Async API.
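Below is a minimal sketch of enabling the CPU throughput mode programmatically before loading a network, assuming `network` is a `CNNNetwork` read earlier:
```cpp
InferenceEngine::Core core;
// Let the plugin pick a reasonable number of streams for this machine,
// or set an explicit value such as "4" instead of CPU_THROUGHPUT_AUTO
core.SetConfig({{ CONFIG_KEY(CPU_THROUGHPUT_STREAMS), CONFIG_VALUE(CPU_THROUGHPUT_AUTO) }}, "CPU");
auto executable_network = core.LoadNetwork(network, "CPU");
```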
## Benchmark App
[Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample is the best performance reference.
It has a lot of device-specific knobs, but the primary usage is as simple as:
```bash
$ ./benchmark_app -d GPU -m <model> -i <input>
```
to measure the performance of the model on the GPU.
Or
```bash
$ ./benchmark_app -d CPU -m <model> -i <input>
```
to execute on the CPU instead.
For example, for the CPU throughput mode from the previous section, you can play with the number of streams (`-nstreams` command-line param).
Try different values of the `-nstreams` argument from `1` to the number of CPU cores and find one that provides the best performance. For example, on an 8-core CPU, compare `-nstreams 1` (which is a latency-oriented scenario) to `2`, `4` and `8` streams. Notice that `benchmark_app` automatically queries/creates/runs the number of requests required to saturate the given number of streams.
Finally, notice that when you don't specify the number of streams with `-nstreams`, the "AUTO" value for the streams is used, e.g. for the CPU this is [CPU_THROUGHPUT_AUTO](supported_plugins/CPU.md). You can spot the actual value behind "AUTO" for your machine in the application output.
Notice that the "AUTO" number is not necessarily the most optimal, so it is generally recommended to experiment either with the benchmark_app's `-nstreams` as described above, or via the [new Workbench tool](@ref workbench_docs_Workbench_DG_Introduction).
## Kernels Tuning for GPU
The GPU backend comes with a feature that allows tuning models so that the workload is configured to fit better into hardware.
Tuning is a time-consuming process that internally executes every layer several (or even hundreds of) times to find the most performant configuration.
This configuration is saved into a JSON-formatted file, whose name can be passed as a plugin parameter to the network. The GPU backend will process this data to configure the kernels for the best performance.
For more details about Kernels Tuning and how to use it, please refer to [GPU Kernels Tuning](GPU_Kernels_Tuning.md).

128
docs/IE_DG/Introduction.md Normal file
View File

@@ -0,0 +1,128 @@
# Introduction to Intel® Deep Learning Deployment Toolkit {#openvino_docs_IE_DG_Introduction}
## Deployment Challenges
Deploying deep learning networks from the training environment to embedded platforms for inference
might be a complex task that introduces a number of technical challenges that must be addressed:
* There are a number of deep learning frameworks widely used in the industry, such as Caffe*, TensorFlow*, MXNet*, Kaldi* etc.
* Typically the training of the deep learning networks is performed in data centers or server farms while the inference
might take place on embedded platforms, optimized for performance and power consumption. Such platforms are typically
limited both from a software perspective (programming languages, third-party dependencies, memory consumption,
supported operating systems) and from a hardware perspective (different data types, limited power envelope),
so it is usually not recommended (and sometimes just impossible) to use the original training framework for inference.
An alternative solution would be to use dedicated inference APIs that are well optimized for specific hardware platforms.
* Additional complications of the deployment process include supporting various layer types and networks that are getting
more and more complex. Obviously, ensuring the accuracy of the transformed networks is not trivial.
## Deployment Workflow
The process assumes that you have a network model trained using one of the [supported frameworks](#SupportedFW).
The scheme below illustrates the typical workflow for deploying a trained deep learning model:
![scheme]
The steps are:
1. [Configure Model Optimizer](../MO_DG/prepare_model/Config_Model_Optimizer.md) for the specific framework (used to train your model).
2. Run [Model Optimizer](#MO) to produce an optimized [Intermediate Representation (IR)](../MO_DG/IR_and_opsets.md)
of the model based on the trained network topology, weights and biases values, and other optional parameters.
3. Test the model in the IR format using the [Inference Engine](#IE) in the target environment with provided
[Inference Engine sample applications](Samples_Overview.md).
4. [Integrate Inference Engine](Integrate_with_customer_application_new_API.md) in your application to deploy the model in the target environment.
## Model Optimizer <a name = "MO"></a>
Model Optimizer is a cross-platform command line tool that facilitates the transition between the training and
deployment environment, performs static model analysis and automatically adjusts deep learning
models for optimal execution on end-point target devices.
Model Optimizer is designed to support multiple deep learning [supported frameworks and formats](#SupportedFW).
While running the Model Optimizer, you do not need to consider which target device you wish to use: the same MO output can be used on all targets.
### Model Optimizer Workflow
The process assumes that you have a network model trained using one of the [supported frameworks](#SupportedFW).
The Model Optimizer workflow can be described as follows:
* [Configure Model Optimizer](../MO_DG/prepare_model/Config_Model_Optimizer.md) for the supported deep learning framework that was used to train the model.
* Provide as input a trained network that contains a certain network topology, and the adjusted weights and
biases (with some optional parameters).
* [Run Model Optimizer](../MO_DG/prepare_model/convert_model/Converting_Model.md) to perform specific model optimizations (for example, horizontal fusion of certain network layers). Exact optimizations
are framework-specific, refer to appropriate documentation pages: [Converting a Caffe Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md),
[Converting a TensorFlow Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md), [Converting a MXNet Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md), [Converting a Kaldi Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md),
[Converting an ONNX Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md).
* Model Optimizer produces as output an [Intermediate Representation (IR)](../MO_DG/IR_and_opsets.md) of the network which is used as an input for the Inference Engine on all targets.
### Supported Frameworks and Formats <a name = "SupportedFW"></a>
* Caffe* (most public branches)
* TensorFlow*
* MXNet*
* Kaldi*
* ONNX*
### Supported Models
For the list of supported models refer to the framework or format specific page:
* [Supported Caffe* models](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md)
* [Supported TensorFlow* models](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
* [Supported MXNet* models](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md)
* [Supported ONNX* models](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md)
* [Supported Kaldi* models](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
## Intermediate Representation
The Intermediate Representation describing a deep learning model plays an important role in connecting the OpenVINO&trade; toolkit components.
The IR is a pair of files:
* `.xml`: The topology file - an XML file that describes the network topology
* `.bin`: The trained data file - a .bin file that contains the weights and biases binary data
Intermediate Representation (IR) files can be read, loaded and inferred with the [Inference Engine](#IE).
Inference Engine API offers a unified API across a number of [supported Intel® platforms](#SupportedTargets).
IR is also consumed, modified and written by Post-Training Optimization Tool which provides quantization capabilities.
Refer to a dedicated description about [Intermediate Representation and Operation Sets](../MO_DG/IR_and_opsets.md) for further details.
## nGraph Integration
OpenVINO toolkit is powered by nGraph capabilities for Graph construction API, Graph transformation engine and Reshape.
nGraph Function is used as an intermediate representation for a model in the run-time underneath the CNNNetwork API.
The conventional representation for CNNNetwork is still available if requested for backward compatibility when some conventional API methods are used.
Please refer to the [Overview of nGraph Flow](nGraph_Flow.md) describing the details of nGraph integration into the Inference Engine and co-existence with the conventional representation.
## Inference Engine <a name = "IE"></a>
Inference Engine is a runtime that delivers a unified API to integrate the inference with application logic:
* Takes a model as input. The model is presented in the specific form of the [Intermediate Representation (IR)](../MO_DG/IR_and_opsets.md)
produced by the Model Optimizer.
* Optimizes inference execution for target hardware.
* Delivers inference solution with reduced footprint on embedded inference platforms.
The Inference Engine supports inference of multiple image classification networks,
including AlexNet, GoogLeNet, VGG and ResNet families of networks, fully convolutional networks like FCN8 used for image
segmentation, and object detection networks like Faster R-CNN.
For the full list of supported hardware, refer to the
[Supported Devices](supported_plugins/Supported_Devices.md) section.
For Intel® Distribution of OpenVINO™ toolkit, the Inference Engine package contains [headers](files.html), runtime libraries, and
[sample console applications](Samples_Overview.md) demonstrating how you can use
the Inference Engine in your applications.
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md">Inference Engine Build Instructions</a>.
## See Also
- [Inference Engine Samples](Samples_Overview.md)
- [Intel&reg; Deep Learning Deployment Toolkit Web Page](https://software.intel.com/en-us/computer-vision-sdk)
[scheme]: img/workflow_steps.png
#### Optimization Notice
<sup>For complete information about compiler optimizations, see our [Optimization Notice](https://software.intel.com/en-us/articles/optimization-notice#opt-en).</sup>

View File

@@ -0,0 +1,58 @@
# Known Issues and Limitations {#openvino_docs_IE_DG_Known_Issues_Limitations}
## Multiple OpenMP Loadings
If the application uses the Inference Engine with third-party components that depend on Intel OpenMP, multiple loadings of the libiomp library may occur and cause OpenMP runtime initialization conflicts. This may happen, for example, if the application uses Intel® Math Kernel Library (Intel® MKL) through the “Single Dynamic Library” (<code>libmkl_rt.so</code>) mechanism and calls Intel MKL after loading the Inference Engine plugin.
The error log looks as follows:
```sh
OMP: Error #15: Initializing libiomp5.so, but found libiomp5.so already initialized.
OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
```
Possible workarounds:
* Preload the OpenMP runtime using the <code>LD_PRELOAD</code> variable:
```sh
LD_PRELOAD=<path_to_libiomp5.so> <path_to your_executable>
```
This eliminates multiple loadings of libiomp, and makes all the components use this specific version of OpenMP.
* Alternatively, you can set <code>KMP_DUPLICATE_LIB_OK=TRUE</code>. However, performance degradation or results incorrectness may occur in this case.
## Old proto compiler breaks protobuf library
With the Python protobuf library version 3.5.1, the following incompatibility can happen.
The known case is CentOS 7.4.
The error log looks as follows:
```sh
File "../lib64/python3.5/site-packages/google/protobuf/descriptor.py", line 829, in _new_
return _message.default_pool.AddSerializedFile(serialized_pb)
TypeError: expected bytes, str found
```
A possible workaround is to upgrade the default protobuf compiler (libprotoc 2.5.0) to a newer version, for example,
libprotoc 2.6.1.
[protobuf_issue]: https://github.com/google/protobuf/issues/4272
## Dynamic batching
Refer to the **Limitations** section of [Dynamic batching page](DynamicBatching.md)
## Static Shape Infer
Refer to the **Limitations** section of the [Static Shape Infer page](ShapeInference.md).
## Image Pre-Processing Performance Optimization Issue
As described in the [documentation for the new API](Integrate_with_customer_application_new_API.md), you can set an image blob of any size to an
infer request using a resizable input. Resize is executed during inference using the configured resize algorithm.
However, the resize algorithms are not fully optimized yet, so expect performance degradation if a resizable input is
specified and an input blob to be resized is set with `SetBlob()`. The required performance is achieved for the
[CPU](supported_plugins/CPU.md) plugin only, because the enabled OpenMP* runtime provides parallelism.
Another limitation is that the resize algorithms currently support the NCHW layout only. So if you set the NHWC layout for an input
blob, it is converted to NCHW before resize and back to NHWC after resize.
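For reference, the sketch below shows how a resizable input is typically configured; it assumes a network with a single image input and an `inputBlob` created elsewhere with the actual image dimensions (names such as `model.xml` are placeholders). Resize runs during `Infer()` as part of pre-processing.
```cpp
InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
auto inputName = network.getInputsInfo().begin()->first;
auto inputInfo = network.getInputsInfo().begin()->second;
// Enable automatic resize; the NCHW layout is assumed (see the limitation above)
inputInfo->getPreProcess().setResizeAlgorithm(InferenceEngine::RESIZE_BILINEAR);
auto executableNetwork = core.LoadNetwork(network, "CPU");
auto inferRequest = executableNetwork.CreateInferRequest();
inferRequest.SetBlob(inputName, inputBlob);  // inputBlob may have an arbitrary spatial size
inferRequest.Infer();                        // resize is performed here
```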

View File

@@ -0,0 +1,12 @@
# Legal Information {#openvino_docs_IE_DG_Legal_Information}
<sup>No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.</sup><br/>
<sup>Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.</sup><br/>
<sup>This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.</sup><br/>
<sup>The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.</sup><br/>
<sup>Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting [<b>www.intel.com/design/literature.htm</b>](http://www.intel.com/design/literature.htm).</sup><br/>
<sup>Intel, Intel logo, Intel Core, VTune, Xeon are trademarks of Intel Corporation in the U.S. and other countries.</sup><br/>
<sup>\* Other names and brands may be claimed as the property of others.</sup><br/>
<sup>Copyright © 2016-2018 Intel Corporation.</sup><br/>
<sup>This software and the related documents are Intel copyrighted materials, and your use of them is governed by the express license under which they were provided to you (License). Unless the License provides otherwise, you may not use, modify, copy, publish, distribute, disclose or transmit this software or the related documents without Intel's prior written permission.</sup><br/>
<sup>This software and the related documents are provided as is, with no express or implied warranties, other than those that are expressly stated in the License.</sup><br/>

View File

@@ -0,0 +1,55 @@
Inference Engine Memory primitives {#openvino_docs_IE_DG_Memory_primitives}
=====================================================================
## Blobs
<code>InferenceEngine::Blob</code> is the main class intended for working with memory.
Using this class, you can read and write memory, get information about the memory structure, and so on.
The right way to create <code>Blob</code> objects with a specific layout is to use constructors with <code>InferenceEngine::TensorDesc</code>.
<pre class="brush:cpp">
InferenceEngine::TensorDesc tdesc(InferenceEngine::Precision::FP32, {1, 3, 227, 227}, InferenceEngine::Layout::NCHW);
InferenceEngine::Blob::Ptr blob = InferenceEngine::make_shared_blob<float>(tdesc);
</pre>
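As an illustrative continuation of the snippet above (a sketch, not a complete program), the memory of the created blob can be allocated and accessed as follows:
<pre class="brush:cpp">
blob->allocate();                           // reserve memory for the blob
float* data = blob->buffer().as<float*>();  // get a typed pointer to the data
data[0] = 1.0f;                             // write the first element
</pre>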
## Layouts
<code>InferenceEngine::TensorDesc</code> is a special class that provides layout format description.
This class allows you to create planar layouts using the standard formats (such as <code>InferenceEngine::Layout::NCDHW</code>, <code>InferenceEngine::Layout::NCHW</code>, <code>InferenceEngine::Layout::NC</code>, and <code>InferenceEngine::Layout::C</code>) and non-planar layouts using <code>InferenceEngine::BlockingDesc</code>.
To create a complex layout, use <code>InferenceEngine::BlockingDesc</code>, which allows you to define blocked memory with offsets and strides.
## Examples
1. You can define a blob with dimensions {N: 1, C: 25, H: 20, W: 20} and the NHWC format using the following parameters:<br/>
<pre class="brush:cpp">
InferenceEngine::BlockingDesc({1, 20, 20, 25}, {0, 2, 3, 1}); // or
InferenceEngine::BlockingDesc({1, 20, 20, 25}, InferenceEngine::Layout::NHWC);
</pre>
2. If you have memory with real dimensions {N: 1, C: 25, H: 20, W: 20} but with channels blocked by 8, you can define it using the following parameters:<br/>
<pre class="brush:cpp">
InferenceEngine::BlockingDesc({1, 4, 20, 20, 8}, {0, 1, 2, 3, 1})
</pre>
3. You can also set strides and offsets if the layout requires them.
4. If you have a complex blob layout and do not want to calculate the real offset to the data, you can use the
<code>InferenceEngine::TensorDesc::offset(size_t l)</code> or <code>InferenceEngine::TensorDesc::offset(SizeVector v)</code> methods.<br/>
For example:
<pre class="brush:cpp">
InferenceEngine::BlockingDesc blk({1, 4, 20, 20, 8}, {0, 1, 2, 3, 1});
InferenceEngine::TensorDesc tdesc(InferenceEngine::Precision::FP32, {1, 25, 20, 20}, blk);
tdesc.offset(0); // = 0
tdesc.offset(1); // = 8
tdesc.offset({0, 0, 0, 2}); // = 16
tdesc.offset({0, 1, 0, 2}); // = 17
</pre>
5. To create a <code>TensorDesc</code> with a planar format for N dimensions (for example, 1, 2, or 4), you can use the
<code>InferenceEngine::TensorDesc::getLayoutByDims</code> method.
<pre class="brush:cpp">
InferenceEngine::TensorDesc::getLayoutByDims({1}); // InferenceEngine::Layout::C
InferenceEngine::TensorDesc::getLayoutByDims({1, 2}); // InferenceEngine::Layout::NC
InferenceEngine::TensorDesc::getLayoutByDims({1, 2, 3, 4}); // InferenceEngine::Layout::NCHW
InferenceEngine::TensorDesc::getLayoutByDims({1, 2, 3}); // InferenceEngine::Layout::BLOCKED
InferenceEngine::TensorDesc::getLayoutByDims({1, 2, 3, 4, 5}); // InferenceEngine::Layout::NCDHW
InferenceEngine::TensorDesc::getLayoutByDims({1, 2, 3, 4, 5, ...}); // InferenceEngine::Layout::BLOCKED
</pre>

View File

@@ -0,0 +1,77 @@
Migration from Inference Engine Plugin API to Core API {#openvino_docs_IE_DG_Migration_CoreAPI}
===============================
Starting with the 2019 R2 release, the new Inference Engine Core API is introduced. This guide is updated to reflect the new API approach. The Inference Engine Plugin API is still supported, but will be deprecated in future releases.
This section provides common steps to migrate your application written using the Inference Engine Plugin API (`InferenceEngine::InferencePlugin`) to the Inference Engine Core API (`InferenceEngine::Core`).
To learn how to write a new application using the Inference Engine, refer to [Integrate the Inference Engine Request API with Your Application](Integrate_with_customer_application_new_API.md) and [Inference Engine Samples Overview](Samples_Overview.md).
## Inference Engine Core Class
The Inference Engine Core class is implemented on top of the existing Inference Engine Plugin API and handles plugins internally.
The main responsibility of the `InferenceEngine::Core` class is to hide plugin specifics inside and provide a new layer of abstraction that works with devices (`InferenceEngine::Core::GetAvailableDevices`). Almost all methods of this class accept `deviceName` as an additional parameter that denotes an actual device you are working with. Plugins are listed in the `plugins.xml` file, which is loaded during constructing `InferenceEngine::Core` objects:
```xml
<ie>
    <plugins>
        <plugin name="CPU" location="libMKLDNNPlugin.so">
        </plugin>
        ...
    </plugins>
</ie>
```
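For example, a minimal sketch of listing the devices visible to a `Core` object (assuming the standard plugin configuration shown above):
```cpp
InferenceEngine::Core core;
for (const std::string& device : core.GetAvailableDevices()) {
    std::cout << device << std::endl;  // for example: CPU, GPU, MYRIAD
}
```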
## Migration Steps
Common migration process includes the following steps:
1. Migrate from the `InferenceEngine::InferencePlugin` initialization:
```cpp
InferenceEngine::InferencePlugin plugin = InferenceEngine::PluginDispatcher({ FLAGS_pp }).getPluginByDevice(FLAGS_d);
```
to the `InferenceEngine::Core` class initialization:
```cpp
InferenceEngine::Core core;
```
2. Instead of using `InferenceEngine::CNNNetReader` to read IR:
```cpp
CNNNetReader network_reader;
network_reader.ReadNetwork(fileNameToString(input_model));
network_reader.ReadWeights(fileNameToString(input_model).substr(0, input_model.size() - 4) + ".bin");
CNNNetwork network = network_reader.getNetwork();
```
read networks using the Core class:
```cpp
CNNNetwork network = core.ReadNetwork(input_model);
```
The Core class also allows reading models from ONNX format:
```cpp
CNNNetwork network = core.ReadNetwork("model.onnx");
```
3. Instead of adding CPU device extensions to the plugin:
```cpp
plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
```
add extensions to CPU device using the Core class:
```cpp
core.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>(), "CPU");
```
4. Instead of setting configuration keys to a particular plugin, set (key, value) pairs via `InferenceEngine::Core::SetConfig`
```cpp
core.SetConfig({{PluginConfigParams::KEY_CONFIG_FILE, FLAGS_c}}, "GPU");
```
> **NOTE**: If `deviceName` is omitted as the last argument, configuration is set for all Inference Engine devices.
5. Migrate from loading the network to a particular plugin:
```cpp
auto execNetwork = plugin.LoadNetwork(network, { });
```
to `InferenceEngine::Core::LoadNetwork` to a particular device:
```cpp
auto execNetwork = core.LoadNetwork(network, deviceName, { });
```
After you have an instance of `InferenceEngine::ExecutableNetwork`, all other steps are as usual.
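For completeness, a minimal sketch of those remaining steps (where `inputName` and `outputName` are placeholder names of the model inputs and outputs):
```cpp
auto inferRequest = execNetwork.CreateInferRequest();
InferenceEngine::Blob::Ptr inputBlob = inferRequest.GetBlob(inputName);
// ... fill inputBlob with your data ...
inferRequest.Infer();
InferenceEngine::Blob::Ptr outputBlob = inferRequest.GetBlob(outputName);
```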

View File

@@ -0,0 +1,100 @@
# ONNX* Importer API Tutorial {#openvino_docs_IE_DG_OnnxImporterTutorial}
> **NOTE**: This tutorial is deprecated. Since OpenVINO™ 2020.4 version, Inference Engine enables reading ONNX models via the Inference Engine Core API
> and there is no need to use directly the low-level ONNX* Importer API anymore.
> To read ONNX\* models, it is recommended to use the InferenceEngine::Core::ReadNetwork method, which provides a uniform way to read models from the IR or ONNX format.
This tutorial demonstrates how to use the ONNX\* Importer API.
This API makes it possible to create an nGraph `Function` object from an imported ONNX model.
All functions of the ONNX Importer API are in the [onnx.hpp][onnx_header] header file.
Two categories of API functions:
* Helper functions that check which ONNX ops are supported in a current version of the ONNX Importer
* Functions that read ONNX models from a stream or file and result in an nGraph function, which can be executed using the Inference Engine
## Check Which ONNX Ops Are Supported
To list all supported ONNX ops in a specific version and domain, use the `get_supported_operators` function,
as shown in the example below:
```cpp
const std::int64_t version = 12;
const std::string domain = "ai.onnx";
const std::set<std::string> supported_ops = ngraph::onnx_import::get_supported_operators(version, domain);
for(const auto& op : supported_ops)
{
std::cout << op << std::endl;
}
```
The above code prints all the supported operators for the `version` and `domain` you specified, producing output similar to this:
```cpp
Abs
Acos
...
Xor
```
To determine whether a specific ONNX operator in a particular version and domain is supported by the importer, use the `is_operator_supported` function as shown in the example below:
```cpp
const std::string op_name = "Abs";
const std::int64_t version = 12;
const std::string domain = "ai.onnx";
const bool is_abs_op_supported = ngraph::onnx_import::is_operator_supported(op_name, version, domain);
std::cout << "Abs in version 12, domain `ai.onnx`is supported: " << (is_abs_op_supported ? "true" : "false") << std::endl;
```
## Import ONNX Model
To import an ONNX model, use the `import_onnx_model` function.
The function has two overloads:
* <a href="#stream">`import_onnx_model` takes a stream as an input</a>, for example, file stream, memory stream
* <a href="#path">`import_onnx_model` takes a file path as an input</a>
Refer to the sections below for details.
> **NOTE**: The examples below use the ONNX ResNet50 model, which is available at the [ONNX Model Zoo][onnx_model_zoo]:
> ```bash
> $ wget https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz
> $ tar -xzvf resnet50.tar.gz
> ```
Once you create the `ng_function`, you can use it to run computation on the Inference Engine.
As shown in [Build a Model with nGraph Library](nGraphTutorial.md), `std::shared_ptr<ngraph::Function>` can be transformed into a `CNNNetwork`.
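For example, assuming `ng_function` has been obtained as shown in the sections below, the conversion is a one-liner:
```cpp
InferenceEngine::CNNNetwork network(ng_function);
```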
### <a name="stream">Stream as Input</a>
The code below shows how to convert the ONNX ResNet50 model to the nGraph function using `import_onnx_model` with the stream as an input:
```cpp
const std::string resnet50_path = "resnet50/model.onnx";
std::ifstream resnet50_stream(resnet50_path);
if(resnet50_stream.is_open())
{
try
{
const std::shared_ptr<ngraph::Function> ng_function = ngraph::onnx_import::import_onnx_model(resnet50_stream);
// Check shape of the first output, for example
std::cout << ng_function->get_output_shape(0) << std::endl;
// The output is Shape{1, 1000}
}
catch (const ngraph::ngraph_error& error)
{
std::cout << "Error when importing ONNX model: " << error.what() << std::endl;
}
}
resnet50_stream.close();
```
### <a name="path">Filepath as Input</a>
The code below shows how to convert the ONNX ResNet50 model to the nGraph function using `import_onnx_model` with the filepath as an input:
```cpp
const std::shared_ptr<ngraph::Function> ng_function = ngraph::onnx_import::import_onnx_model(resnet50_path);
```
[onnx_header]: https://github.com/NervanaSystems/ngraph/blob/master/src/ngraph/frontend/onnx_import/onnx.hpp
[onnx_model_zoo]: https://github.com/onnx/models

View File

@@ -0,0 +1,3 @@
# Optimization Notice {#openvino_docs_IE_DG_Optimization_notice}
![Optimization_notice](img/opt-notice-en_080411.gif)

View File

@@ -0,0 +1,15 @@
OpenVINO™ Python* package {#openvino_docs_IE_DG_PythonPackage_Overview}
========================
The OpenVINO™ Python\* package includes tools to measure model accuracy and performance and to calibrate models to low precision.
The OpenVINO™ Python\* package is available in the `<INSTALL_DIR>/python/python3.X` directory.
The OpenVINO™ Python\* package includes the following sub-packages:
- [openvino.inference_engine](../../inference-engine/ie_bridges/python/docs/api_overview.md) - Python\* wrapper on OpenVINO™ Inference Engine.
- `openvino.tools.accuracy_checker` - Measure accuracy.
- `openvino.tools.benchmark` - Measure latency and throughput.
## See Also
* [Introduction to Intel's Deep Learning Inference Engine](Introduction.md)

View File

@@ -0,0 +1,184 @@
# Inference Engine Samples {#openvino_docs_IE_DG_Samples_Overview}
The Inference Engine sample applications are simple console applications that show how to utilize specific Inference Engine capabilities within an application and assist developers in executing specific tasks such as loading a model, running inference, and querying specific device capabilities.
After installation of Intel® Distribution of OpenVINO™ toolkit, C, C++ and Python* sample applications are available in the following directories, respectively:
* `<INSTALL_DIR>/inference_engine/samples/c`
* `<INSTALL_DIR>/inference_engine/samples/cpp`
* `<INSTALL_DIR>/inference_engine/samples/python`
Inference Engine sample applications include the following:
- **[Automatic Speech Recognition C++ Sample](../../inference-engine/samples/speech_sample/README.md)** Acoustic model inference based on Kaldi neural networks and speech feature vectors.
- **Benchmark Application** Estimates deep learning inference performance on supported devices for synchronous and asynchronous modes.
- [Benchmark C++ Application](../../inference-engine/samples/benchmark_app/README.md)
- [Benchmark Python Application](../../inference-engine/tools/benchmark_tool/README.md)
- **Hello Classification Sample** Inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API. Input of any size and layout can be set to an infer request which will be pre-processed automatically during inference (the sample supports only images as inputs and supports Unicode paths).
- [Hello Classification C++ Sample](../../inference-engine/samples/hello_classification/README.md)
- [Hello Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_classification/README.md)
- **Hello NV12 Input Classification Sample** Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-processes it automatically during inference. The sample supports only images as inputs.
- [Hello NV12 Input Classification C++ Sample](../../inference-engine/samples/hello_nv12_input_classification/README.md)
- [Hello NV12 Input Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md)
- **Hello Query Device Sample** Query of available Inference Engine devices and their metrics, configuration values.
- [Hello Query Device C++ Sample](../../inference-engine/samples/hello_query_device/README.md)
- [Hello Query Device Python* Sample](../../inference-engine/ie_bridges/python/sample/hello_query_device/README.md)
- **[Hello Reshape SSD C++ Sample](../../inference-engine/samples/hello_reshape_ssd/README.md)** Inference of SSD networks resized by ShapeInfer API according to an input size.
- **Image Classification Sample Async** Inference of image classification networks like AlexNet and GoogLeNet using Asynchronous Inference Request API (the sample supports only images as inputs).
- [Image Classification C++ Sample Async](../../inference-engine/samples/classification_sample_async/README.md)
- [Image Classification Python* Sample Async](../../inference-engine/ie_bridges/python/sample/classification_sample_async/README.md)
- **[Image Classification Python* Sample](../../inference-engine/ie_bridges/python/sample/classification_sample/README.md)** Inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API (the sample supports only images as inputs).
- **Neural Style Transfer Sample** Style Transfer sample (the sample supports only images as inputs).
- [Neural Style Transfer C++ Sample](../../inference-engine/samples/style_transfer_sample/README.md)
- [Neural Style Transfer Python* Sample](../../inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md)
- **[nGraph Function Creation C++ Sample](../../inference-engine/samples/ngraph_function_creation_sample/README.md)** Construction of the LeNet network using the nGraph function creation sample.
- **Object Detection for SSD Sample** Inference of object detection networks based on SSD. This sample is a simplified version that supports only images as inputs.
- [Object Detection for SSD C++ Sample](../../inference-engine/samples/object_detection_sample_ssd/README.md)
- [Object Detection for SSD C Sample](../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md)
- [Object Detection for SSD Python* Sample](../../inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md)
## Media Files Available for Samples
To run the sample applications, you can use images and videos from the media files collection available at https://github.com/intel-iot-devkit/sample-videos.
## Samples that Support Pre-Trained Models
You can download the [pre-trained models](@ref omz_models_intel_index) using the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
## Build the Sample Applications
### <a name="build_samples_linux"></a>Build the Sample Applications on Linux*
The officially supported Linux* build environment is the following:
* Ubuntu* 16.04 LTS 64-bit or CentOS* 7.4 64-bit
* GCC* 5.4.0 (for Ubuntu* 16.04) or GCC* 4.8.5 (for CentOS* 7.4)
* CMake* version 2.8.12 or higher
To build the C or C++ sample applications for Linux, go to the `<INSTALL_DIR>/inference_engine/samples/c` or `<INSTALL_DIR>/inference_engine/samples/cpp` directory, respectively, and run the `build_samples.sh` script:
```sh
build_samples.sh
```
Once the build is completed, you can find sample binaries in the following folders:
* C samples: `~/inference_engine_c_samples_build/intel64/Release`
* C++ samples: `~/inference_engine_cpp_samples_build/intel64/Release`
You can also build the sample applications manually:
> **NOTE**: If you have installed the product as a root user, switch to root mode before you continue: `sudo -i`
1. Navigate to a directory that you have write access to and create a samples build directory. This example uses a directory named `build`:
```sh
mkdir build
```
> **NOTE**: If you ran the Image Classification verification script during the installation, the C++ samples build directory was already created in your home directory: `~/inference_engine_samples_build/`
2. Go to the created directory:
```sh
cd build
```
3. Run CMake to generate the Make files for release or debug configuration. For example, for C++ samples:
- For release configuration:
```sh
cmake -DCMAKE_BUILD_TYPE=Release <INSTALL_DIR>/inference_engine/samples/cpp
```
- For debug configuration:
```sh
cmake -DCMAKE_BUILD_TYPE=Debug <INSTALL_DIR>/inference_engine/samples/cpp
```
4. Run `make` to build the samples:
```sh
make
```
For the release configuration, the sample application binaries are in `<path_to_build_directory>/intel64/Release/`;
for the debug configuration — in `<path_to_build_directory>/intel64/Debug/`.
### <a name="build_samples_windows"></a>Build the Sample Applications on Microsoft Windows* OS
The recommended Windows* build environment is the following:
* Microsoft Windows* 10
* Microsoft Visual Studio* 2017, or 2019
* CMake* version 2.8.12 or higher
> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
To build the C or C++ sample applications on Windows, go to the `<INSTALL_DIR>\inference_engine\samples\c` or `<INSTALL_DIR>\inference_engine\samples\cpp` directory, respectively, and run the `build_samples_msvc.bat` batch file:
```sh
build_samples_msvc.bat
```
By default, the script automatically detects the highest Microsoft Visual Studio version installed on the machine and uses it to create and build
a solution for a sample code. Optionally, you can also specify the preferred Microsoft Visual Studio version to be used by the script. Supported
versions are `VS2017` and `VS2019`. For example, to build the C++ samples using the Microsoft Visual Studio 2017, use the following command:
```sh
<INSTALL_DIR>\inference_engine\samples\cpp\build_samples_msvc.bat VS2017
```
Once the build is completed, you can find sample binaries in the following folders:
* C samples: `C:\Users\<user>\Documents\Intel\OpenVINO\inference_engine_c_samples_build\intel64\Release`
* C++ samples: `C:\Users\<user>\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\intel64\Release`
You can also build a generated solution manually. For example, if you want to build C++ sample binaries in Debug configuration, run the appropriate version of the
Microsoft Visual Studio and open the generated solution file from the `C:\Users\<user>\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\Samples.sln`
directory.
## Get Ready for Running the Sample Applications
### Get Ready for Running the Sample Applications on Linux*
Before running compiled binary files, make sure your application can find the
Inference Engine and OpenCV libraries.
Run the `setupvars` script to set all necessary environment variables:
```sh
source <INSTALL_DIR>/bin/setupvars.sh
```
**(Optional)**: The OpenVINO environment variables are removed when you close the
shell. As an option, you can permanently set the environment variables as follows:
1. Open the `.bashrc` file in `<user_home_directory>`:
```sh
vi <user_home_directory>/.bashrc
```
2. Add this line to the end of the file:
```sh
source /opt/intel/openvino/bin/setupvars.sh
```
3. Save and close the file: press the **Esc** key, type `:wq` and press the **Enter** key.
4. To test your change, open a new terminal. You will see `[setupvars.sh] OpenVINO environment initialized`.
You are ready to run sample applications. To learn about how to run a particular
sample, read the sample documentation by clicking the sample name in the samples
list above.
### Get Ready for Running the Sample Applications on Windows*
Before running compiled binary files, make sure your application can find the
Inference Engine and OpenCV libraries.
Use the `setupvars` script, which sets all necessary environment variables:
```sh
<INSTALL_DIR>\bin\setupvars.bat
```
To debug or run the samples on Windows in Microsoft Visual Studio, make sure you
have properly configured **Debugging** environment settings for the **Debug**
and **Release** configurations. Set correct paths to the OpenCV libraries, and
debug and release versions of the Inference Engine libraries.
For example, for the **Debug** configuration, go to the **Debugging** category in the project's
**Configuration Properties** and set the `PATH`
variable in the **Environment** field to the following:
```sh
PATH=<INSTALL_DIR>\deployment_tools\inference_engine\bin\intel64\Debug;<INSTALL_DIR>\opencv\bin;%PATH%
```
where `<INSTALL_DIR>` is the directory in which the OpenVINO toolkit is installed.
You are ready to run sample applications. To learn about how to run a particular
sample, read the sample documentation by clicking the sample name in the samples
list above.
## See Also
* [Introduction to Intel's Deep Learning Inference Engine](Introduction.md)

View File

@@ -0,0 +1,112 @@
Using Shape Inference {#openvino_docs_IE_DG_ShapeInference}
==========================================
Inference Engine takes two kinds of model description as an input: [Intermediate Representation (IR)](../MO_DG/IR_and_opsets.md) and [nGraph::Function](nGraph_Flow.md) objects.
Both should have fixed input shapes to be successfully loaded to the Inference Engine.
To feed input data of a shape that is different from the model input shape, resize the model first.
Resizing the model at the stage of <a href="_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html#when_to_specify_input_shapes">IR generation</a> or [nGraph::Function creation](nGraphTutorial.md) is the recommended approach.
OpenVINO™ provides the following experimental methods for runtime model reshaping:
1. Setting a new input shape with the `InferenceEngine::CNNNetwork::reshape` method
`InferenceEngine::CNNNetwork::reshape` method updates input shapes and propagates them down to the outputs of the model through all intermediate layers.
Shape propagation for `InferenceEngine::CNNNetwork` objects created from `nGraph::Function` or IR of the version 10 works through the `nGraph` shape inference mechanism.
`InferenceEngine::CNNNetwork` objects created from lower IR versions are considered deprecated and may be reshaped incorrectly or give unexpected results.
To keep the v10 IR resizable by the `InferenceEngine::CNNNetwork::reshape` method, convert the model with the additional Model Optimizer key `--keep_shape_ops`.
2. Setting a new batch dimension value with the `InferenceEngine::CNNNetwork::setBatchSize` method
The meaning of a model batch may vary depending on choices you made during the model designing.
The `InferenceEngine::CNNNetwork::setBatchSize` method deduces the index of the batch dimension based only on the input rank.
This method does not work for models in which the batch dimension is not at index zero or whose inputs have no batch dimension.
The batch-setting algorithm does not involve the shape inference mechanism.
The batch dimension of input and output shapes for all layers is set to the new value without layer validation.
It may cause both positive and negative side effects.
Due to the limitations described above, the current method is recommended for simple image processing models only.
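A minimal sketch of this method, assuming `network` is an `InferenceEngine::CNNNetwork` with a regular [N, C, H, W] input:
```cpp
network.setBatchSize(8);  // sets the batch dimension of all inputs and outputs to 8 without running shape inference
```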
Practically, some models are not ready to be resized. In this case, a new input shape cannot be set with the Model Optimizer or the `InferenceEngine::CNNNetwork::reshape` method.
## Troubleshooting Resize Errors
Operation semantics may impose restrictions on input shapes of the operation.
Shape collision during shape propagation may be a sign that a new shape does not satisfy the restrictions.
Changing the model input shape may result in intermediate operations shape collision.
Examples of such operations:
- <a href="_docs_MO_DG_prepare_model_convert_model_IR_V10_opset1.html#Reshape">`Reshape` operation</a> with a hard-coded output shape value
- <a href="_docs_MO_DG_prepare_model_convert_model_IR_V10_opset1.html#MatMul">`MatMul` operation</a> with the `Const` second input cannot be resized by spatial dimensions due to operation semantics
Model structure and logic should not change significantly after resizing.
- The Global Pooling operation is commonly used to reduce the output feature map of classification models.
Having the input of the shape [N, C, H, W], Global Pooling returns the output of the shape [N, C, 1, 1].
Model architects usually express Global Pooling with the help of the `Pooling` operation with the fixed kernel size [H, W].
During spatial reshape, having the input of the shape [N, C, H1, W1], Pooling with the fixed kernel size [H, W] returns the output of the shape [N, C, H2, W2], where H2 and W2 are commonly not equal to `1`.
It breaks the classification model structure.
For example, [publicly available Inception family models from TensorFlow*](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models) have this issue.
- Resizing the model input shape may significantly affect its accuracy.
For example, Object Detection models from TensorFlow have resizing restrictions by design.
To keep the model valid after the reshape, choose a new input shape that satisfies conditions listed in the `pipeline.config` file.
For details, refer to the <a href="_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html#tf_od_custom_input_shape">Tensorflow Object Detection API models resizing techniques</a>.
## Usage of Reshape Method
The primary method of the feature is `InferenceEngine::CNNNetwork::reshape`.
It takes new input shapes and propagates them from inputs to outputs through all intermediate layers of the given network.
The method takes `InferenceEngine::ICNNNetwork::InputShapes`, a map of pairs: the name of input data and its dimensions.
The algorithm for resizing a network is the following:
1) **Collect the map of input names and shapes from Intermediate Representation (IR)** using helper method `InferenceEngine::CNNNetwork::getInputShapes`
2) **Set new input shapes**
3) **Call reshape**
Here is a code example:
```cpp
InferenceEngine::Core core;
// ------------- 0. Read IR and image ----------------------------------------------
CNNNetwork network = core.ReadNetwork("path/to/IR/xml");
cv::Mat image = cv::imread("path/to/image");
// ---------------------------------------------------------------------------------
// ------------- 1. Collect the map of input names and shapes from IR---------------
auto input_shapes = network.getInputShapes();
// ---------------------------------------------------------------------------------
// ------------- 2. Set new input shapes -------------------------------------------
std::string input_name;
SizeVector input_shape;
std::tie(input_name, input_shape) = *input_shapes.begin(); // let's consider first input only
const size_t batch_size = 1;  // target batch size (example value)
input_shape[0] = batch_size;  // set batch size to the first input dimension
input_shape[2] = image.rows; // changes input height to the image one
input_shape[3] = image.cols; // changes input width to the image one
input_shapes[input_name] = input_shape;
// ---------------------------------------------------------------------------------
// ------------- 3. Call reshape ---------------------------------------------------
network.reshape(input_shapes);
// ---------------------------------------------------------------------------------
...
// ------------- 4. Loading model to the device ------------------------------------
std::string device = "CPU";
ExecutableNetwork executable_network = core.LoadNetwork(network, device);
// ---------------------------------------------------------------------------------
```
Shape Inference feature is used in [Smart classroom sample](@ref omz_demos_smart_classroom_demo_README).
## Extensibility
Inference Engine provides a special mechanism that allows you to add support of shape inference for custom operations.
This mechanism is described in the [Extensibility documentation](Extensibility_DG/Intro.md).

View File

@@ -0,0 +1,17 @@
# OpenVINO™ Tools {#openvino_docs_IE_DG_Tools_Overview}
OpenVINO™ tools are C++ and Python\* console command-line applications that can be used for downloading models, measuring accuracy, calibration, and model checking.
The OpenVINO™ toolkit installation includes the following tools:
|Tool | Location in the Installation Directory|
|-----------------------------------------------------------------------------|---------------------------------------|
|[Accuracy Checker Tool](@ref omz_tools_accuracy_checker_README) | `<INSTALL_DIR>/deployment_tools/tools/open_model_zoo/tools/accuracy_checker`|
|[Post-Training Optimization Tool](@ref pot_README) | `<INSTALL_DIR>/deployment_tools/tools/post_training_optimization_toolkit`|
|[Model Downloader](@ref omz_tools_downloader_README) | `<INSTALL_DIR>/deployment_tools/tools/model_downloader`|
|[Cross Check Tool](../../inference-engine/tools/cross_check_tool/README.md) | `<INSTALL_DIR>/deployment_tools/tools/cross_check_tool`|
|[Compile Tool](../../inference-engine/tools/compile_tool/README.md) | `<INSTALL_DIR>/deployment_tools/inference_engine/lib/intel64/`|
## See Also
* [Introduction to Deep Learning Inference Engine](Introduction.md)

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5389b6d0a25e8356002bd8c68526ceedf39f6c4efa5e7097b5ac0308fd42dee3
size 48611

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c416156d9ed77213ead230fc49c32a3c3918e52128ac2db442f56062e206bc01
size 708262

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ce6fb1c626ac0858b411c86fa2e3a46c5ca0dc2e88692284ce4ec24edb141e7f
size 9326

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:80edd1da1c5673d18afa44bc2c0503ba9ecdcc37c2acb94960303b61c602ceee
size 12649

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d3e8856aa175d6fcf940af57a53f962ff6c58acf0a3838bfccc6a093bff1756d
size 9015

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d53ce33f180cf4d170bbeb69635ee7c49a67d3f6ee8b1c01ec12568fe1cca38
size 17157

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3965f4830c45518ee1dc169c2b1760cae83f8a8819023770a28893c6cef558c2
size 68441

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25ed719bdd525dc0b606ef17a3fec5303ea032dfe6b2d167e1b19b6100b6fb37
size 16516

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55c5fd6517ae9e3639f2214167665ffbb4b641cd2abef155ff816c68478915e2
size 54233

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5fbfb33c1a860978b8b99cf4dfbc04b5f7fbe0e20af03cd3e5ffd1d6a9f2db40
size 353490

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f0f329112b9c8227cbba3d394b778a6d219b4f3fc0d02cc5f2f8598c3d4eb51
size 151678

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b46a1f89df96410a87f90801c9a86a28a6aacb39fa4677b434d856559f163fe
size 217954

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88745fd132531e943d59afe59ed6af8eaae6b62ba1fda2493dfef76080d31a25
size 7788

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9709bc83f903943b4d737d379babf80a391a72ad8eab98e71abcc0de5424fbfc
size 12361

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6ff04de33684f00d0d2da8fed6d30b5162c566b35b8894e9e14f7921db70592
size 8598

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9a453412cf37f06e1e5a63f5ff629d4e16ed1707fc55b5a63cc03e710807b33e
size 10151

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b3be59a71703b640eac6ad99ce3d463141a36e58f5299bf21e4f6aba152d9ed6
size 9359

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:50f41274758a989c9ef43e558343d420d7e4e288c88ac2d19a2bf396d5ee573c
size 9937

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9fff52e5faaf108371db87e53959453216554152b15ca0432b1541f94def297e
size 19145

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2d147adf801535e95d8b627a8a1d23f7b89dea1eabe06218235e756b0a9866fe
size 1636

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9aae473dcc469ebdb5c2d9ac8067bf8c7caa11d4cdbc7e0dd0b2006621ce526
size 4267

Some files were not shown because too many files have changed in this diff.