Compare commits


759 Commits

Evgenya Stepyreva
af16ea1d79 Revert "Fix experimental detectron do ref impl (#10621)" (#12683) (#13009)
* Revert "Fix experimental detectron do ref impl (#10621)"

This reverts commit d87233863d.

* Disabled Experimental Detectron per agreement with GPU team. Ticket to fix it: 90209
2022-09-12 18:16:13 +04:00
Mateusz Tabaka
dcc8f926e1 [ONNX] Update external data location in Constant nodes (#12992)
Ticket: 91271
2022-09-09 20:22:11 +03:00
Ekaterina Aidova
c0762847a7 openvino-dev: bring opencv-python back (#12957) 2022-09-09 18:48:47 +03:00
Sergey Shlyapnikov
af29d221b4 [GPU] Add NV12 -> Grayscale mode support (#12988)
* [GPU] Add NV12 -> Grayscale mode support

* Fix uv plane shape
2022-09-09 19:00:37 +04:00
mei, yang
0f5a45c875 add GenerateProposals single layer test (#12967) 2022-09-08 20:38:10 +04:00
yanlan song
facf990dfd fix inconsistent tbb config due to executor used in multi (#12929)
* fix inconsistent tbb config due to executor used in multi

Signed-off-by: fishbell <bell.song@intel.com>

* refine comment

Signed-off-by: fishbell <bell.song@intel.com>

Signed-off-by: fishbell <bell.song@intel.com>
2022-09-08 13:34:22 +08:00
Ilya Churaev
eb24795c66 Fix tbb for macos 22.2 (#12952)
* Fixed build for TBB which uses pre-release functions

* Disable TBB for macOS only

* Changed condition
2022-09-07 19:59:01 +04:00
Yuan Xu
e21b51a53b Fix a link anchor for pypi page (#12950)
* fix the link for pip

* update the <a> tags
2022-09-07 13:42:44 +04:00
Yuan Xu
b2c00c66a7 fix the link for pip (#12946) 2022-09-07 10:17:42 +04:00
Tomasz Dołbniak
d84da15de5 Use absolute path in some cpuFuncTests (#12902)
* Use absolute path in some cpuFuncTests

* Missing include
2022-09-06 11:57:27 +03:00
Mateusz Bencer
917a465a00 added op check tests for RDFT and IRDFT (#12918) 2022-09-06 12:53:26 +04:00
Yuan Xu
320ed5b94c add new articles for using binaries (#12216)
* Add Overview page

* Revert "Add Overview page"

* init (#11985)

* [GPU] Pass convolution unit tests on DG2 (#12056)

* scale -> eltwise

* Proofreading-OV-Runtime (#11658)

* Update docs/OV_Runtime_UG/protecting_model_guide.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/protecting_model_guide.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/protecting_model_guide.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/protecting_model_guide.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/protecting_model_guide.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/protecting_model_guide.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/ARM_CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/ARM_CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/ARM_CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/ARM_CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/ARM_CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/ARM_CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/optimization_guide/dldt_deployment_optimization_common.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GPU_RemoteTensor_API.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/HDDL.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/HDDL.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/HDDL.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/MYRIAD.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/MYRIAD.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/MYRIAD.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/ov_dynamic_shapes.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/config_properties.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/config_properties.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/preprocessing_details.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/preprocessing_details.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/preprocessing_details.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/preprocessing_details.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/performance_hints.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/deployment/deployment-manager-tool.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Apply suggestions from code review

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/preprocessing_details.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/performance_hints.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/preprocessing_details.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/performance_hints.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/OV_Runtime_UG/deployment/deployment-manager-tool.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Apply suggestions from code review

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Apply suggestions from code review

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* Update ref links

* Update Getting_performance_numbers.md

* Update deployment_intro.md

* Update preprocessing_details.md

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update tools/pot/openvino/tools/pot/algorithms/quantization/default/README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/deployment/deployment-manager-tool.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update tools/pot/openvino/tools/pot/algorithms/quantization/default/README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update automatic_batching.md

* Update docs/OV_Runtime_UG/automatic_batching.md

* Update docs/OV_Runtime_UG/ShapeInference.md

* Update deployment-manager-tool.md

* Update deployment-manager-tool.md

* Update docs/OV_Runtime_UG/deployment/deployment-manager-tool.md

* Update automatic_batching.md

* Update automatic_batching.md

* Update docs/OV_Runtime_UG/ShapeInference.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update integrate_with_your_application.md

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/model_representation.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/model_representation.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update integrate_with_your_application.md

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update model_representation.md

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update integrate_with_your_application.md

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Additional_Optimizations.md

Removing redundant information.

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Additional_Optimizations.md

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Additional_Optimizations.md

* Update docs/OV_Runtime_UG/model_representation.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/layout_overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update model_representation.md

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/docs/SaturationIssue.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/openvino/tools/pot/algorithms/quantization/accuracy_aware/README.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/docs/SaturationIssue.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/docs/SaturationIssue.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/docs/SaturationIssue.md

* Update tools/pot/docs/SaturationIssue.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update README.md

* Update README.md

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/docs/Introduction.md

* Update tools/pot/docs/AccuracyAwareQuantizationUsage.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Removing one-liners

Removing introductory sentences from 'Supported Features' sections.

* Update docs/OV_Runtime_UG/openvino_intro.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/benchmarks/performance_benchmarks_ovms.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/docs/Introduction.md

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update tools/pot/docs/DefaultQuantizationUsage.md

* Update tools/pot/docs/BestPractices.md

* Update tools/pot/docs/BestPractices.md

* Update tools/pot/docs/AccuracyAwareQuantizationUsage.md

* Update docs/optimization_guide/model_optimization_guide.md

* Update docs/optimization_guide/dldt_deployment_optimization_guide.md

* Update docs/OV_Runtime_UG/supported_plugins/config_properties.md

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

* Update docs/OV_Runtime_UG/preprocessing_usecase_save.md

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: msmykx <101244365+msmykx-intel@users.noreply.github.com>
Co-authored-by: Piotr Milewski <piotr.milewski@intel.com>

* updated to fuse activation in eltwise_vload8 (#12084)

* [GPU] Fix gather data type issue (#12085) (#12085)

* setting tput as the default performance mode only for AUTO, excluding MULTI plugin. (#12083)

Signed-off-by: ywang2 <yang4.wang@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>

* [C API][COVERITY SCAN] Fix the TAINTED_SCALAR and DEADCODE in Coverity Scan (#12087)

* Fix the Coverity scan issues

* Fix the insecure data handling (TAINTED_SCALAR) issue found in coverity scan

* [hotfix] pytest error of act_act example (#12093)

* [hotfix] pytest error of act_act example

* remove needless import

* NonZero operation: uncomment tests since they can be passed now (#11548)

* NonZero operation: uncomment tests since they can be passed now

# Conflicts:
#	src/tests/functional/plugin/cpu/shared_tests_instances/skip_tests_config.cpp

* Unbreak tests once more by changing base class from LayerTestsCommon to SubgraphBaseTest

* Unbreak compilation / style

* Add test case for cache

Co-authored-by: Chenhu Wang <chenhu.wang@intel.com>

* Increase zeroes count for NonZero tests

* Correct the change

* Remove my previous changes and add dynamic shapes / repeatable shapes into the correct file

Co-authored-by: Chenhu Wang <chenhu.wang@intel.com>

* [SAMPLES] Remove unused commandline arguments for speech_sample (#11892)

* GNA SF propagation fix (#11806)

* Fix the uninitialized value issue found in Coverity Scan (#12098)

* [GPU] Assign-6 and ReadValue-6 (#11780)

* Add methods for access to variables information in Program class

* add ReadValue and Assign primitives

* ReadValue and Assign implementations

* Implementation of memory states allocation

* Add output existence check in primitive_inst to avoid crashes if output is set during execution

* Add memory states management functionality in network component

* Integration of memory states feature in inference request component

* Exclude constant path for read_value and assign nodes in cldnn transformations

* Improve memory states test to run on a single inference request

* unit tests for ReadValue and Assign

* single-layer test for ReadValue and Assign

* Add QueryState API implementation

* Add memory state test which covers dynamic batch case

Co-authored-by: Oleksii Khovan <okhovan@lohika.com>

* [GNA] Add automatic model splitting for compiled graphs (#12001)

* DOCS-code-reference-css-style-change (#12109)

code formatting changed from blue to black, to distinguish from links

* Virtual destructor for the base class (#12102)

* [GPU] Pass Resample unit tests on DG2 (#12052)

* fix validate_fusings_gpu error
* fix biased scale testcase

* [GPU] Pass lrn unit tests on DG2 (#11986)

* [GPU] Pass reduce unit tests on DG2 (#12086)

* scale to eltwise

* [CPU] Move cpu_dump_check into CPU plugin's tools folder (#12100)

* Move cpu_dump_check into CPU plugin's tools folder

* remove cpu from names

* Update README

* Zlib update to 1.2.12 (#12128)

* [GNA] Reduce impact of sf propagation fix (#12115)

* [GPU] Simplify namespaces in the plugin part (#12121)

* [GNA] Add support for future devices with relaxed capabilities (#12000)

* [GPU] Pass eltwise unit tests on DG2 (#12113)

* check fusion in onednn too

* [GPU] modify fusing condition for reduce (#12119)

Signed-off-by: Min, Byungil <byungil.min@intel.com>

* Enable tensor offset to GemmKernelRef for input padding support (#12133)

Signed-off-by: Andrew Park <andrew.park@intel.com>

* [PYTHON][BENCHMARK_APP] Add BGR convert to Gray function (#12118)

* Fix the JIRA 80700 issue. Add BGR convert to Gray function

* Support NCHW and NHWC

Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>

* [CPU] revert pr 11990 and enable brgconv avx512 on SPR by default (#12105)

* polish onednn cc readme (#12114)

* [ONNX] Add operator com.microsoft.Fusedgemm support into frontend/onnx (#11878)

* [GPU] Implement NMS-9 operation (#11890)

* Fix GPU NonMaxSuppression implementation

* Introduce Nms9 single layer tests

* Adapt internal NMS and GPU implementation for NMS9 implementation

* Adapt CPU implementation in GPU for NMS9

* Add blocked layouts support to NMS

* Add unit tests for blocked formats for NMS

* Fix boxes groups size for the small shapes

* Use ocl implementation for blocked layout input

* Fix templates typedefs to pass win build

* Fix second output to set data in correct format

* [POT] optimizer - update usage of IndexSampler (#12146)

* Revert "[GPU] Pass activation unit tests on DG2 (#11969)" (#12167)

This reverts commit 3334e8933c.

* Fix IRDFT for case when axes are in reversed order (#12155)

* [MO] Fix output shape bug in GatherNDDecomposition (#12110)

* [GPU] Add reorder from i32 to f32 for max-pooling/conv/fc which doesn't support i32 (#12137)

* Update pypi.org pages (#12170)

* fix references

* update links

* update the wording to be more clear

* add the error message about Visual studio back

* update links to static html links of 2022.2

* Ubuntu 22.04 support (#11472)

* Ubuntu 22.04 support

* Try to fix setuptools

* Try to fix arm

* Try to add more packages

* Test 2

* test 3

* Turn dependencies download off

* Fix

* Fix

* Fix

* Fix

* Fix

* test

* Fix

* restore everything

* Try to restore

* Restore install_openvino_dependencies.sh

* clean-up raspbian

* New if conditions

* Removed excess dependencies

* Cosmetic changes

* Removed autotools

* Removed libgtk-2

* Added HDDL libs

* Test

* Removed some dependencies

* Better fixes

* Removed some dependencies

* Fixed compilation

* Removed all extra

* [GPU] optimize permute_ref (#12159)

* change memory access pattern of fsv layout for permute

* Fix permute_ref to process F first only when (bf...) => (b...f)

* Refactor

Co-authored-by: si-eun-kim <sieun.kim@intel.com>

* Update naming of the last operators in the graph (#12139)

* Update opset.md with opset9 (#12169)

* [GPU] integrate persistent caching for onednn (#12094)

* integrate persistent caching for onednn
* add api to save/load binary file.

* Check memory allocation size of network graph (#11911)

+ Add exception handling for out of resource

* TI repetitive shape inference (#12178)

* Fixes for system libraries pugixml, tbb (#12206)

* Fixes for system libraries pugixml, tbb

* Added more dependencies for core

* Debian packages: base version (#11387)

* Xp/benchmark app ocl (#12112)

* Add a tip about enabling OpenCL for benchmark_app.

Signed-off-by: xipingya <xiping.yan@intel.com>

* Export doesn't work; we need to add -Dopencl_root_hints=[PATH]/OpenCL-CLHPP/include to the cmake command.

Signed-off-by: xipingya <xiping.yan@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>

* ONNX: Pass name to the InputEdge (#12177)

* [IE TESTS][CONFORMANCE] Fix OpImplCheck Precision (#12148)

* add new article for using binaries

* [PyOV][DOCS] Python API contribution and developer guide (#12145)

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* [DOC][CPU] Denormals optimization doc (#12127)

* Use system pugixml where it's possible (#12218)

* Restore FEM to be static instance (#12219)

* Restore FEM to be static instance

* Restore frontend manager in ie_read_network.cpp

* [MO] Fix TopK partial shape inference with dynamic K (#12212)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [CPU] Fixed heap sort bug regarding heapifying (#12221)

* [CPU] Explicitly enable DNNL_VERBOSE only in case of CPU_DEBUG_CAPS (#12108)

* [GNA] Fixed convolutions with shared transpose and un-fuse-able activations after Convolution filter (Renew PR11373) (#12152)

* Commits from PR11373:
Fixed handling of transpose after convolution
[GNA] Fixed calculation of dimensions for ConvolutionFilter and PWL primitives
[GNA] Fixed coverity error and failed tests

* Apply comments

* Update src/plugins/intel_gna/gna_graph_compiler.cpp

Co-authored-by: Marcin Kusmierski <marcin.kusmierski@intel.com>

* Update src/plugins/intel_gna/gna_graph_compiler.cpp

Co-authored-by: Marcin Kusmierski <marcin.kusmierski@intel.com>

* Rollback names

* Separate test data

* Move coverity issue to separate request

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
Co-authored-by: Marcin Kusmierski <marcin.kusmierski@intel.com>

* [GNA] Fix accuracy degradation in compact mode (#12150)

* [TF FE] Handle optional attributes for Convolutional operations (#12230)

* [TF FE] Handle optional attributes for Convolutional operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* update the information for pypi.org pages

* [GPU] ROIAlign v9 support (#11899)

* ROIAlign v9 support

* Code changes after review1

* Code changes after review2

* fix of single layer test for Windows

* Since PR #12043 we no longer need a strict include order of primitive_base.hpp and impls/implementation_map.hpp

* Code changes after review3

* Code changes after review4

* update the verifying checksum step

* Fixed Windows backslash paths (#12250)

* update install_dir info

* Move GNU build flag to "cmake/developer_package/compile_flags/sdl.cmake" (#12143)

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* [MO] Fix Mul fusion with dynamic dimension (#12253)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* updates

* update wording for pypi.org

* Fixed newAPI for case if core was removed (#12207)

* Fixed newAPI for case if core was removed

* Fixed code style

* Fixed typo

* Use new API by default

* Create core with template plugin

* Added doxygen comment

* Install user provided TBB as well (#12260)

* Disable loading of v7 reader for new IR versions (#12252)

* Disable loading of v7 reader for new IR versions

* Try to fix CI

* Fixed PDPD frontend

* Fixed error message creation

* Fixes for cases when TBB_DIR env var is set (#12266)

* Fixes for cases when TBB_DIR env var is set

* Don't use make in build_samples.sh script

* [GPU] Get rid of direct layout::size field usages (#12172)

* [GPU] Get rid of direct layout::size field usages to simplify further replacement

* [GPU] Enabled -Wall and resolved compiler complaints

* Update summarize.py (#12175)

* [CPU] Add RDFT and IRDFT operators (#12099)

* [CPU] Add RDFT and IRDFT operators

Tickets: 79178 and 79192

Co-authored-by: Mateusz Bencer <mateusz.bencer@intel.com>

* Remove Interpolate Transposes as it does nothing (#12205)

* [TF FE] Implement LinSpace and BatchMatMul translators (#12271)

* [TF FE] Implement LinSpace and BatchMatMul translators

It helps to convert STN model (from e2e testing) using TensorFlow frontend

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix BatchMatMul translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix LinSpace operation translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update error message on pypi.org (#12243)

* Add Overview page

* Revert "Add Overview page"

* fix references

* update links

* update the wording to be more clear

* add the error message about Visual studio back

* update links to static html links of 2022.2

* port changes to master

* update description

* update commands and uninstallation

* Add const fold check in operators instead of a pass (#12189)

* Add const fold check in operators instead of a pass
- refactor the constant fold pass to use ov instead of ngraph
- add constant_folding_is_disabled overload for raw pointer

* Remove Reshape from skip const inferences
in legacy graph transformer

* Const fold test for modified operators

* [GPU] Use int64_t type for axis in softmax (#12287)

* remove obsolete info from source files to avoid confusion

* [DOC] [CPU] Proofreading for grammatical and stylistic corrections (#12288)

* Porting to master - update -readme for CPP and Python benchmark (#12245)

Porting #11961

* Fixed build_samples.sh not to call setupvars.sh for Debian package case (#12309)

* Investigate GNA tests (#12267)

* Test commit

* Revert "Disable loading of v7 reader for new IR versions (#12252)"

This reverts commit cb6ca7bb89.

* Revert "Test commit"

This reverts commit 977b83f2ba.

* [PyOV] Test refactoring (#12248)

* [GNA] Add missing support for batch normalization with weights broadcasting. Add unit tests. (#12301)

* Xiaoxia/onetbb old version (#12303)

* support old oneTBB version

* fix oneTBB version mismatch issues

* fix clang issue

* add 'tbb' path to setupvars.sh and OpenVINOConfig.cmake.in

* Update scripts/setupvars/setupvars.sh

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>

* simple Windows installer POC (#12308)

* Fixes for cases when TBB_DIR env var is set

* Don't use make in build_samples.sh script

* First version of Windows installer

* WIndows NSIS installer

* [GPU] Fix get_default_params & choose_impl not to depend on program_node (#12239)

* Getting rid of dependency on get_default_param for typed_program_node

* Fix bug

* Enable two paths to call choose_impl / does_possible_impl_exists / does_an_impl_exists to be able to use a given layout

* Replaced impl factory API to get kernel_impl_param's pointer

* Update for recently added primitives

* Add and apply optional_layout

* fix kernel_param_impl to be handled as unique_ptr

* Applied review comments

* Fix rebase conflict

* Fix CI error

* [CC]Fix CC issue for transformation (#12292)

* Revert "Fixed 3 naming issue"

This reverts commit a92d3cfff5.

* Revert "Fix CC issues for transformation and snippets"

This reverts commit d08a3f5aac.

* Fix NGRAPH_PASS_CALLBACK issue to make it work

* Fix matcher name missing issue

* [TF FE] Fix conversion of NetVLAD model (#12328)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [MO] Fix broken port numbering for Constant operations (#12318)

* Restore inputs order in IR Reader

* Fix broken port numbering for Constant operations

Co-authored-by: Chetverikov <anton.chetverikov@intel.com>

* [GPU] Align TopK parameters with ngraph (#12278)

* [GPU] Use int64_t type for axis in CumSum (#12306)

* [GPU] Use int64_t type for axis in ScatterElementsUpdate (#12323)

* Bump OMZ submodule to fix pip-conflicts ssues (#12320)

* [PyOV] Enable type casters (#12204)

* add type caster for ov::Layout, enable load method to take pathlib.Path as argument (a usage sketch follows this commit entry)

* fix typo

* fix style

* add missing blank line

* add common function to check if py::object is either Path or string

* fix style

* Update src/bindings/python/src/pyopenvino/graph/preprocess/pre_post_process.hpp

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* add tests, fix style, remove pointer argument overload

* fix style

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* [GNA] Replace GNA SoftSign by opset9 SoftSign (#12302)

* Replace GNA SoftSign by opset9 SoftSign

* v9 -> opset9

* [GPU] ScatterUpdate axis alignment (#12233)

* [GPU] added is_dynamic methods to program_node and primitive_inst. Minor refactoring (#12322)

* updates

* [GPU] Remove dependency on typed_program_node from calc_output_layout (#12378)

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Use static pointers to frontend libraries (#12235)

* Add static shared_objects map in FEM
- add unit tests for frontend lib close
- not use static FEM in ie network reader
- add main for gtest which can use manifest file to filter tests

* Move library pointers map to manger impl
- add to manger impl method to make frontend from loaded plugin

* Add shutdown function to ov namespace
it cleans the static resources

* Revert changes related to linking main for tests

* Add python binding to ov::openvino_shutdown

* Renamed shutdown method and added to legacy C++ API

(cherry picked from commit a8395bd207)

* Added C bindings

(cherry picked from commit d2c9ddc263)

* Move frontend lib close test to ieFunctTest
- moved so as not to introduce a new test binary and CI modification;
  the frontend tests use a dynamically linked frontend lib which is loaded
  on test application start and masks lib close tests
- remove gtest_main_manifest as not required now
- add ov::shutdown test to expect application crash

* Fix lib_close test
- remove the no-longer-needed get_disabled_tests from utils
- revert CMake file formatting

* Fix get model path in lib close tests

* Skip frontend lib close tests if static lib build

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

* Decompose NormalizeL2 on GPU (#12361)

* [TF FE] Implement translators for TensorFlow ConvBackpropInput operations (#12356)

* [TF FE] Implement ConvBackPropInput translators

Now the translators support the dynamic input_sizes attribute and different padding modes,
including EXPLICIT mode

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix clang-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback and fix build issues

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback: check for input size

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix retrieving explicit_padding attribute

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code style

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add debug log showing the result of the transformation callback (#12365)

* [AUGRU] AUGRUCell/Sequence op specification (#12162)

* [GPU] Add exception handling for calc_output_layout (#12393)

* Add exception handling for calc_output_layout

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Apply comment to error handler

Signed-off-by: Andrew Park <andrew.park@intel.com>

* [GPU]get data type of conv weights from node.weights() when network is internal (#12232)

* get data type of convolution weights from node.weights() when network is internal

* use only instance.node.weights().get_output_layout().data_type

* fix typo

* add unit test for the case

* Update pre_replace_deconv to support output_shape for transposed conv (#12335)

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Improved OpenVINO debian packages (#12385)

* [GPU] implement lru_cache (#12349) (#12349)

* Fix memory leak issue

Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>

Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>

* DOCS-fix_maths_formatting (#12402)

mathematical equation formatting issue fixed in POT readme for range supervision

* [GPU] Pass concat unit tests on DG2 (#12142)

* check optimized
* skip kernel compile when optimized

* GroupedGatherElimination short circuit (#12380)

* Disable GroupedGatherElimination in case of scalar inputs containing indices

* clang format

* [MO, POT] Top up upper bounds for TensorFlow and NumPy modules in all requirement files (#12191)

* [MO] Relax MO upper-bound requirements for TensorFlow and NumPy

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Just debug numpy version

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Pin upper-bounds for NumPy and TensorFlow modules in all reqs files

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update submodule dependency for open_model_zoo

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Install numpy module first

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update NumPy version in POT setup.py

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Extend telemetry tests with a set of possible solutions for events

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update NumPy module version for layer tests

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [GPU] Added common impl for optionals (#12366)

* [LPT] Correct a check for whether model is quantized (#12364)

Look inside subgraph operations, such as TensorIterator, Loop, If, etc

* Update doc for AUTO and AUTO_BATCH (#12265)

* Update doc for AUTO and AUTO_BATCH

Signed-off-by: Chen Peter <peter.chen@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* fix: incorrect fq type (#12234)

Co-authored-by: Wonju Lee <wonju.lee@intel.com>

* Implement workaround to convert non-frozen models using new TensorFlow frontend (#12386)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Revert "Merge branch 'master' into add-install-binaries-22/2"

This reverts commit f4d6f04636, reversing
changes made to e505e739e2.

* update comments

* update comments

* Update docs/install_guides/installing-openvino-from-archive-windows.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* update OpenCV installation

* Update docs/install_guides/uninstalling-openvino.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/uninstalling-openvino.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/uninstalling-openvino.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* update uninstall wording

* add C++ redistributable to pypi.org pages

* update pypi.org pages and opencv for macOS

* update whats next

* add a note about long paths on Windows

* fix errors

* update CMake dependency

* fix formatting

* apply the same changes from Ilya's comments

* update uninstall, remove dev from pkg names

* update C++ requirements according to Ilya's requests

Signed-off-by: Min, Byungil <byungil.min@intel.com>
Signed-off-by: Andrew Park <andrew.park@intel.com>
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Signed-off-by: Yan, Xiping <xiping.yan@intel.com>
Co-authored-by: Felix Dohyun Kim <tuxedcat@gmail.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: msmykx <101244365+msmykx-intel@users.noreply.github.com>
Co-authored-by: Piotr Milewski <piotr.milewski@intel.com>
Co-authored-by: Eddy Kim <eddy.kim@intel.com>
Co-authored-by: Paul Youngsoo Ahn <paul.y.ahn@intel.com>
Co-authored-by: Wang, Yang <yang4.wang@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: RICKIE777 <ruiqi.yang@intel.com>
Co-authored-by: Bonhun Koo <bonhun.koo@intel.com>
Co-authored-by: avoskoboinyk-lohika <avoskoboinyk@lohika.com>
Co-authored-by: Chenhu Wang <chenhu.wang@intel.com>
Co-authored-by: Marcin Kusmierski <marcin.kusmierski@intel.com>
Co-authored-by: Szymon Irzabek <szymon.jakub.irzabek@intel.com>
Co-authored-by: Yaroslav Torzuk <yaroslav.torzuk2@altran.com>
Co-authored-by: Oleksii Khovan <okhovan@lohika.com>
Co-authored-by: Tomasz Dołbniak <tomasz.dolbniak@intel.com>
Co-authored-by: Tingqian Li <tingqian.li@intel.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
Co-authored-by: Krzysztof Bruniecki <krzysztof.bruniecki@intel.com>
Co-authored-by: Min, Byungil <byungil.min@intel.com>
Co-authored-by: Andrew Kwangwoong Park <andrew.park@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
Co-authored-by: Luo Cheng <cheng.luo@intel.com>
Co-authored-by: zihan wu <zihan.wu@intel.com>
Co-authored-by: sheng.gui@intel.com <guisheng315@sina.com>
Co-authored-by: Tetiana Gubanova <tgubanova@lohika.com>
Co-authored-by: Mateusz Bencer <mateusz.bencer@intel.com>
Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Kelvin Choi <kelvin.choi@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
Co-authored-by: si-eun-kim <sieun.kim@intel.com>
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Sungeun Kim <sungeun.kim@intel.com>
Co-authored-by: Jade Cho <jade.cho@intel.com>
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
Co-authored-by: Xiping Yan <xiping.yan@intel.com>
Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>
Co-authored-by: Irina Efode <irina.efode@intel.com>
Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: River Li <river.li@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Chen Xu <chen.xu@intel.com>
Co-authored-by: Egor Duplenskii <egor.duplensky@gmail.com>
Co-authored-by: Nadezhda Ageeva <nadezhda.ageeva@intel.com>
Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
Co-authored-by: Konstantin Beluchenko <kostiantyn.bieliuchenko@altran.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
Co-authored-by: Pawel Raasz <pawel.raasz@intel.com>
Co-authored-by: Roman Lyamin <Roman.Lyamin@intel.com>
Co-authored-by: almilosz <108654258+almilosz@users.noreply.github.com>
Co-authored-by: Sun Xiaoxia <xiaoxia.sun@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Co-authored-by: Chetverikov <anton.chetverikov@intel.com>
Co-authored-by: Alina Kladieva <alina.kladieva@intel.com>
Co-authored-by: Bartek Szmelczynski <bartosz.szmelczynski@intel.com>
Co-authored-by: Wilson Seok <wilson.seok@intel.com>
Co-authored-by: Inhyuk Jo <andy.inhyuk.jo@intel.com>
Co-authored-by: Wonju Lee <wonju.lee@intel.com>
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-09-06 12:19:12 +04:00
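
A minimal usage sketch of the pathlib.Path type caster mentioned in the "[PyOV] Enable type casters (#12204)" item above, assuming the OpenVINO 2022.x Python API; the model file names are placeholders, not part of the diff.

```python
# With the pybind11 type caster in place, pathlib.Path objects are accepted
# wherever a model path string was previously required.
from pathlib import Path

from openvino.runtime import Core

core = Core()
model = core.read_model(Path("model.xml"), Path("model.bin"))  # placeholder files
compiled_model = core.compile_model(model, "CPU")
```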
Anastasia Popova
37097c71cc Fixed error in indices processing. (#12869) 2022-09-02 14:28:00 +02:00
Roman Kazantsev
9b170e63fd [TF FE] Add Transpose Sinking for Prelu operation (#12832)
* [TF FE] Add Transpose Sinking for Prelu operation

Now it covers a case with a scalar slope.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add unit-tests for Transpose sinking of Prelu

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix non-scalar slope case

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-09-01 11:22:22 +04:00
Mateusz Tabaka
41fa6f360b Explicitly link onednn with tbb for tbb version in [2018,2019.4] (#12789) (#12837)
Ticket: 89800
2022-08-31 17:14:54 +03:00
Ilya Lavrenov
1e9da3f5de Added version generation as in CI (#12521) (#12831) (#12840)
* Added version generation as in CI (#12521)

* Allow CI_BUILD_NUMBER to define only build number
2022-08-31 17:40:41 +04:00
Roman Kazantsev
6987465875 [Python API] Replace deprecated NumPy type np.bool (#12786) (#12824)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-31 15:46:45 +04:00
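
For context on the np.bool replacement above, a sketch of the kind of rewrite involved (assuming nothing about the actual diff beyond the deprecated alias itself):

```python
# NumPy >= 1.20 deprecates the np.bool alias (removed in 1.24), so
#   mask = np.zeros((2, 3), dtype=np.bool)   # DeprecationWarning, later an error
# has to be rewritten using the builtin bool or the NumPy scalar type:
import numpy as np

mask = np.zeros((2, 3), dtype=bool)         # builtin bool works as a dtype
mask_np = np.zeros((2, 3), dtype=np.bool_)  # explicit NumPy boolean scalar type
```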
Gorokhov Dmitriy
a0b661a274 [CPU] Fixed MHA accuracy for mixed precision case (#12820) 2022-08-31 10:53:38 +04:00
Alina Kladieva
a466b3fea6 Port py checks changes (#12826)
* Run py checks on changes to yaml

* Try using setup-python@v4

* Use ubuntu20.04
2022-08-30 16:24:41 +04:00
Roman Kazantsev
cb6b1fe56f [TF FE] Fix BatchToSpace translator (#12815)
According to the specification, we must have the same type for the block_shape and crops inputs

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-30 14:24:36 +04:00
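
A sketch of the constraint the fix above enforces; shapes and values are illustrative assumptions, built with the opset9 Python helpers:

```python
# block_shape, crops_begin and crops_end are created with one shared integer
# element type (i64 here), as the BatchToSpace specification requires.
import numpy as np
from openvino.runtime import opset9 as ops

data = ops.parameter([4, 1, 2, 2], dtype=np.float32, name="data")
block_shape = ops.constant(np.array([1, 1, 2, 2], dtype=np.int64))
crops_begin = ops.constant(np.array([0, 0, 0, 0], dtype=np.int64))
crops_end = ops.constant(np.array([0, 0, 0, 0], dtype=np.int64))

b2s = ops.batch_to_space(data, block_shape, crops_begin, crops_end)
```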
Evgenya Stepyreva
66d3048598 Make model reshape and track batch (#12736) (#12777)
* Make model reshape and track batch (#12736)

* CVS-89672 Make model reshape and track batch

* Minor refactoring

* Changed mechanism of constant replacement to a more mature one

* Update src/common/transformations/include/transformations/smart_reshape/lstm_states_broadcast.hpp

* Update src/common/transformations/src/transformations/smart_reshape/lstm_states_broadcast.cpp

* Comments resolving

* Style and getting rid of asserts

* style

* Apply suggestions from code review
2022-08-26 16:53:47 +00:00
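
A generic sketch of the reshape call this change hardens (file name and shapes are assumptions): reshaping an input must also propagate the new batch through stateful subgraphs such as broadcast LSTM states.

```python
from openvino.runtime import Core, PartialShape

core = Core()
model = core.read_model("model.xml")  # placeholder model

# Set batch to 8 on the first input; the smart-reshape transformations are
# expected to track this batch change into the LSTM state subgraphs as well.
name = model.input(0).get_any_name()
model.reshape({name: PartialShape([8, 3, 224, 224])})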
Roman Kazantsev
d2e06d4f25 [TF FE] Port fixes for Convolutional operations, ExtractImagePatches and MatrixDiag (#12764)
* [TF FE] Implement translators for ExtractImagePatches and MatrixDiag (#12593)

It allows converting the Inpaint model and inferring it correctly

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* [TF FE] Correct Deconvolution for NCHW layout

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Revert Deconvolution implementation and work around -1 for SS

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fixing Conv3DBackpropInputV2 operation translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2022-08-26 16:38:30 +04:00
Tomasz Dołbniak
72d7b518ca cltools update to 22.08 [2022/2] (#12690)
* cltools update to 22.08

* Hash update

* Hash update

* Adjustments for the new package
2022-08-26 15:28:40 +04:00
Sergey Shlyapnikov
41a404f290 [GPU] fix Transpose issue for ConvertColor with FakeQuantize. (#12645) (#12761)
Co-authored-by: Tang Wei <wei1.tang@intel.com>
Co-authored-by: Kurt Chen <kurt.chen@intel.com>
2022-08-26 12:29:21 +04:00
Ekaterina Aidova
abaa9e6404 [OMZ] update submodule (#12681)
* [OMZ] update submodule

* move submodule
2022-08-26 10:53:38 +04:00
Sergey Shlyapnikov
429c7265df [GPU] Implement NMS-9 operation (#11890) (#12760)
* Fix GPU NonMaxSuppression implementation

* Introduce Nms9 single layer tests

* Adapt internal NMS and GPU implementation for NMS9 implementation

* Adapt CPU implementation in GPU for NMS9

* Add blocked layouts support to NMS

* Add unit tests for blocked formats for NMS

* Fix boxes groups size for the small shapes

* Use ocl implementation for blocked layout input

* Fix templates typedefs to pass win build

* Fix second output to set data in correct format

Co-authored-by: Tetiana Gubanova <tgubanova@lohika.com>
2022-08-26 00:37:20 +04:00
Trawinski, Dariusz
7123433ce3 adjustments for 2022.2 release on rh platform (#12758) 2022-08-26 00:30:34 +04:00
Nikita Malinin
319e95e419 fix: incorrect fq type (#12234) (#12757)
Co-authored-by: Wonju Lee <wonju.lee@intel.com>
(cherry picked from commit 0592ba3e8c)

Co-authored-by: Inhyuk Jo <andy.inhyuk.jo@intel.com>
2022-08-25 16:43:39 +00:00
Maxim Vafin
bafd45502b Fix issue with Squeeze with empty squeeze_dims (#12700)
* Fix issue with Squeeze with empty squeeze_dims

* Rework solution

* Apply code style

* Improve error logging

* Improve formatting

* Add more types

* Apply review feedback

* Add file which was forgotten
2022-08-25 18:30:19 +04:00
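
For reference, the semantics at stake in the Squeeze fix above, illustrated with the NumPy analogue rather than the actual diff: an empty squeeze_dims means "remove every dimension of size 1", while an explicit list removes only the named axes.

```python
import numpy as np

x = np.zeros((1, 3, 1, 2))

print(np.squeeze(x).shape)          # (3, 2)    -- all size-1 dims removed
print(np.squeeze(x, axis=0).shape)  # (3, 1, 2) -- only axis 0 removed
```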
Maxim Vafin
4ea602bc7e Use new reprocessing for legacy MO (#11302) (#12653)
Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2022-08-25 18:25:03 +04:00
Liubov Talamanova
891f1c49bc [POT][OV 2022.2] Fixed insert_fake_quantize() with empty hw_config (#12678) 2022-08-25 12:23:38 +00:00
Sergey Shlyapnikov
a3f8cef198 [GPU] Shared memory optimization for network::execute_impl() call (#12748) 2022-08-25 15:49:56 +04:00
Artur Kulikowski
826a54dc20 Backport of #12713 "MO uses the same version of protobuf like other packages" (#12734)
* MO uses the same version of protobuf as other packages

* Restrict Protobuf to version >=3.18.1 and lower than 4.0.0
2022-08-25 13:17:14 +02:00
Yuan Xu
99b8c80677 update with external suggestions (#12726) 2022-08-25 11:45:31 +04:00
guozhong wang
f409e95768 do not remove cpu when bind buffer (#12556)
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-08-25 09:05:42 +03:00
Roman Kazantsev
d2f7816e6f [TF FE] Port changes for TF FE from the master branch (#12691)
* [TF FE] Add Transpose Sinking for additional unary-wise Operations

It helps to fix performance degradation for MobileNet models

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add LogicalNot for Transpose sinking

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Support dynamic rank support for Convolutional and Pooling operations (#12661)

* [TF FE] Add dynamic rank support for Convolutional and Pooling operations

Refactor DepthwiseConv2D, AvgPool, and FusedBatchNorm operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue with rvalue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue with climit

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Skip duplication of Parameter nodes

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Revert changes in StridedSlice and add check for AvgPool operation type

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Revert the rest of changes for StridedSlice

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix translator for AvgPool: add pad mode

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Introduce helper default_op_checks

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Refactor translators for Resize operations and correct Pooling (#12721)

* [TF FE] Refactor translators for Resize operations and correct Pooling

It allows converting the magenta_arbitrary-image-stylization model

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Align TF FE translator for Resize with legacy frontend

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Do minor fix for MaxPool

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-24 21:10:05 +03:00
Nikita Malinin
3980672082 Disable LWT tests (#12740) 2022-08-24 17:48:27 +00:00
Evgenya Stepyreva
6a20d1408e GroupedGatherElimination short circuit (#12380) (#12733)
* Disable GroupedGatherElimination in case of scalar inputs containing indices

* clang format

Co-authored-by: Tomasz Dołbniak <tomasz.dolbniak@intel.com>
2022-08-24 16:04:28 +03:00
Chen Xu
1e5fec7e25 [CPU] Reduce node improve performance for nspc layout (#12671) 2022-08-24 15:39:55 +04:00
Maxim Vafin
188746224c Add ScatterUpdate value infer (#12595) (#12714)
* Add ScatterUpdate value infer

* Add additional test case to ScatterUpdate tests
2022-08-24 14:51:03 +04:00
Luwei Zhou
aa1a607328 [CPU] Fix the strided slice issue when ellipsis_mask has redundant data. (#12705) 2022-08-24 09:43:08 +04:00
Artur Kulikowski
6fecdbca36 Backport of #12650 "Properly reading parameters with whitespaces from IR" (#12677)
* Add overridden method generating a vector of strings

* Trim the value from the left and right

* Add test to verify that output names are correctly read from IR

* Use spaces instead of tabs

* Add C++ tests for reading a model that contains outputs with whitespaces

* Fix test for add output

* Remove python test
2022-08-23 21:29:04 +03:00
Andrei Kochin
f87e00398d updated to convert b_fs_yx_fsv16 to o_is_yx_isv16 (#12630) (#12675)
Co-authored-by: Eddy Kim <eddy.kim@intel.com>
2022-08-23 15:46:54 +03:00
Maxim Vafin
4cdd8119da [MO] Improve layout help (#12535) (#12590)
* [MO] Improve layout help
2022-08-23 13:10:55 +02:00
Tomasz Dołbniak
714b1de678 GridSample op check test (#12586) 2022-08-23 12:06:11 +02:00
Zhen Zhao (Fiona)
0000550371 Update to add climits for ULLONG_MAX (#11958) (#12709)
Avoid GCC compilation issue: ‘ULLONG_MAX’ was not declared in this scope

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-08-23 12:52:30 +04:00
Gorokhov Dmitriy
a6bfc0cf0e [CPU] Support MHA optimization (#12643)
* [CPU] Support MHA optimization

* [CPU] Extend pattern supported by MHA node

* [CPU] MHA: fixed int8 perf issue

Co-authored-by: Gu, Jianan <jianan.gu@intel.com>
2022-08-23 12:50:02 +04:00
Ilya Lavrenov
b4d18bb406 Don't use system tbb for 2022.2 (#12702) 2022-08-23 10:34:40 +04:00
yanlan song
4d9443eb0e do not call get_profiling in threads (#12635)
* do not call get_profiling in threads

Signed-off-by: fishbell <bell.song@intel.com>

* indent

Signed-off-by: fishbell <bell.song@intel.com>

Signed-off-by: fishbell <bell.song@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-08-23 13:50:52 +08:00
Ilya Lavrenov
d770b535fb Don't run CPU tests if some previous steps have failed (#12701) 2022-08-23 02:41:26 +04:00
Alina Kladieva
5a0dea4a46 Cherry-pick U22 adoption in github actions (#12550) (#12697)
* Cherry-pick U22 adoption in github actions

* More fixes for shellcheck

* More fixes for shellcheck

* Update .github/workflows/py_checks.yml

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-08-23 01:50:29 +04:00
Alina Kladieva
c8d57bbc77 Cherry-pick disable CUDA plugin building Azure (#12699) 2022-08-23 01:49:49 +04:00
Artyom Anokhov
a8f2365563 coverage.cmake: Added general target for collecting coverage counters for whole project (#12655) 2022-08-22 18:20:37 +04:00
Daniil Lyakhov
8a1d34d317 [POT] Gold References Update (#12646)
* [POT] Precommit reference update (#12304)

* WIP graph tests fixing

* Fix collectors graph tests

* remove debug code

* Fix rebase

* eps update for scales tests

* Outputs for some reference models were changed

* Sanity reference metrics update for VNNI CI hosts

* The unused hyperopt dependency, which broke Python 3.6 support, is commented out

* Minor comments fixes

* [POT] Finetuned model reference update (#12610)

* Finetuned model reference update

* Comment with AVX512 reference value

* [hotfix] pytest error of act_act example (#12093)

* [hotfix] pytest error of act_act example

* remove needless import

Co-authored-by: Bonhun Koo <bonhun.koo@intel.com>
2022-08-22 12:25:42 +00:00
Mateusz Bencer
7fe32c89ae [MO] Fix SSliceComplex transformation (#12538) 2022-08-20 12:15:08 +02:00
Maxim Vafin
067c21f110 [TF FE] Refactor constant reading to not use protobuf directly (#12518) (#12651)
* Refactor constant reading

* Remove needless code

* Implement compressed value reading

* Remove needless protobuf headers

* Remove commented code

* Remove unnecessary comment

* Apply review feedback

* Fix linux build

* Fix win build

* Fix copyright
2022-08-19 20:02:11 +04:00
Roman Kazantsev
aafabb41b8 [MO, POT] Top up upper bounds for TensorFlow and NumPy modules in all requirement files (#12191) (#12628)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-19 19:33:25 +04:00
Luo Cheng
e03fbd5c15 [CPU] Default enable avx512 f32 brgconv (#12620) 2022-08-19 17:59:15 +04:00
Xiake Sun
4e02bd2771 Add missing patchelf dependency for REHL8 for openvino runtime python wheel build (#12618) (#12625) 2022-08-18 22:19:22 +04:00
Mateusz Tabaka
8ca594f49a handle tbb library path like .../tbb/lib/intel64/gcc4.8 (#12606) 2022-08-18 13:42:19 +03:00
Artur Kulikowski
2c78fdb7c7 Fix: Refreshing of places after subgraph extraction (#12497) 2022-08-18 11:30:23 +02:00
Roman Kazantsev
544b3f8191 [TF FE] Port TF FE changes from master for integration with OVTF (#12575)
* [TF FE] Handle optional attributes for Convolutional operations (#12230)

* [TF FE] Handle optional attributes for Convolutional operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Implement LinSpace and BatchMatMul translators (#12271)

* [TF FE] Implement LinSpace and BatchMatMul translators

It helps to convert STN model (from e2e testing) using TensorFlow frontend

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix BatchMatMul translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix LinSpace operation translator

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code style rules

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Fix conversion of NetVLAD model (#12328)

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Implement translators for TensorFlow ConvBackpropInput operations (#12356)

* [TF FE] Implement ConvBackPropInput translators

Now the translator supports a dynamic input_sizes attribute and different padding modes,
including the EXPLICIT mode

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix clang-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code-style issue

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback and fix build issues

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply code-review feedback: check for input size

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix retrieving explicit_padding attribute

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix code style

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Fix StridedSlice translator for new_axis vector size longer than input rank (#12442)

* [TF FE] Fix StridedSlice translator for new_axis vector longer than input rank

Currently, the new_axis vector is cut to the input rank, which leads to the loss of new axes.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Use int64 type in mask_to_vector function

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Refactor translators for Conv2d and Conv3d (#12444)

It allows converting the CNN-Transformer model; the padding was previously incorrect.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Implement conversion for Attention OCR model (#12428)

* [TF FE] Implement conversion for Attention OCR model

The following scope of work is done to make Attention OCR convertible:
1. Refactored translators for BiasAdd, Slice, and ArgMax operations; added a translator for the StopGradient operation.
2. The previous traversal algorithm for computing the topologically sorted node list was incorrect. It is now implemented based on the topologically_sorted function from core/graph_util.hpp.
3. Unsupported data types are now preliminarily converted to the undefined type so that they can be cut off.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* [TF FE] Refactor MaxPool operation translator for xj_feature model (#12485)

* [TF FE] Refactor MaxPool operation translator for xj_feature model

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Correct MaxPoolV2 since it has three inputs

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2022-08-17 16:16:48 +04:00
guozhong wang
5bd1e64a42 remove test case LoadNetwork_SingleIECore (#12597) 2022-08-17 09:33:19 +04:00
Xuejun Zhai
66257530e3 [Coverity Scan] sample issue from CS fix (#12509)
Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>
Co-authored-by: River Li <river.li@intel.com>
2022-08-16 11:09:00 +08:00
Ekaterina Aidova
cfbf5a1808 [releases/2022/2] openvino-dev uses opencv-python-headless as default (#12559) 2022-08-15 21:58:06 +03:00
Ilya Lavrenov
4f03abe2ca [SAMPLES] Fix flake issues in Python speech sample (#12514) (#12529)
* Fix flake issues

* Add whitespace

* Add whitespaces in tests asserts

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
2022-08-15 16:58:13 +04:00
Anastasia Popova
389c970c12 Update date changed. (#12531) 2022-08-15 09:08:57 +03:00
Ilya Lavrenov
29628a89b7 Tbb port (#12541)
* Fixes for TBB 2018-2019.4

* Fixed CVS-89248
2022-08-15 06:26:47 +04:00
Andrew Kwangwoong Park
f3adf63f6b [GPU] Disable TCs for OVClassHeteroExecutableNetworkGetMetricTest (#12433) (#12472)
Signed-off-by: Andrew Park <andrew.park@intel.com>

Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-08-13 21:56:23 +09:00
Mateusz Tabaka
c0212a361a [CPU] Add RDFT and IRDFT operators (#12290)
Tickets: 79178 and 79192

Co-authored-by: Mateusz Bencer <mateusz.bencer@intel.com>
2022-08-12 14:10:53 +02:00
Adrian Boguszewski
d8d5dfb34a Fixed NameError: name 'ARCH' is not defined on Raspberry Pi (#12421) (#12512)
(cherry picked from commit fe4e875586)
2022-08-12 12:43:25 +04:00
Mateusz Bencer
e628fae196 [GPU] Decompose NormalizeL2 for not supported cases (#12404) 2022-08-11 11:32:03 +02:00
Min, Byungil
f0f6896fc0 [GPU] Fix network loading time related to onednn engine creation (#12492)
+ The benchmark cache_dir option takes longer than the cl_cache_dir env variable when loading a network.
+ For clDNN execution, the benchmark cache_dir created an onednn_engine whenever the ONEDNN_ENABLE config was ON.
+ Creation of the onednn_engine in ocl_engine is changed to on-demand.

Signed-off-by: Min, Byungil <byungil.min@intel.com>

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-08-11 09:32:20 +04:00
Tomasz Dołbniak
9163114290 Undefined Behavior sanitizer fixes [2022/2] (#12339)
* UBSan errors fix

* Cleanup
2022-08-11 08:03:21 +04:00
River Li
53a3cb377b [C 2.0 API]revert OV C 2.0 APIs in 2022.2 release branch (#12180)
* Revert "[C API] Enable hello_nv12_input_classification samples for C APIs of OV API 2.0 (#12031)"

This reverts commit 70d967ffb6.

* Revert "Add hello_classification_ov_c test (#11933)"

This reverts commit ebeb0a3802.

* Revert "Refine ov_partial_shape for OV API 2.0 C interface (#11891)"

This reverts commit ce5b2c6a45.

* Revert "Enable unit test for OV 2.0 C API (#11828)"

This reverts commit c4fdcafa70.

* Revert "OV 2.0 C API (#11700)"

This reverts commit 8faf8f2d89.
2022-08-11 07:16:26 +04:00
Alina Kladieva
ac805c66e1 Update Azure refs for 2022/2 (#12501) 2022-08-10 18:21:11 +00:00
Evgenya Stepyreva
c9afc5a5c1 Auto Batch: if disabled during cmake (#12382) (#12479) 2022-08-10 09:51:26 +00:00
River Li
d328b00e48 [CC]Fix CC issue for transformation (#12292) (#12489)
* Revert "Fixed 3 naming issue"

This reverts commit a92d3cfff5.

* Revert "Fix CC issues for transformation and snippets"

This reverts commit d08a3f5aac.

* Fix NGRAPH_PASS_CALLBACK issue to make it work

* Fix matcher name missing issue
2022-08-10 11:36:51 +04:00
Wilson Seok
1788c86943 change to node.weights() from weights_memory(0) (#12407) 2022-08-10 16:18:58 +09:00
Ilya Churaev
32713f744d Use static pointers to frontend libraries (#12235) (#12471)
* Add static shared_objects map in FEM
- add unit tests for frontend lib close
- do not use static FEM in ie network reader
- add a main for gtest which can use a manifest file to filter tests

* Move library pointers map to manager impl
- add a method to the manager impl that makes a frontend from a loaded plugin

* Add shutdown function to the ov namespace;
it cleans up the static resources (see the usage sketch after this entry)

* Revert changes related to linking main for tests

* Add python binding to ov::openvino_shutdown

* Renamed shutdown method and added to legacy C++ API

(cherry picked from commit a8395bd207)

* Added C bindings

(cherry picked from commit d2c9ddc263)

* Move frontend lib close test to ieFunctTest
- moved so as not to introduce a new test binary and CI modifications;
  the frontend tests use a dynamically linked frontend lib, which is loaded
  on test application start and masks lib close tests
- remove gtest_main_manifest as it is no longer required
- add ov::shutdown test that expects the application to crash

* Fix lib_close test
- remove the no-longer-needed get_disabled_tests from utils
- revert CMake file formatting

* Fix get model path in lib close tests

* Skip frontend lib close tests if static lib build

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>

Co-authored-by: Pawel Raasz <pawel.raasz@intel.com>
2022-08-10 09:05:12 +04:00
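
A minimal usage sketch of the shutdown call added in the entry above (ov::shutdown is named in the commit notes; the umbrella header, placeholder model path, and call placement are assumptions):

    #include <openvino/openvino.hpp>  // assumed to expose ov::shutdown()

    int main() {
        {
            ov::Core core;
            auto model = core.read_model("model.xml");  // placeholder path
            auto compiled = core.compile_model(model, "CPU");
            // ... run inference ...
        }  // Core and compiled model are destroyed here.

        // Per the commit notes, this releases static resources such as
        // loaded frontend libraries before the process exits.
        ov::shutdown();
        return 0;
    }
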
Andrew Kwangwoong Park
ea302afb47 Update pre_replace_deconv to support output_shape for transposed conv (#12418)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-08-10 10:37:51 +09:00
Tomasz Dołbniak
5871d5dc38 OpenCV build switched off by default [2022/2] (#12213) 2022-08-09 14:45:52 +02:00
Ilya Lavrenov
125adeaf29 CVS-88328: Ported fixes for TBB (#12461)
* Fixed WIndows backslash paths (#12250)

* Install user provided TBB as well (#12260)

* Fixes for cases when TBB_DIR env var is set (#12266)

* Fixes for cases when TBB_DIR env var is set

* Don't use make in build_samples.sh script

* Xiaoxia/onetbb old version (#12303)

* support oneTBB old version

* fix oneTBB version mismatch issues

* fix clang issue

* add 'tbb' path to setupvars.sh and OpenVINOConfig.cmake.in

* Update scripts/setupvars/setupvars.sh

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>

* Trying to fix CVS-85530 (#12455)

Co-authored-by: Sun Xiaoxia <xiaoxia.sun@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-08-08 23:40:37 +00:00
Tomasz Dołbniak
a1bd02e633 Friendly names fix for ONNX models (#12412) 2022-08-08 21:56:03 +02:00
Trawinski, Dariusz
3068b3823c changes needed for rhel8 certification (#12242)
* changes needed for rhel8 certification

* preserve opencl drivers in version 21

* updated comment about supported RH versions
2022-08-08 11:36:11 +04:00
Chen Peter
71b97b69a8 Add notes for AUTO hints (#12076)
* Update doc for AUTO and AUTO_BATCH

Signed-off-by: Chen Peter <peter.chen@intel.com>

* Update per the comments

Signed-off-by: Chen Peter <peter.chen@intel.com>

* Move default hint to THROUGHPUT section

Signed-off-by: Chen Peter <peter.chen@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-08-04 14:26:21 +08:00
Katarzyna Mitrus
e9030cca21 Update opset.md with opset9 (#12171) 2022-07-26 14:13:58 +02:00
Sebastian Golebiewski
c591d773d4 Porting to 2022.2 - DOCS-code-reference-css-style-change (#12198)
Porting the following PR:

https://github.com/openvinotoolkit/openvino/pull/12109/

to 2022.2
2022-07-26 14:13:13 +02:00
Sebastian Golebiewski
5f4999117d Porting to 2022.2 - update readme for CPP and Python benchmark (#12246)
Porting #11961 to 2022.2
2022-07-26 14:12:38 +02:00
Ilya Churaev
c9f9795d29 Fixed new API for the case when core was removed (#12208)
* Fixed new API for the case when core was removed

* Fixed code style

* Fixed typo

* Use new API by default

* Create core with template plugin

* Added doxygen comment

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-07-23 11:53:26 +00:00
Maciej Smyk
28922e2080 Install guide update for 2022.2 (#12222)
* Toctree

* Fixing reference

* Linux

* Windows

* macOS

* Raspbian-OS

* Whats-Next-Section

* References

* HDDL-MYRIAD

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Runtime-fix

* Update docs/install_guides/installing-openvino-overview.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update installing-openvino-overview.md

* Update docs/OV_Runtime_UG/supported_plugins/MYRIAD.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/HDDL.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update configurations-for-intel-gpu.md

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/configurations-for-intel-gpu.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/configurations-for-intel-gpu.md

* Delete installing-openvino-images.md

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/install_guides/troubleshooting-steps.md

* Update installing-model-dev-tools.md

* Update configurations-for-intel-gpu.md

* Revert "Update configurations-for-intel-gpu.md"

This reverts commit f5294de324.

* Revert "Update installing-model-dev-tools.md"

This reverts commit 9109a916d6.

* ID-fix

* Update installing-openvino-macos.md

Co-authored-by: sgolebiewski-intel <101244613+sgolebiewski-intel@users.noreply.github.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-07-21 13:55:28 +00:00
Tomasz Dołbniak
4a88aa0493 ipython removed from the dependencies of docs (#12193) 2022-07-20 16:39:43 +02:00
Kelvin Choi
3a72200f92 [GPU] Add reorder from i32 to f32 for max-pooling/conv/fc which doesn't support i32 (#12144) 2022-07-20 22:14:22 +09:00
Egor Duplenskii
fdae95a769 [CPU] Explicitly enable DNNL_VERBOSE only in case of CPU_DEBUG_CAPS (#12151)
and rely on oneDNN default behavior otherwise
2022-07-20 14:07:42 +04:00
Sebastian Golebiewski
483f38e6d8 Porting OV Runtime to 2022.2 (#12192)
Porting OV Runtime (PR #11658) to 2022.2

https://github.com/openvinotoolkit/openvino/pull/11658/
2022-07-20 11:14:45 +02:00
River Li
c144702d8b Restore static fem in 2022.2 (#12223)
* Restore FEM to be static instance

* Restore frontend manager in ie_read_network.cpp
2022-07-20 10:29:52 +04:00
Yuan Xu
79db96d61e Pypi org updates 22/2 (#12210)
* fix references

* update links

* update the wording to be more clear

* add the error message about Visual studio back

* update links to static html links of 2022.2
2022-07-19 06:26:32 +00:00
Chenhu Wang
123f8e62bf [DOC][CPU] Denormals optimization document (#12132) 2022-07-18 16:37:44 +04:00
Taylor Yeonbok Lee
8c80f9ff58 [GPU] optimize permute_ref (#12160)
* change memory access pattern of fsv layout for permute

* Fix permute_ref to process F first only when (bf...) => (b...f)

* Refactor

Co-authored-by: si-eun-kim <sieun.kim@intel.com>
2022-07-18 18:26:00 +09:00
Eddy Kim
de5e9bb397 Revert "[GPU] Pass activation unit tests on DG2 (#11969)" (#12165)
This reverts commit 3334e8933c.
2022-07-18 18:25:45 +09:00
zihan wu
32f800c6a6 [CPU] polish onednn cc readme (#12114) (#12176) 2022-07-15 16:36:31 +00:00
Min, Byungil
b492f98d30 [GPU] modify fusing condition for reduce (#12147)
Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-07-15 16:07:43 +09:00
Andrew Kwangwoong Park
9c49b71c11 Enable tensor offset to GemmKernelRef for input padding support (#12140)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-07-15 16:01:35 +09:00
Tomasz Dołbniak
02330bc11c Zlib update to 1.12.2 (#12129) 2022-07-14 14:40:23 +02:00
Luo Cheng
4412e1ddfa [CPU] revert pr 11990 and enable brgconv avx512 on SPR by default (#12134) 2022-07-14 14:10:51 +04:00
Tingqian Li
b7b3f0ab4a move cpu_dump_check into CPU plugin's tools folder (#12123) 2022-07-13 13:38:17 +08:00
Paul Youngsoo Ahn
0621e8cf28 [GPU] Fix gather data type issue (#12089) (#12089) 2022-07-12 19:01:07 +09:00
Tomasz Dołbniak
9d6d84088f Virtual destructor for the base class (#12103) 2022-07-12 11:55:41 +02:00
Eddy Kim
a63dad6fdd updated to fuse activation in eltwise_vload8 (#12092) 2022-07-12 18:51:48 +09:00
Wang, Yang
bbc1c26750 setting tput as the default performance mode only for AUTO, excluding MULTI plugin. (#12090)
Signed-off-by: ywang2 <yang4.wang@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-07-10 15:16:59 +08:00
Eddy Kim
8d852b4aee fixed 'is_rotating_except_batch' to follow the IE order (#12050) 2022-07-08 15:36:17 +09:00
Min, Byungil
b5c058738d Update module version to remove vulnerability (#12054) 2022-07-08 08:28:48 +02:00
Tingqian Li
bc34fa0934 [CPU] Re-enable Selective build on oneDNN2.6 (#12074)
* update submodule onednn26 selective build

* onednn code review

* merge onednn selective build

* fix bug in cc onednn26

Co-authored-by: zihan wu <zihan.wu@intel.com>
2022-07-08 03:48:12 +00:00
Taylor Yeonbok Lee
2500ad120e Revert the mmap-ed constants to the original buffer load (#12075) 2022-07-08 11:51:53 +09:00
guozhong wang
ab8c2f6fd8 change gpunum to 3 (#12073) 2022-07-07 18:15:27 +03:00
Andrew Kwangwoong Park
32937ab7ca Add Debug Config for maximum kernels per batch (#12068)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-07-07 14:26:51 +03:00
guozhong wang
cd6c7da91c AUTO/MULTI supports ov::auto_batch_timeout (#12023)
* add auto_batch_timeout for MULTI and AUTO

* fix clang-format for ie_core.cpp

* fix coredump

* simplify the insert-key-into-deviceConfig logic; parseDeviceNameIntoConfig() checks "AUTO" and "AUTO:" only

* check config auto_batch_timeout

* add CleanUpInIECore()

* fix clang-format for ie_core.cpp
2022-07-07 10:33:04 +00:00
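
A hedged sketch of passing the property that the entry above enables for AUTO/MULTI (the 100 ms value and the device string are assumptions; only ov::auto_batch_timeout itself comes from the commit title):

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        auto model = core.read_model("model.xml");  // placeholder path

        // AUTO/MULTI now accept the batching timeout property; the value is
        // the time (in ms) the batching queue waits to collect a full batch.
        auto compiled = core.compile_model(model, "AUTO",
                                           ov::auto_batch_timeout(100));
        return 0;
    }
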
Bonhun Koo
d5e8d1d968 Nm/outputs quantization scheme (#12035)
* [POT] outputs quantization scheme

* [POT] remove needless blank line

* add a range_estimator config for outputs

* Add fq_config_priority
2022-07-07 16:08:55 +09:00
Luwei Zhou
0224e6a067 Fix the deconv depwise post ops issue on AVX2 and AVX512 and enable deconv test (#11870)
* Fix the deconv fused issue on AVX2 and AVX512 and enable deconv test

* Keep GroupDeconv BF16 test cases still disabled.

* Update to also excluding nightly

* Update onednn submodule.

* Update onednn submodule

* Update onednn submodule.

* Update the ONDENN submodule

* Update the ONEDNN commit.

* Update with merged onednn commit.
2022-07-07 13:26:44 +08:00
RICKIE777
70d967ffb6 [C API] Enable hello_nv12_input_classification samples for C APIs of OV API 2.0 (#12031)
* Define new ppp API for nv12

* Add new ppp API function

* Add new ppp API unit test

* Add hello nv12 input classification ov

* Fix the clang-format issue

* Modify the function called is_supported_image_size

* Update code as suggested

* Add hello_nv12_input_classification e2e test

* clang-format openvinotoolkit

* Fix the doc error in CI

Co-authored-by: River Li <river.li@intel.com>
2022-07-07 11:36:55 +08:00
Xiping Yan
e8bd70f273 Add build flag for GCC. (#12017)
Some compiler flags prevent the compiler from making arbitrary decisions when handling undefined C/C++ behavior.

Therefore, they can be used to fix some issues caused by undefined behavior.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-07-07 10:02:04 +08:00
River Li
b80f724414 Fix rnn cache missing issue (#12053) 2022-07-06 11:20:27 +00:00
Kelvin Choi
63ab516c85 [GPU] Delete previous inputs by numbered new name for batching (#12045) 2022-07-06 16:32:14 +09:00
Bonhun Koo
1901087677 [POT] GNA - prevent an overflow in eltwise layer (#12048) 2022-07-06 16:15:13 +09:00
yanlan song
e718e51a85 Bell/fix lifecycle coredump (#11934)
* enable binder schedule

Signed-off-by: fishbell <bell.song@intel.com>

* add cases

Signed-off-by: fishbell <bell.song@intel.com>

* refine

Signed-off-by: fishbell <bell.song@intel.com>

* fix build failure

Signed-off-by: fishbell <bell.song@intel.com>

* fix coredump

Signed-off-by: fishbell <bell.song@intel.com>

* do not return hw requests directly, potential issues

Signed-off-by: fishbell <bell.song@intel.com>

* fix bug

Signed-off-by: fishbell <bell.song@intel.com>

typo

Signed-off-by: fishbell <bell.song@intel.com>

* optimize memory

Signed-off-by: fishbell <bell.song@intel.com>

* hold the hw plugin

Signed-off-by: fishbell <bell.song@intel.com>

* Revert "hold the hw plugin"

This reverts commit 5b537f5b6f.

* apply the fix

Signed-off-by: fishbell <bell.song@intel.com>

* hold the plugin library for destructing tensor

Signed-off-by: fishbell <bell.song@intel.com>

* solve the virtual plugin GetBlob life cycle issue

Signed-off-by: fishbell <bell.song@intel.com>

* remove log

Signed-off-by: fishbell <bell.song@intel.com>

* refine interface

Signed-off-by: fishbell <bell.song@intel.com>

* fix build failure

Signed-off-by: fishbell <bell.song@intel.com>

* fix for hetero plugin

Signed-off-by: fishbell <bell.song@intel.com>

* replace with vector

* enable life time tests for virtual plugins

Signed-off-by: fishbell <bell.song@intel.com>

rework cases due to vpux build issue

Signed-off-by: fishbell <bell.song@intel.com>

disable context test for now

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-07-06 05:21:17 +00:00
opoluektov-lohika
7a50ce2491 Coverity: fix issue with uninitialized members (#11996) 2022-07-05 23:55:53 +00:00
Mateusz Bencer
43c0c964b8 Added FoldSubgraphEmptyInputs transformation (#11957) 2022-07-05 19:38:46 +02:00
Maciej Smyk
d91a06ac08 Apache MXNet rename (#11871)
* MXNet

MXNet renaming into Apache MXNet

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* MXNet 2

* MXNet 3

* Revert "MXNet 3"

This reverts commit 046c25239d.

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-07-05 15:04:03 +02:00
Karol Blaszczak
f635621aab DOCS-return ONNX and MO_techniques articles (#11805)
* DOCS-return ONNX and MO_techniques articles

ONNX support article revised and changed to local tabs

* Update docs/MO_DG/prepare_model/Model_Optimization_Techniques.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/ONNX_Support.md

* Update docs/OV_Runtime_UG/ONNX_Support.md

* Update docs/OV_Runtime_UG/ONNX_Support.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-07-05 15:03:26 +02:00
Pawel Raasz
e1bcfeca9d Add SoftSign to CPU plugin (#12034) 2022-07-05 13:34:42 +02:00
River Li
177d977449 oneTBB support terminate tbb thread (#11972)
Change-Id: Iea618b72db193bd48bfbf0dba3586dcdb139c43f

Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-07-05 14:11:10 +03:00
Bartek Szmelczynski
82f691c38b [ONNX FE] Extend ONNX FE NMS-9 (#11790)
* update ONNX FE NMS to v9

* remove reshaping dynamic shapes

* fix style

* xfail two MSFT models
2022-07-05 12:10:29 +02:00
Sungeun Kim
654105d567 Set reorder format in reorder_inputs pass (#11992)
* Set reorder format in reorder_inputs pass
* set zyx_fsv16 formats if input is zyx_fsv4 formats
2022-07-05 18:20:01 +09:00
Tingqian Li
3e97d12fe2 [Transformation] transform ConvertLike into Convert when applicable (#7543)
* [Transformation] transform ConvertLike into Convert when applicable

Signed-off-by: Li, Tingqian <tingqian.li@intel.com>

* Remove xfail markup for onnx castlike cpu tests

Signed-off-by: Li, Tingqian <tingqian.li@intel.com>

* remove un-used xfail

Signed-off-by: Li, Tingqian <tingqian.li@intel.com>

* Enable ConvertLike cpu functional test

Signed-off-by: Li, Tingqian <tingqian.li@intel.com>

* remove unnecessary headers

* move to common transformation

Signed-off-by: Li, Tingqian <tingqian.li@intel.com>

* remove test

* fix pytest CI issue

* minor fix after rebase

* fix clang format check

* fix failed tests
2022-07-05 15:50:35 +08:00
Chenhu Wang
8c152405ad [CPU] General denormals optimization (#11883)
* FTZ_and_DAZ_set_for_cpu

* remove DAZ

* fix

* extract to utils

* ie core part changes to add denormals optimization as a property; benchmark_app enables it (see the FTZ/DAZ sketch after this entry)

* enable brgconv from Luocheng's patch

* add debug info

* enable_brgemm_on_avx512

* add python binding

* dlb test

* revert test code
2022-07-05 15:50:16 +08:00
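
For context, the FTZ/DAZ setting referenced in the bullets above amounts to flipping two MXCSR bits per thread. A standalone sketch using the standard SSE intrinsics (not the plugin's actual utility code; note the bullets say DAZ was later removed, so the plugin may set FTZ only):

    #include <pmmintrin.h>  // _MM_SET_DENORMALS_ZERO_MODE
    #include <xmmintrin.h>  // _MM_SET_FLUSH_ZERO_MODE

    // Treat subnormal floats as zero on the calling thread so they do not
    // trigger the slow microcode path on x86 CPUs.
    void enable_denormals_optimization() {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);         // FTZ: flush results to zero
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON); // DAZ: treat inputs as zero
    }
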
Kelvin Choi
186189ee09 [GPU][DOC] Change fixed input string name to get_friendly_name at yuv infer request (#12003) 2022-07-05 15:36:02 +09:00
Tingqian Li
3f9c6b2f3f [BUG fix] Reshape node: WA in-place failure case by mem-copy (#10828)
* Handle in-place failure cases in reshape node

* Disable inplace when non-const reshape connected to constant

* Add comment to reshape_inplace test

* move copy WA into execute() to cover more general in-place failure cases
2022-07-05 04:46:27 +00:00
Mang Guo
a571539107 Optimize FullyConnected FakeQuantize post-ops (#11819)
* Optimize FullyConnected FakeQuantize post-ops

* matmul bias fuse

* Add simplifyToScale for FakeQuantize and use it in FC and Conv.

* Add fakequantize documentation

* Update doc and fix accuracy issue

* Update doc

* Fix accuracy regression

* Generalize the judgment criteria for fake quantization with scale

* Update document

Co-authored-by: Zhang Yi3 <yi3.zhang@intel.com>
Co-authored-by: xuchen-intel <chen.xu@intel.com>
2022-07-05 09:39:42 +08:00
Luo Cheng
35ee842446 [CPU] [WA] Use config to enable brgconv f32 kernel (#11990)
* enable brgconv f32

* use config to enable brgconv f32

* when brg is disabled, do not init bin-postops

* change prop name for extensive

* use more general field

* fix review comments.
2022-07-05 07:14:40 +08:00
River Li
30c7f561e3 Add FORCE_TBB_TERMINATE to legacy API (#12022)
* Add FORCE_TBB_TERMINATE to legacy API

* Put this config into proper place

* fix issue in property test

Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-07-04 18:49:23 +03:00
Jan Iwaszkiewicz
e5a719480b [PyOV] Fix bugbear's B023 (#12040) 2022-07-04 15:57:36 +02:00
Felix Dohyun Kim
3334e8933c [GPU] Pass activation unit tests on DG2 (#11969)
* supprot bfwzyx format
* Add test: bfwzyx activation
* Make opt kernel support bfwzyx
2022-07-04 19:54:17 +09:00
Jade Cho
195f5df2e8 set zero-point as immediate value (#12002) 2022-07-04 19:53:50 +09:00
avoskoboinyk-lohika
88784c2b6f [CPU] Optimize NonZero operation (#11549)
* [CPU] Optimize NonZero operation

# Conflicts:
#	src/plugins/intel_cpu/src/nodes/non_zero.cpp

* [CPU] Rewrite NonZero implementation, so it will use generic ie_parallel API

* [CPU] NonZero operation: apply an additional optimization

* NonZero operation: add fallback code for inRank >= 6

* NonZero operation: apply review modifications

# Conflicts:
#	src/plugins/intel_cpu/src/nodes/non_zero.cpp

* NonZero operation: inShape.getDims().size() -> inRank

* NonZero operation: eliminate input array index calculation by slight modification of ie_parallel API

* Adjust ie_parallel.hpp style for clang-format

* Try to unbreak the build

* Move to parallel_nt and add a cache for nd loops to optimize more

* Add minimal size threshold for threading and reduce warning count

* Try to workaround linter errors

* One more try to unbreak cpplint build

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2022-07-04 10:52:18 +08:00
Mang Guo
d22c429d0e [CPU] Remove vmaxps in store_vector. (#12005)
* Remove vmaxps in store_vector.
This instruction is not needed for dst_prc int8,
and it may lead to a wrong result when denormals optimization is on.

* Add vpmaxsd if dst_prc is u8 or u16.
2022-07-02 13:22:05 +00:00
Oleksandr Kramskyi
dde3300cac [GPU] Add SoftSign_9 operation (#11795) 2022-07-01 15:22:17 +09:00
Mykhailo Hnap
e23a568b7a Added axes node validation to DFTs operations (#11814)
* Fix DFTs axes node validation.

* Add DFTs type prop tests for invalid nodes.

* Adjusted DFTs axes node validation.
2022-07-01 15:19:04 +09:00
Wang, Yang
8138e240a0 Remove the config key MULTI_WORK_MODE_AS_AUTO from the AUTO/MULTI plugin. (#12016)
Signed-off-by: Wang, Yang <yang4.wang@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-07-01 05:05:39 +00:00
Wang, Yang
378b3a2dca Enable default performance hint as tput in AUTO (#11848)
* Set the hint to tput if no property is specified for either the AUTO device or the target device.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* 1. Update logic.
2. Add test cases.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update. Set hints to the default for the target device if no hint is set for the AUTO plugin and no specific properties are set for the target device.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2022-07-01 02:49:20 +00:00
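
A sketch of overriding the new default from the entry above (the explicit-LATENCY example is an assumption; the PR itself only changes what happens when no hint is given):

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        auto model = core.read_model("model.xml");  // placeholder path

        // With this PR, AUTO defaults to the THROUGHPUT hint when nothing is
        // specified; an explicit hint still takes precedence.
        auto compiled = core.compile_model(
            model, "AUTO",
            ov::hint::performance_mode(ov::hint::PerformanceMode::LATENCY));
        return 0;
    }
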
Taylor Yeonbok Lee
1872e05375 Fix wrong output type propagation for gather elements (#12013) 2022-06-30 17:22:38 +00:00
Tomasz Dołbniak
5a5c404f13 Hello reshape SSD sample fix (#12004) 2022-06-30 13:35:14 +03:00
Bo Liu
7834dba545 fix CPU Plugin deformable conv Node output incorrect issues with uneven dilations (#11940) 2022-06-29 18:14:30 +08:00
Min, Byungil
730c3f8f25 [GPU] Update Debug config for GPU plugin (#11983)
+ Added OV_GPU_DumpLayersResult
+ Applied minor update

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-06-29 11:41:13 +03:00
Karol Blaszczak
563d4f16e6 DOCS-nncf_rephrasing (#11997) 2022-06-29 08:54:05 +02:00
Wonju Lee
62d5f9a006 [POT] Rename ranger to range_supervision (#11998)
* fix: rename ranger by range_supervision

* ci: dummy commit
2022-06-29 15:27:57 +09:00
stephenli2000
4125d71ce8 setupvars.sh: Removing extra semicolon, which breaks glibc build (#11849)
This extra semicolon creates output like the example below. The extra
'::' is equivalent to adding '.' to the LD_LIBRARY_PATH. This
breaks the glibc build and very often creates weird issues when launching
commands from a different path.

...inference_engine/external/tbb/lib::/opt/intel/openvino_2021/...

We also noticed that :${parameter:+:$parameter} is widely used in
this file. Please review the code and fix as needed.
2022-06-29 08:11:24 +02:00
Vladyslav Tsilytskyi
72505b1d82 Use 2020-resolver feature only for pip3 < 20.3.0 (#11926)
Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
2022-06-28 12:05:13 +02:00
Katarzyna Mitrus
674ca1ccc2 [ONNX] Enable Concat with empty tensor/scalar (#11979) 2022-06-28 08:34:10 +02:00
Chenhu Wang
1288706589 large_batch_opt (#11951) 2022-06-28 10:33:16 +08:00
Karol Blaszczak
a4e6cda7e8 DOCS-restore_gsearch_comma (#11980) 2022-06-27 23:27:12 +02:00
Mateusz Mikolajczyk
a70bbad988 [PyOV] Fix loading data to Tensor from array-like objects (#11974) 2022-06-27 16:26:45 +02:00
opoluektov-lohika
8a21e4e062 [GPU] Implement ExperimentalDetectronDetectionOutput operation (#11772)
* ExperimentalDetectronDetectionOutput: refine sorting criteria for NMS stage

This is to ensure the operation produces stable predictable results across
the possible sorting algorithm implementaions.
This property is useful for the operation testing.

* [GPU] Implement ExperimentalDetectronDetectionOutput operation

* [GPU] ExperimentalDetectronDetectionOutput: use vector types and operations in kernel

* Reformat changed files to make clang format checker happy

* [GPU] ExperimentalDetectronDetectionOutput: add another test case to the unit test

* [GPU] ExperimentalDetectronDetectionOutput: Add f16 test

* ExperimentalDetectronDetectionOutput: single-layer test: use all three outputs

* [GPU] ExperimentalDetectronDetectionOutput: increase single layer test coverage

More attribute permutations were added.
2022-06-27 23:11:03 +09:00
Chenhu Wang
95a297ed68 onednn_update (#11930) 2022-06-27 11:22:50 +00:00
Inhyuk Jo
4d741703d1 [POT] Fix dynamic shapes and batchifying data (#11923)
* fix: batchify data by sampler or data loader

* feat: add batched sampler

* docs: update doc
2022-06-27 15:18:36 +09:00
Luwei Zhou
4be0c59505 Fix the Non_Zero childedge check. (#11963) 2022-06-27 10:43:38 +08:00
guozhong wang
44da3f06c4 add tag for log (#11887)
* add tag for log

* cumulative log tag output AUTOPLUGIN

* add use comment for log_xxx_tag and use AUTO OR MULTI for log tag
2022-06-26 15:10:45 +08:00
guozhong wang
c0a2c98a45 add testcase for plugin properties should not be revised by compile_m… (#11842)
* add testcase for plugin properties should not be revised by compile_model

* rename smoke_cpuCompileModelBehaviorTests to smoke_gpuCompileModelBehaviorTests

* remove property EXCLUSIVE_ASYNC_REQUESTS in ov2.0 test

* add testcase for plugin properties should not be revised by loadNetwork
2022-06-25 16:37:43 +08:00
River Li
2bbd2b1990 Fix coverity issue in executorManager (#11964)
1. fix coverity issue
2. avoid oneTBB build error due to API differences from TBB

Change-Id: I0339446e33186e0ce57de07aa8492186f2f6e369
2022-06-25 10:22:34 +08:00
Wang, Yang
bd04dc1ecf fix compiled-model failure when setting config with ov device properties (#11793)
* 1. Enable the IE Core filter to promote the secondary properties to the first level for hardware devices.
2. Enable the IE Core filter to pass the secondary properties to the AUTO plugin.
3. Enable the AUTO plugin to parse secondary properties to the first level and pass them to the corresponding target hardware device.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* 1. Enable MULTI Plugin to support secondary properties.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* 1. Enable HETERO Plugin to support secondary priorities.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Catch the EXPECT_CALL with AVAILABLE_DEVICES argument inputting to GetMetric.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Revert the logic of handling secondary properties for MULTI and HETERO device.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Remove the secondary property flattening logic because this logic has been implemented within AUTO plugin.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* 1. update flatten logic when secondary properties are specified.
2. add the test case with secondary properties for CPU.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* add the test case with secondary properties for GPU plugin.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add debug message to fix the test case failure issue.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add more debug info.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.
1. For IE Core, a 1st-level property overrides the 2nd-level property.
2. For AUTO plugin, add an available-device list to check if the secondary properties are valid.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add CUDA and ARM.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update device name for ARM Plugin and add device name for HPU plugin.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-06-25 10:17:30 +08:00
Karol Blaszczak
23b0ba6898 Update integrate_with_your_application.md (#11833) 2022-06-24 13:26:06 +02:00
Kevin Putnam
f19b4c7ae4 Puts page switch parameters in alphabetic order to support S3 (#11960)
Signed-off-by: intelkevinputnam <intelkevinputnam@github.com>

Co-authored-by: intelkevinputnam <intelkevinputnam@github.com>
2022-06-24 11:59:10 +02:00
Wang, Yang
30bd4a905e Support ov::device::capabilities for AUTO plugin. (#11925)
* 1. Enable OPTIMIZATION_CAPABILITIES for AUTO plugin.
2. Add corresponding test case.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Remove EXPORT_IMPORT as Export is not implemented in the AUTO/MULTI.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2022-06-24 10:14:34 +08:00
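
A sketch of querying the capability list that AUTO reports after the entry above (the printing loop is illustrative; ov::device::capabilities is the property named in the commit title):

    #include <iostream>
    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // The returned strings depend on the underlying hardware devices
        // that AUTO aggregates.
        auto caps = core.get_property("AUTO", ov::device::capabilities);
        for (const auto& cap : caps)
            std::cout << cap << '\n';
        return 0;
    }
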
Yuan Xu
079d1d6e74 fix the link to VPU extensibility article (#11956) 2022-06-23 11:24:32 +00:00
Yuan Xu
5008ee8090 Troubleshooting guide update (#11896)
* Add Overview page

* Revert "Add Overview page"

* fix errors & formatting

* fix article usage according to the styles

* fix errors

* update according to PXT comments

* CVS-80775

* update support matrix with Python version

* fix formatting

* fix formatting

* CVS-71745

* update formatting

* fix formatting

* fix formatting

* fix links & errors

* fix formatting

* update bullet points

* update

* adjust the order

* update

* update

* updates

* update references

* update

* update

* apply same updates with 22/1

* minor fix

* update reference link

* fix CVS-71846

* test

* add troubleshooting steps

* restructure get started home page

* update navigation menu

* update formatting

* fix mistakes

* update wording

* update

* rename configurations files

* update wording

* adjust the structure

* update formatting

* reverse the heading

* test with formatting

* 2nd version of Get Started homepage

* add line breaks

* change to ordered list

* update wording

* update content

* updates

* update DL workbench reference

* update wording

* update references to pip installations

* remove redundant files

* update headings

* update

* update

* restructure

* rename

* updates

* remove a comment

* correct grammar

* fix formatting

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* integrating comments

* update

* update

* correct an error

* update

* typo

* hiding CentOS issues

* update verification steps

* to show one change

* to show the change

* add comments

* update comments

* revert the changes

* update formatting

* test formatting

* update code formatting

* update formatting

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* update content, remove some comments

* update Python installation info

* update formatting

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Ryan Loney <ryanloney@gmail.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Ryan Loney <ryanloney@gmail.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Ryan Loney <ryanloney@gmail.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Ryan Loney <ryanloney@gmail.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Ryan Loney <ryanloney@gmail.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Ryan Loney <ryanloney@gmail.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Ryan Loney <ryanloney@gmail.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Ryan Loney <ryanloney@gmail.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Ryan Loney <ryanloney@gmail.com>

* update wording

* test formatting

* update formatting

* update formatting

* fix formatting

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
Co-authored-by: Ryan Loney <ryanloney@gmail.com>
2022-06-23 10:22:30 +00:00
Karol Blaszczak
1341c0d6f4 Docs doc structure step 1 (#11922)
* DOCS-structure_workflow
workflow diagram files and formatting
added overview articles on models and deployment
added the ecosystem page and changed the header from addons

* DOCS-structure_dlworkbench
* DOCS-structure_ovtf
2022-06-23 10:38:08 +02:00
Artur Kulikowski
7e7bb37fe3 Fixed FakeOutputResolver to avoid renaming correctly named nodes (#11938)
* fixed FakeOutputResolver to avoid renaming correctly named nodes

* fixed failed mo_args test: process reverse_input_channels through eltwise with constant with shape=[]

* changed fix to be more accurate to avoid possible issues

* Remove unnecessary iterating over producer outputs

Co-authored-by: sadolini <svetlana.a.dolinina@intel.com>
2022-06-23 08:05:06 +00:00
RICKIE777
ebeb0a3802 Add hello_classification_ov_c test (#11933)
Co-authored-by: River Li <river.li@intel.com>
2022-06-23 15:08:53 +08:00
Paul Youngsoo Ahn
11a9888c3f [GPU] Fix coverity issues(#11876) (#11876)
- CID: 1489915
2022-06-23 07:21:05 +03:00
River Li
b490ef545f Property to terminate tbb threads (#11650)
* Property to force terminate tbb threads

After inference is done, TBB threads cannot close by themselves, which causes memory leaks and unloaded/lingering threads.
Sometimes the TBB threads need to be terminated to free resource (memory, thread) consumption

This PR contains:
1. Add a new property to control whether to force-terminate TBB threads.
2. The property key is "FORCE_TBB_TERMINATE"; the default value is false.
3. Explicitly terminate the TBB task scheduler while unloading the OpenVINO DLL if this property is set to true,
    e.g.: core.set_property(device, ov::force_tbb_terminate(true));
4. If FORCE_TBB_TERMINATE is not set, there are no additional TBB operations (see the usage sketch after this entry).

Change-Id: I32dc0ba122bb19a9dbf3ba12fdd596aad9ac54b4

* Fix executorManager test case

Change executorManager from static to dynamic; the test case should reflect this change.

* Change frontendManager to be a non-static instance

Make frontendManager a non-static instance.
We should guarantee it is not released before Model, because Model uses memory allocated by frontendManager.
So put a frontendManager reference in ov::Model to make it work.

* Fix race condition between executor and executorManger

* Add test case for tbb property

1. Add basic test case for ov::force_tbb_terminate property
2. set ov::force_tbb_terminate to be false

* Avoid terminate tbb in case of no tbb thread created

* Fix Constant ops segmentation fault issue

There is a segmentation fault during Constant destruction, caused by shared memory being freed twice.
The test case is:
        ie = IECore()
        net = ie.read_network(model=test_net_xml, weights=test_net_bin)
        query_res = ie.query_network(net, device)
        func_net = ng.function_from_cnn(net)
        ops_net = func_net.get_ordered_ops()
ie and net are released before ops_net destruction, so Constant frees shared memory that has already been freed

* Make constant::m_data is released before frontendmanager

* tiny format change

* change tbb blocking_terminate to terminate

Calling TBB blocking_terminate causes segmentation faults when running some models;
the reason may be that blocking_terminate blocks the current thread to wait for TBB to exit
but cannot handle some resource dependencies.
After adopting terminate(), the dependencies can be resolved and there is no segmentation fault anymore.

Change-Id: I0b920630a25cd3fd2747c57ec71ca749ba35573b

* Remove unnecessary dependencies

* Disable dynamic lib test case in static library compilation version

As described in CVS-68982, we should disable test cases that load a
dynamic library in the OpenVINO static-library build.

* Fix nested-namespace-definition issue

* Address reviewer's comments
2022-06-23 03:20:15 +00:00
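
The usage sketch referenced in the PR notes above; the "CPU" device string and the surrounding scaffolding are assumptions, while the set_property call itself is quoted from the commit message:

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // Opt in to force-terminating TBB worker threads on unload; the
        // default is false, so nothing changes unless the property is set.
        core.set_property("CPU", ov::force_tbb_terminate(true));

        auto model = core.read_model("model.xml");  // placeholder path
        auto compiled = core.compile_model(model, "CPU");
        // ... run inference; TBB threads are terminated when OpenVINO unloads.
        return 0;
    }
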
Tomasz Jankowski
bf2bd624b6 Limit ONNX version (#11949)
OV does not currently support opset 17, introduced in the onnx 1.12 release.
2022-06-22 22:26:47 +02:00
Bartek Szmelczynski
1543e35b1b [MO] update ROIAlign op to opset9 (#11623) 2022-06-22 10:58:15 +02:00
Adam Tumialis
4426aa58e2 Update windows.yml (#11943)
Temporary change for Win build timeout in Azure.
2022-06-22 10:30:33 +02:00
Roman Baranchuk
5cba0ae871 [CPU] GRN: dynamic shapes support (#11678) 2022-06-22 10:45:06 +08:00
Roman Baranchuk
dab9da25fa [CPU] Roll: dynamic shapes support (#11707) 2022-06-22 10:33:18 +08:00
Roman Baranchuk
44cecc8579 [CPU] ReverseSequence: dynamic shapes support (#11644) 2022-06-22 10:27:06 +08:00
Tomasz Jankowski
c8ced8728e [Core] Fix GridSample assertion (#11924)
* Fix GridSample assertion

* Avoid ONNX opset 17 backend tests
2022-06-21 23:28:21 +02:00
Krzysztof Bruniecki
d25b8466f6 [GNA] Fix and extend removed logging capabilities (#11880)
* [GNA] Fix and extend warning and debug log levels using speech_sample

* [GNA] Cleanup and apply const correctness

* Apply review
2022-06-21 09:57:44 +02:00
Yuan Xu
dc6e5c51ee update pypi installation for pypi.org (#11907)
* update pypi installation for pypi.org

* Update docs/install_guides/pypi-openvino-dev.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/pypi-openvino-rt.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/pypi-openvino-rt.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/pypi-openvino-dev.md

* update

* remove comment

* remove comment

* hide Visual Studio error

* update wording

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-06-20 17:06:49 +08:00
Katarzyna Mitrus
217b39e76b [MO] Name fix for mxnet Eye conversion (#11903)
* Fix mx eye op name string

* Enable default for num_columns 0
2022-06-20 07:03:11 +02:00
yanlan song
7f435059c9 Bell/fix single device coredump (#11895)
* enable binder schedule

Signed-off-by: fishbell <bell.song@intel.com>

* add cases

Signed-off-by: fishbell <bell.song@intel.com>

* refine

Signed-off-by: fishbell <bell.song@intel.com>

* fix build failure

Signed-off-by: fishbell <bell.song@intel.com>

* fix coredump

Signed-off-by: fishbell <bell.song@intel.com>

* do not return hw requests directly, potential issues

Signed-off-by: fishbell <bell.song@intel.com>

* fix bug

Signed-off-by: fishbell <bell.song@intel.com>

typo

Signed-off-by: fishbell <bell.song@intel.com>

* optimize memory

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-06-20 10:10:45 +08:00
Wilson Seok
477bfa1c09 [GPU] fix kernel selection validation in deconv kernel fsv16 for fs=8 input (#11909)
* fix kernel selection validation in deconv kernel fsv16 for fs=8 input
* apply the WA for Windows only
* Narrow down condition of ref kernel selection
2022-06-20 10:34:45 +09:00
River Li
ce5b2c6a45 Refine ov_partial_shape for OV API 2.0 C interface (#11891)
* Refine ov_partial_shape for OV 2.0 C interface

To avoid a potential string security problem, remove the string pointer from the ov_partial_shape structure.

* Remove redundant code

* fix typo issue

* fix shape test issue

* fix some minor issues

* Address review comments

Use Dimension to represent the rank of a partial shape.

* Apply safer method to parse partialShape string

1. adopt ov::Dimension::value_type to construct ov::Dimension
2. safer method to convert string to dimension value
3. apply std::vector<std::string> to replace std::vector<char *> during parsing of the partialShape string

Change-Id: I0e0b70a915fc5c5fefad51de51f167798854f55e
2022-06-20 08:22:39 +08:00
Xiping Yan
870f84f19b Xp/maxnick in place fix 43602 (#11664)
* Convolution concat sum inplace conflict fix

* Minor refactoring.

* Rebase to OV2.0, build pass.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Remove old file.
The rebase introduced this file by mistake.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Move functional test for subgraph.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Disable some crash test for continue to test others.

* Rename ConcatConvSumInPlaceTest to ReLuConcatConvSumInPlaceTest
fix ci crash issue.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* Revert "Disable some crash test for continue to test others."

This reverts commit f7a8677c002747b45e84f74672f76e2fdfc7ab22.

* Add const for inPlace.

Signed-off-by: Yan, Xiping <xiping.yan@intel.com>

* fix build issue, missing braces;

Co-authored-by: Maksim Kutakov <maksim.kutakov@intel.com>
2022-06-17 16:35:58 +08:00
Min, Byungil
d3ca2f8cf1 [GPU] Add functionality of debug config ForceImpltype (#11884)
+ Force one layer using id
+ Add do, reduce to the primitive list

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-06-17 15:03:34 +09:00
Kelvin Choi
49c1a25e0f [GPU] Add deeper case handling for ConvertColor op (#11874) 2022-06-17 09:15:17 +09:00
Yuan Xu
d7b8d80a61 Cumulative throughput 2022.2 (#11831)
* Add Overview page

* Revert "Add Overview page"

* update auto with cumulative throughput

* update formatting

* update formatting

* update content

* update

* fix formatting

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Chen Peter <peter.chen@intel.com>

* update

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Chen Peter <peter.chen@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

* Update docs/OV_Runtime_UG/auto_device_selection.md

* update indentation of table

Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-06-16 16:23:10 +00:00
Tingqian Li
2fec03024d Add signal stack management for AMX in linux python API (#11894)
* Add signal stack management for AMX in linux python API

* fix wording

* fix empty line

* add AT_MINSIGSTKSZ definition

* Fix misspelling and conditional compilation on __linux__
2022-06-16 20:17:05 +08:00
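
For context, the general technique this commit applies: AMX tile state inflates the signal frame beyond the historical SIGSTKSZ, so an alternate signal stack sized from the AT_MINSIGSTKSZ auxval entry is installed. A minimal Linux sketch of that idea, not the actual patch; the fallback define mirrors the "add AT_MINSIGSTKSZ definition" step for older headers:

```cpp
#include <algorithm>
#include <cstdlib>
#include <signal.h>
#include <sys/auxv.h>

#ifndef AT_MINSIGSTKSZ
#define AT_MINSIGSTKSZ 51  // auxv entry id; define it when libc headers predate it
#endif

// Install an alternate signal stack large enough for the AMX register state,
// which can exceed the legacy SIGSTKSZ on CPUs with large extended state.
void install_alt_signal_stack() {
    std::size_t sz = std::max<std::size_t>(
        static_cast<std::size_t>(getauxval(AT_MINSIGSTKSZ)),
        static_cast<std::size_t>(SIGSTKSZ));
    stack_t ss{};
    ss.ss_sp = std::malloc(sz);
    ss.ss_size = sz;
    ss.ss_flags = 0;
    sigaltstack(&ss, nullptr);
}
```
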
guozhong wang
164a59925a ctput enable perf_count (#11854) 2022-06-16 15:44:35 +08:00
Tomasz Jankowski
f8c4e736b4 [Core] GridSample operator reference implementation (#11841) 2022-06-16 08:47:29 +02:00
mei, yang
39981bf2b8 relax the class number check in paddle multiclass_nms op (#11857)
* relax the class number check in paddle multiclass_nms op

* relax checks in paddle multiclass_nms op
2022-06-16 11:29:15 +08:00
Yuan Xu
54fe2d1a3f Fix yum code format (#11902)
* fix formatting

* update formatting
2022-06-16 09:52:32 +08:00
Bonhun Koo
e5ddce54f6 Add an option to store the quantized weights as INT8 on the GNA speech sample (#11838) 2022-06-16 09:51:28 +09:00
Przemyslaw Wysocki
09a0fb7890 [PYTHON] Create graph and generate image in tests (#11569)
* Change read_image() into generate_image()

* Move test utils from testdata repo to local files

* Minor changes

* Remove unnecessary code

* Minor changes

* Fix compatibility tests

* Fix imports for Azure pipeline

* Move model generation into test_utils

* Minor changes

* Minor changes

* Update linux.yml CI

* Remove testdata repo from .ci/linux.yml

* Remove testdata repo from pipelines

* Fix Azure compatibility tests

* Reset linux.yml

* Remove testdata repo from linux CI

* Try eliminating one of configs

* Attempt at fixing Azure tests

* Add separate utils for compatibility

* xfail comp if op tests

* Minor changes

* Revert changes to .ci files

* minor changes

* Remove xfails

* Remove unnecessary import

* Skip if op tests

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2022-06-16 00:31:05 +03:00
Helena Kloosterman
7114863cbc Update date in MO update message (#11834) 2022-06-15 16:53:45 +02:00
Tomasz Dołbniak
eb2fb5bf7d Additional tensor names for ONNX variadic ops (#11893) 2022-06-15 13:22:59 +02:00
Szymon Irzabek
9f7d08154d [GNA] Decompose 2D convolution fix (#11818) 2022-06-15 12:39:22 +02:00
yanlan song
6d903d4376 optimize sample (#11864)
Signed-off-by: fishbell <bell.song@intel.com>

optimize sample

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-06-15 13:38:10 +03:00
Bartek Szmelczynski
6033e52dd9 Remove set_from from samples, update docstrings (#11889) 2022-06-15 12:10:00 +02:00
OpenVINO-dev-contest
594c3dac49 Top k v2 (#11731)
* add paddle op top_k_v2

* rebase

* fix variable support issue for paddle top_k_v2

* Update src/frontends/paddle/src/op/top_k_v2.cpp

Co-authored-by: Bo Liu <bo4.liu@intel.com>

* Update src/frontends/paddle/src/op/top_k_v2.cpp

Co-authored-by: Bo Liu <bo4.liu@intel.com>

* Update src/frontends/paddle/src/op/top_k_v2.cpp

Co-authored-by: Bo Liu <bo4.liu@intel.com>

* format the top_k_v2.cpp

Co-authored-by: meiyang-intel <yang.mei@intel.com>
Co-authored-by: Bo Liu <bo4.liu@intel.com>
2022-06-15 17:59:48 +08:00
Artur Kulikowski
6fdbbe03f5 Remove protobuf requirements in python bindings (#11886) 2022-06-15 10:52:33 +02:00
Yuan Xu
7963ba20f4 Update Get Started Guide structure (#11875)
* Add Overview page

* Revert "Add Overview page"

* fix errors & formatting

* fix article usage according to the styles

* fix errors

* update according to PXT comments

* CVS-80775

* update support matrix with Python version

* fix formatting

* fix formatting

* CVS-71745

* update formatting

* fix formatting

* fix formatting

* fix links & errors

* fix formatting

* update bullet points

* update

* adjust the order

* update

* update

* updates

* update references

* update

* update

* apply same updates with 22/1

* minor fix

* update reference link

* fix CVS-71846

* test

* add troubleshooting steps

* restructure get started home page

* update navigation menu

* update formatting

* fix mistakes

* update wording

* update

* rename configurations files

* update wording

* adjust the structure

* update formatting

* reverse the heading

* test with formatting

* 2nd version of Get Started homepage

* add line breaks

* change to ordered list

* update wording

* update content

* updates

* update DL workbench reference

* update wording

* update references to pip installations

* remove redundant files

* update headings

* update

* update

* restructure

* rename

* updates

* remove a comment

* correct grammar

* correct grammar

* update structure

* update headings

* restructure

* fix formatting

* change the capitalization

* update heading

* update PyPI install

* updates

* update formatting

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/install_guides/troubleshooting-steps.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* integrating comments

* update

* update

* correct an error

* correct an error

* update

* update

* update wording

* typo

* typo

* hiding CentOS issues

* update headings

* update heading

* Update docs/get_started/get_started_demos.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/install_guides/pypi-openvino-dev.md

* Update docs/install_guides/pypi-openvino-dev.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-06-15 05:59:17 +00:00
Krzysztof Bruniecki
5939cb1b36 [GNA] Use node name as tensor name on import previous format where tensor n… (#11826) 2022-06-14 17:16:59 +02:00
Sungeun Kim
a454b7371e add zyx_fsv32 format for onednn deconv (#11873) 2022-06-14 22:04:29 +09:00
Krzysztof Czugala
25e002808a Change pyenchant version to 3.0.0 (#11735)
* Change pyenchant version to 3.0.0

* Change pyenchant version to 3.0.0
2022-06-14 07:56:33 +02:00
Krzysztof Czugala
15d64d25af removal of compatibility tests executing models with dynamic shapes (#11751) 2022-06-14 07:55:24 +02:00
Taylor Yeonbok Lee
c73201c9e6 Optimize memory dependency analysis (Constant memory does not use pool: No need to add constant nodes to deps) (#11861) 2022-06-14 13:46:24 +09:00
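
A toy illustration of the optimization named above, with hypothetical names (Node, deps): constant buffers are not allocated from the memory pool, so they can never alias a pooled buffer and can be dropped from the memory-reuse dependency set.

```cpp
#include <string>
#include <vector>

// Illustrative only: Node and its fields are assumed names.
struct Node {
    std::string id;
    bool is_constant = false;
    std::vector<const Node*> deps;
};

std::vector<std::string> memory_deps(const Node& n) {
    std::vector<std::string> out;
    for (const Node* d : n.deps)
        if (!d->is_constant)        // constant memory is not pooled
            out.push_back(d->id);
    return out;
}
```
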
Luo Cheng
cae0c924b6 [CPU] [Test] Eye testcase does not populate all parameters (#11869) 2022-06-14 09:32:56 +08:00
Luo Cheng
151d77062f [CPU] remove unused primitive (#11811)
* remove unused primitive

* update onednn commit
2022-06-14 06:19:05 +08:00
Zhang Yi
209331d9df Pr/8669 (#11840)
* Added tests

* Apply comments

* Update

* Apply comments

* Fixed remaining comments

* Use ov::test::SubgraphBaseTest

Co-authored-by: Egor Shulman <egor.shulman@intel.com>
2022-06-13 20:25:59 +08:00
Tomasz Dołbniak
8603acecba Dont use moved-from object (#11859) 2022-06-13 10:13:47 +00:00
Katarzyna Mitrus
421520bda0 Handle error from propagate_rt_info (#11773) 2022-06-13 11:11:30 +02:00
Luo Cheng
922e32e2f1 disable avx512 brgconv (#11866) 2022-06-13 17:10:42 +08:00
cecilia peng
d91b8bd17c disable random unit tests for multiclass nms and matrix nsm. (#11839)
They sporadically impact CI. A possible reason is that the ordering between Paddle and OpenVINO results is not guaranteed when more than one bbox has the same score.
There is no need for these random tests, as the remaining cases already cover them.
2022-06-13 16:43:57 +08:00
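
The instability described above comes from equal scores: two implementations may order tied boxes differently. If determinism were wanted instead of disabling the tests, one standard remedy is a stable sort with an explicit tie-break on box index. A sketch with a hypothetical Box type:

```cpp
#include <algorithm>
#include <vector>

struct Box {
    float score;
    int index;  // original position, used only to break score ties
};

void sort_deterministically(std::vector<Box>& boxes) {
    std::stable_sort(boxes.begin(), boxes.end(), [](const Box& a, const Box& b) {
        if (a.score != b.score)
            return a.score > b.score;  // higher score first
        return a.index < b.index;      // deterministic order for equal scores
    });
}
```
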
Bartek Szmelczynski
93839f8379 [PYTHON API] Check new config (#11402)
Improve code-style in Python API and reduce number of ignored hints.

Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2022-06-13 10:21:04 +02:00
Luo Cheng
9fe27be1cb [CPU] Fix smoke_Conv_3D_FP32_fusingScaleShiftAndFakeQuantizePerChannel sporadic failure (#11813)
* fix smoke_Conv_3D_FP32_fusingScaleShiftAndFakeQuantizePerChannel sporadic failure

* rebase onednn
2022-06-13 15:29:20 +08:00
Luo Cheng
91216fef5a [CPU] Revert enable ReduceSum -> AvgPool transformation due to perf issues (#11865)
* disable ConvertReduceMeanToPooling

* recover testcase
2022-06-13 14:11:53 +08:00
River Li
c4fdcafa70 Enable unit test for OV 2.0 C API (#11828) 2022-06-13 13:37:59 +08:00
River Li
0571124fd3 Fix CC issues for transformation and snippets (#11798)
* Fix CC issues for transformation and snippets

Matcher should be enabled if it was hit during the analysis stage.

* Fixed 3 naming issue
2022-06-13 13:36:35 +08:00
Luo Cheng
c848e138f8 [CPU] cherry-pick: Fix possible data race when accessing global reorder list (#11829)
* [CPU] cherry-pick: Fix possible data race when accessing global reorder list

* rebase onednn
2022-06-13 13:11:53 +08:00
Eddy Kim
3961a241cd Revert "Disable fc_int8_inputs_fused_fp32_sum (#11709)" (#11863) 2022-06-13 13:25:59 +09:00
Sieun Kim
6e3dd4adce Update scatter nd update kernel to support blocked_formats (#11533)
* draft pr for planar and fsv16

* draft pr for general test

* update fusion test (failing)

* update fusing test (pass)

* update fusing test (include exception)

* clean gpu unit test

* review comment applied

* unit test cases added & cpplint applied

* cpplint error fixed

* change gpu test cases for fp16

* fusing test fix generate_unique_indices

* fix typo

* revise the CL kernel for cases when the updates shape is altered
2022-06-13 13:25:24 +09:00
Paul Youngsoo Ahn
ca7ddae9ba [GPU][Coverity] Fix coverity defects (#86191) (#11852)
- CID 1486993 Big parameter passed by value (PASS_BY_VALUE)
- CID 1489431 Uninitialized scalar variable
- CID 1489432 Uninitialized scalar variable
2022-06-13 02:51:45 +00:00
Sungeun Kim
9bda2bd580 lws quantize/eltwise (#11851) 2022-06-13 11:18:49 +09:00
Felix Dohyun Kim
d831047f30 [GPU] Optimize DepthToSpace (#11761)
* lws optimize
2022-06-13 11:17:22 +09:00
Luwei Zhou
0066ddbd22 Update onednn submodule hash to fix 3D deconv post-ops issue. (#11836) 2022-06-13 09:21:29 +08:00
Luwei Zhou
c73f6576e0 Luwei/extend reorder test (#10003)
* Extend the reorder unit test.

* Update CMake

* fix some issues.

* Update

* Update

* Update

* Update

* Update and fix an issue caused by the input port config only supporting NCSP.

* Update Copyright

* Add more tests

* Apply review comments.

* Update

* Update

* Fix building error.

* Applied review comments.

* Update

* Update

* Update

* Fix CI

* Update

* Update Cmake
2022-06-11 21:09:26 +08:00
Mateusz Bencer
8e9eaaee91 [FakeQuantize] Use shape from HostTensor instead of FakeQuantize* (#11843) 2022-06-10 17:57:03 +03:00
mei, yang
38a81ec486 update and reorder supported paddle op list (#11601) 2022-06-10 18:46:11 +08:00
guozhong wang
408bdc9f81 add single IECore for core_threading test (#11796) 2022-06-10 08:25:12 +03:00
Bo Liu
79d3fbe3c1 remove limitation usage of brgemm for 'FullyConnected' Node (#11783) 2022-06-10 10:19:41 +08:00
Chenhu Wang
1066d4551f fix_nms_ops_transformation (#11794)
* fix_nms_ops_transformation

* replace node when 5-9
2022-06-10 10:18:28 +08:00
Chenhu Wang
604dc4589c [CPU] Deconvolution caching support (#11835)
* Deconvolution caching support

* get rid of deprecated name

Co-authored-by: mandrono <maxim.andronov@intel.com>
2022-06-10 10:17:59 +08:00
Chenhu Wang
e2e7417c2a load_store_emitters_optimization_and_apply_to_interpolate (#11742)
* load_store_emitters_opt_and_apply_to_interpolate

* zmm_zero_is_always_needed_on_all_platform
2022-06-10 10:17:29 +08:00
opoluektov-lohika
d87233863d Fix experimental detectron do ref impl (#10621) 2022-06-10 03:10:13 +03:00
Tomasz Dołbniak
0932c74ff8 GridSample operator (#11770) 2022-06-09 14:21:53 +00:00
Andrew Kwangwoong Park
5fa669785c [GPU] Update reorder_inputs for data type conversion on fc layer (#11832)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-06-09 15:18:28 +09:00
River Li
8faf8f2d89 OV 2.0 C API (#11700)
* Initial files & cmakefiles for ov 2.0 c api development

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Add all ov 2.0 C APIs define

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Fix review comments

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Disable test of OV 2.0 C APIs test for tmp

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Add related property key for ov 2.0 C-API

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Add description for ov_property_key_e

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Add exception handling

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* add compiledModel interface

* add inferrequest interface

* solve cpplint problem

* Finished OV 2.0 C-APIs PPP related development

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Fix code review issues

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Add ov::tensor API

* add compiled model func

* Finished C-API funs about core, model, node development

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [OV 2.0 C-API] add const to ov_output_node

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [OV 2.0 C-API] Using define GET_OV_ELEMENT_TYPE & GET_CAPI_ELEMENT_TYPE in tensor APIs

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [OV 2.0 C-API] add string initialize

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* add inferrequest func

* add move construction to runtime_model

* supplement two infer request interface functions

* [OV 2.0 C-API] Add the common framework of unit tests

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* modify ov_infer_request_get_profiling_info

* add tests dir

* restore CMakeLists.txt

* Fix the bug of COPY in Tensor

* [OV 2.0 C API] Finished core related function unite test

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Add ov:Tensor API test

* [OV 2.0 C API] fix some review issues

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* add some infer request test

* add compiled model test

* [OV 2.0 C API] Finished preprocess related function unite test

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [OV 2.0 C API] Fix review issues

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [OV 2.0 C API] Modify to use default model

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* change device_name from a fixed value to a parameter

* add some infer request test

* remove compiled model get_property test

* add infer request tests

* Add ov::model Test and modify Tensor Test name

* Determine whether partial shape meets the standard

* Add get tensor name function and Modify reshape test case

* modify fixed tensor name, remove unnecessary comparison

* add ov_model_get_nodes_info, modify according to comments

* Update reshape test

* extract common function, modify interfaces for getting tensor name, shape and type

* modify according to comments

* [OV 2.0 C API] Finished hello classification with ov 2.0 c-api development

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [OV 2.0 C API] Fixed hello classification with ov 2.0 c-api review issues

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [OV 2.0 C API] delete inactive code hello classification with ov 2.0 c-api

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Fix clang format issue

* [OV 2.0 C API] rename

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Fix Windows build error

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Apply qsort for sorting data

Apply qsort for sorting data
Fix issues of "potentially uninitialized local pointer variable"

* Not use deprecated INSTANTIATE_TEST_CASE_P for c api gtest

INSTANTIATE_TEST_CASE_P is deprecated, should use INSTANTIATE_TEST_SUITE_P.

* Fix some review issues

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [Ov 2.0 C API] Add error info

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Fix some review issues

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* Fix review issues

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* polish error message for ov c api

* Redefined ov_shape_t, ov_partial_shape_t and ov_layout_t. Modified functions and test cases involving these variables

* Added the conversion between char* and partial_shape

* Add partial_shape_to_shape

* prune code

* modify split

* Use regex to split and search pattern

* Modify str_to_char_array delete

* Add a check of the rank

* Fix compilation error

Fix issue: address of array 'shape.dims' will always evaluate to 'true' with -Wpointer-bool-conversion

Co-authored-by: xuejun <Xuejun.Zhai@intel.com>
Co-authored-by: sunxiaoxia2022 <xiaoxia.sun@intel.com>
Co-authored-by: ruiqi <ruiqi.yang@intel.com>
2022-06-09 10:39:07 +08:00
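
The exception-handling work mentioned in this commit follows the usual pattern for exposing a C++ core through a C API: catch everything at the boundary and map it to a status code. A generic sketch of that pattern, with assumed names rather than the actual ov_* signatures:

```cpp
#include <exception>

// Hypothetical status codes and core type; the real C API defines its own.
enum ov_status_e { STATUS_OK = 0, STATUS_GENERAL_ERROR = -1 };

struct core_t {
    void do_something() {}  // stand-in for real core functionality
};

extern "C" ov_status_e demo_core_do_something(core_t* core) {
    if (!core)
        return STATUS_GENERAL_ERROR;
    try {
        core->do_something();
        return STATUS_OK;
    } catch (const std::exception&) {
        return STATUS_GENERAL_ERROR;  // never let a C++ exception cross the C boundary
    } catch (...) {
        return STATUS_GENERAL_ERROR;
    }
}
```
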
Roman Baranchuk
0231637441 [CPU] GatherTree: dynamic shapes support (#11544) 2022-06-09 10:18:51 +08:00
Karol Blaszczak
fe41b8eacc Docs multiplugin page-wide tabs merge (#11461) (#11784)
* Docs multiplugin page-wide tabs merge

porting to master changes aligning multi plugin with other articles already present in 22.1

* Update docs/snippets/MULTI4.cpp

* Update docs/snippets/MULTI4.cpp
2022-06-08 16:56:37 +02:00
hyunback kim
41a0d3a0f8 [GPU] Add oneDNN zero point value debug log. (#11817)
* [GPU] Add oneDNN zero point value debug log.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-06-08 22:01:16 +09:00
Sungeun Kim
2c4cb88862 [83906] DG2 Systolic engine not getting used for fp16 Action Recognition model (#11815)
* shallow feature convolution needs to be set to zyx_fsv2

* bug fix: wrong get_output_layout call in is_mixedLayout
2022-06-08 16:48:40 +09:00
Mykhailo Hnap
9ee9f880bd [GPU] Implement Bucketize-3 (#11738)
* [GPU] Implement Bucketize-3

* Add i8 and u8 to evaluates map for bucketize op.
2022-06-08 15:47:19 +09:00
yanlan song
bd02415ad5 enable binder schedule for preview (#11763)
* enable binder schedule

Signed-off-by: fishbell <bell.song@intel.com>

* add cases

Signed-off-by: fishbell <bell.song@intel.com>

* refine

Signed-off-by: fishbell <bell.song@intel.com>

* fix build failure

Signed-off-by: fishbell <bell.song@intel.com>

* fix coredump

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-06-08 10:02:33 +08:00
Tetiana Gubanova
a3cd2bcc49 [GPU] Implement Reverse operation (#11672)
* Reverse operation single-layer test

* Reverse CreateOp(), primitive and kernel selector

* Implement Reverse CL kernel

* Implement reverse GPU unit tests

* Add boolean as extended input type to reverse operation
2022-06-08 09:46:52 +09:00
Steve Yoo
263c184b97 Fix deformable convolution cl kernel for Multi-Groups (#11613)
* Fix deformable convolution cl kernel for multi group and add its test cases

* Add batch 2, 3, 4 test case to multiple groups
2022-06-08 09:45:42 +09:00
David Nam
755394c1cb [GPU] Use IE_THROW() instead of throw std::runtime_error (#11769)
* Use IE_THROW() instead of throw std::runtime_error

* Change all throw std::runtime_error into IE_THROW()
2022-06-08 09:44:59 +09:00
Karol Blaszczak
e70074d9db DOCS-clear out the integrate_with_app article (#11799)
* DOCS-clear out the integrate_with_app article

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-06-07 21:52:27 +02:00
Karol Blaszczak
275f2adf52 DOCS-add supported PdPd models (#11804) 2022-06-07 21:51:17 +02:00
Eddy Kim
3a5805fce0 quick fix in tests (#11812) 2022-06-07 15:22:39 +00:00
hyunback kim
98f989302a [GPU] Update to use immad with oneDNN test case. (#11808)
Update to the correct test case

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-06-07 17:08:23 +03:00
Felix Dohyun Kim
1c9cd04f96 [GPU] Enable blocked formats in Gather primitive (#11486)
* gather blocked format
* enable double blocked
* 5d test
* support cross dimension
* Add some disabled test for later use
* Support non-default planar formats
2022-06-07 17:50:09 +09:00
Felix Dohyun Kim
c1f4cc04de [GPU] Enable blocked formats in Border primitive (#11652)
* fix default_value of border tensor parameters
* Enable all in/out layout
2022-06-07 17:39:57 +09:00
Min, Byungil
49942c2f80 [GPU] Forcing to use clDNN FC on small batch size (#11715)
+ forced to use clDNN FC due to perf drop

Signed-off-by: byungilm <byungil.min@intel.com>
2022-06-07 10:25:14 +03:00
Kelvin Choi
f9afe07c9d [GPU] Add reorder between FC matmul and reshape (#11706) 2022-06-07 13:10:58 +09:00
Mateusz Bencer
7335984edd [MO] Check only source layout if both changing layout and reverse input channel is applied (#11779) 2022-06-06 17:46:07 +02:00
Min, Byungil
104a9d8d52 Add debug config for forcing impl type of fc (#11713)
+ Added new debugging config OV_GPU_ForceImplType to set impl type

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-06-06 23:02:34 +09:00
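
For illustration, debug knobs like OV_GPU_ForceImplType are typically read from the environment at start-up. A minimal sketch of that mechanism; the value syntax shown in the comment (e.g. "fc:onednn") is an assumption, not documented behavior:

```cpp
#include <cstdlib>
#include <iostream>
#include <string>

// Read a debug knob from the environment; an empty string means "not set".
std::string force_impl_type() {
    const char* v = std::getenv("OV_GPU_ForceImplType");
    return v ? std::string(v) : std::string();
}

int main() {
    // Hypothetical usage: OV_GPU_ForceImplType=fc:onednn ./app
    std::cout << "forced impl: " << force_impl_type() << "\n";
}
```
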
Luo Cheng
bb3868d8cd [CPU] OneDNN 2.6 migration (#11627)
* Migrate to OneDNN 2.7

* [CPU] Enabled brgconv implementation

* Post ops optimizations

* [CPU] Enabled I8 precision on activations for Convolution node

* [CPU][WA] Disabled Deconvolution + post ops fusing optimization

* Fixed FQ post op optimization

* [CPU] Optimize post ops processing

* [WA] Add node name if tensor names are empty

* [WA] remove layout compatibility check that leads to false-positive exceptions

* [CPU] Optimize processing for FQ + Sum + FQ post ops pattern

* [CPU][WA] Enabled ReduceSum -> AvgPool transformation due to perf issues

* fix compiler error

* rebase onednn master

* cherry pick from 2.7 to 2.6

* [WA] make CPU case run to completion

* fix xmm zero check

* reopen 'FuseDeconvolutionAndSimpleOperation' transform to fix the CPU 'ConvolutionBackpropDataLayerTest' failure

* [WR] Removed the failing ReduceMean tests caused by 21f3555.

* group deconv may crash on out-of-bounds memory access

* [WA] Remove the moc fail case by #af4731a1

* testcase conv maxpool will check brgconv instead of jit

* test subgraph added nhwc format check

* fix gemm bf16 win crash

* fix avx2 groupconv accuracy problem

* [WA] remove invalid FQ tests

* WR to disable the LPT multiplyToGroupConv test because the transformation was disabled in d5e16f

* add gemm int8 binary postops to fix GroupConvolutionQDqTransformation fail

* add gemm int8 binary postops to fix GroupConvolutionQDqTransformation fail

* fix gemm bf16 fail

* Fix ConcatConvSumInPlaceTest

* Add cpuDebugFuncTests target

* [WA] bf16 crash due to MemoryInput/Output

* OVClassBasicTest case typo

* testcase subgraph sets default ENFORCE_BF16 to NO

* fix clang check

* Fix primType check issue

* Fix cpplint error

* MemoryInput/Output support bf16; Enforce bf16 'NO' should enable snippets

* disable BF16 fusing fakequant testcase

* testcase init support amx check

* testcase for conv brgconv avx512/amx

* testcase for conv brgconv avx512/amx

* WR for the enforce-reorder bug, and add NSPC to the deconv supported list.

* Compiling issue fix.

* [WA] skip fakequantize fusing in bf16

* mix legacy/new binary postops

* make nightly case run. tested on amx/avx512/avx2.

* [CPU] Add BF16 AMX test for Matmul

* Add CPU dump check tool

* Add verbose log

* Generate exec graph in cpu dump check tool

* fix binary prelu post Ops

* fix cpplint

* Update ONEDNN version to fix AVX2 bug.

* cpu dump check supports comparing dump files

* Add a new CPU_DEBUG_CAPS: OV_CPU_SUMMARY_PERF

* change VERBOSE_LOG to DEBUG_LOG

* fix oneDNN register_jit_code log

* fix cpplint

* Add OV_CPU_DEBUG_LOG to control which debug logs to show

* Revert reorder WR.

* Enhanced CPU debug logs and breakpoint support

* Enhanced cpu_dump_check with --ports

* Fix DEBUG_LOG compile issue

* GroupDeconvolutionLayerCPUTest extend to add amx test cases

* Add Node into DEBUG_LOG

* cpu_dump_check: Dump results even if no port is specified

* Fix MergeTransposeAndReorder for blocked input

* Fix cpu_dump_check result names

* Enhance DEBUG_LOG on edges

* CPU dump check supports shape mismatch

* Fix bi-directional inplace

* CPU dump check supports inference_precision_hint f32.

* fix Windows dump failure.

* fix depthwise nwc conv

* add rtol arg

* win debugbreak

* fix pooling accuracy

* GroupDeconvolutionLayerCPUTest remove invalid test param for nspc

* recover ov onednn fork

* revert af4731a1f1 '[WA] remove layout compatibility check'

* [WA] disable avx2 conv3d fusing case

* [WA] disable avx2 conv3d fusing case

* [WA] Disabled weights md transpose in FC to prevent perf degradations

Co-authored-by: dmitrygo <dmitry.gorokhov@intel.com>
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
Co-authored-by: Zhang Yi3 <yi3.zhang@intel.com>
Co-authored-by: liubo-intel <bo4.liu@intel.com>
Co-authored-by: Luwei Zhou <luwei.zhou@intel.com>
Co-authored-by: Li, Tingqian <tingqian.li@intel.com>
Co-authored-by: xuchen-intel <chen.xu@intel.com>
Co-authored-by: ceciliapeng2011 <cecilia.peng@intel.com>
2022-06-06 18:30:32 +08:00
Helena Kloosterman
764a2ec012 Fixes for OpenCV script filename in docs (#11791) 2022-06-06 16:42:04 +08:00
guozhong wang
d3b7a1b86c fix cumulative donot handle nireq (#11775) 2022-06-06 10:10:05 +08:00
Artur Kulikowski
32580ca65b Add ReduceMerge transformation (#11746)
Tickets: 60918
2022-06-04 13:44:50 +02:00
Sungeun Kim
8029fd9675 add mixed precision case (#11747) 2022-06-04 11:40:52 +09:00
Mateusz Bencer
dba4dbb9d6 Improve L2NormFusion transformation (#11765) 2022-06-03 18:51:21 +02:00
Mateusz Tabaka
4650ecd0b5 Disable fusings_gpu/deconv_scale_activation_quantize_i8_eltwise_quantize_u8.basic/4 (#11777) 2022-06-03 21:07:48 +09:00
Min, Byungil
aea04e275c [GPU] Enable onednn reduction (#11570)
* Better performance is achieved by using the oneDNN reduction kernel instead of the pooling kernel for the reduction layer.
* Stop using global pooling instead of reduce primitive
* Use oneDNN reduction if its mode is supported by an optimized oneDNN kernel
* activation pow is supported
* Use clDNN reduce for 3D or redundant reduces, or on tensor size mismatch
* Updated thirdparty onednn_gpu

Signed-off-by: Min, Byungil <byungil.min@intel.com>

Co-authored-by: Wei Tang <wei1.tang@intel.com>
Co-authored-by: Chen Kurt <kurt.chen@intel.com>
2022-06-03 18:36:27 +09:00
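
A hypothetical dispatch mirroring the rules listed above: prefer the oneDNN reduction kernel when the mode is supported, and keep clDNN for 3D, redundant reduces, or tensor size mismatches. All names are illustrative assumptions:

```cpp
enum class reduce_mode { sum, mean, max, min, pow };

// Stand-in for a real capability query.
bool onednn_supports(reduce_mode m) {
    return m == reduce_mode::sum || m == reduce_mode::mean ||
           m == reduce_mode::max || m == reduce_mode::min ||
           m == reduce_mode::pow;  // "activation pow is supported"
}

bool use_onednn_reduction(reduce_mode m, bool is_3d, bool redundant_reduce,
                          bool tensor_size_mismatch) {
    if (is_3d || redundant_reduce || tensor_size_mismatch)
        return false;  // these cases stay on the clDNN reduce path
    return onednn_supports(m);
}
```
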
mei, yang
b780c61506 Extend python API for GenerateProposals (#11723) 2022-06-03 10:38:02 +02:00
Krzysztof Bruniecki
4ef0aab166 Create copy of RO IR/bin file mapped Blob to allow converting from NCHW to NHWC (#11771) 2022-06-03 08:38:52 +02:00
Krzysztof Bruniecki
10a6e56811 Return instead of throw in check functions around dtors (#11745) 2022-06-03 08:37:33 +02:00
Katarzyna Mitrus
47155b43d0 [Transformation] Add GeluFusion with Tanh (#11752) 2022-06-03 08:36:12 +02:00
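
For reference, the standard tanh-based Gelu approximation that such a fusion pattern matches, shown as a plain function:

```cpp
#include <cmath>

// gelu(x) ~= 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
float gelu_tanh(float x) {
    const float k = std::sqrt(2.0f / 3.14159265358979323846f);
    return 0.5f * x * (1.0f + std::tanh(k * (x + 0.044715f * x * x * x)));
}
```
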
Przemyslaw Wysocki
1db4446e2a [MO] Extend MO for NonMaximumSupression-9 (#11576) 2022-06-02 18:18:14 +02:00
Sebastian Golebiewski
b88eed7645 Proofreading MO Guide (#11605)
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_CRNN_From_Tensorflow.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/FP16_Compression.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_lm_1b_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_lm_1b_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update Additional_Optimizations.md

* Update Deep_Learning_Model_Optimizer_DevGuide.md

* Update IR_and_opsets.md

* Update Getting_performance_numbers.md

* Update Model_Optimizer_FAQ.md

* Update Supported_Frameworks_Layers.md

* Update Convert_Model_From_Caffe.md

* Update Convert_Model_From_Kaldi.md

* Update Convert_Model_From_MxNet.md

* Update Convert_Model_From_ONNX.md

* Update Convert_Model_From_Paddle.md

* Update Convert_Model_From_PyTorch.md

* Update Convert_Model_From_TensorFlow.md

* Update Convert_Model_Tutorials.md

* Update Converting_Model.md

* Update Cutting_Model.md

* Update IR_suitable_for_INT8_inference.md

* Update Aspire_Tdnn_Model.md

* Update Convert_GluonCV_Models.md

* Update Convert_Style_Transfer_From_MXNet.md

* Update Convert_Faster_RCNN.md

* Update Convert_Mask_RCNN.md

* Update Convert_Bert_ner.md

* Update Convert_Cascade_RCNN_res101.md

* Update Convert_F3Net.md

* Update Convert_QuartzNet.md

* Update Convert_RCAN.md

* Update Convert_RNNT.md

* Update Convert_YOLACT.md

* Update Convert_AttentionOCR_From_Tensorflow.md

* Update Convert_BERT_From_Tensorflow.md

* Update Convert_CRNN_From_Tensorflow.md

* Update Convert_DeepSpeech_From_Tensorflow.md

* Update Convert_EfficientDet_Models.md

* Update Convert_FaceNet_From_Tensorflow.md

* Update Convert_GNMT_From_Tensorflow.md

* Update Convert_NCF_From_Tensorflow.md

* Update Convert_Object_Detection_API_Models.md

* Update Convert_RetinaNet_From_Tensorflow.md

* Update Convert_Slim_Library_Models.md

* Update Convert_WideAndDeep_Family_Models.md

* Update Convert_XLNet_From_Tensorflow.md

* Update Convert_YOLO_From_Tensorflow.md

* Update Convert_lm_1b_From_Tensorflow.md

* Update Customize_Model_Optimizer.md

* Update Extending_Model_Optimizer_with_Caffe_Python_Layers.md

* Update docs/MO_DG/prepare_model/FP16_Compression.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/FP16_Compression.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/FP16_Compression.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Converting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Converting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Converting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_Faster_RCNN.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_GPT2.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_Mask_RCNN.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Bert_ner.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_AttentionOCR_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_BERT_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_BERT_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_CRNN_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_RetinaNet_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Getting_performance_numbers.md

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Convert_Model_From_Kaldi.md

* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update

* Update docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Convert_Model_From_Paddle.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-06-02 17:05:14 +02:00
Eddy Kim
04b69af0f5 [GPU] Support for PReLU with multiple dims slope tensor for GPU (#11782)
* reshape a slope tensor of channel-wise prelu

* changed to follow prelu spec

* added unittests for prelu with multiple dims slope

* Update constant.cpp

Blanks are added.

* added comments about PReLU slope reshape policy

* added int8 prelu fusion tests
2022-06-02 23:01:01 +09:00
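
A compact sketch of what channel-wise PReLU with a broadcastable slope computes, per the spec this commit aligns with: out = x when x >= 0, otherwise slope[c] * x, with a slope of shape [C] broadcast over spatial positions. Layout and names are illustrative:

```cpp
#include <cstddef>
#include <vector>

// in/out are laid out as [C, H*W]; slope holds one value per channel.
void prelu_channelwise(const std::vector<float>& in,
                       const std::vector<float>& slope,
                       std::size_t C, std::size_t HW,
                       std::vector<float>& out) {
    out.resize(C * HW);
    for (std::size_t c = 0; c < C; ++c)
        for (std::size_t i = 0; i < HW; ++i) {
            float x = in[c * HW + i];
            out[c * HW + i] = x >= 0.f ? x : slope[c] * x;  // negative side scaled
        }
}
```
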
Yuan Xu
fc61b001c0 Yuan transition guide restructure (#11778)
* Add Overview page

* Revert "Add Overview page"

* restructure

* update

* updates

* update

* update

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/preprocessing.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* fix formatting

* fix formatting

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-06-02 11:33:44 +00:00
Katarzyna Mitrus
fb09555b6d [Eye-9] Extend MO with Eye-9 op (#11555) 2022-06-02 11:29:55 +02:00
guozhong wang
5b75d69712 add testcase for EXCLUSIVE_ASYNC_REQUESTS when input device is AUTO (#11716)
* add AUTO cpu and gpu testcase for EXCLUSIVE_ASYNC_REQUESTS

* add AUTO myriad testcase for EXCLUSIVE_ASYNC_REQUESTS
2022-06-02 10:28:30 +08:00
guozhong wang
cd771ed23b Simple graph correctness test for virtual devices (#11492)
* add cumulative correctness test

* add infer_correctness test

Signed-off-by: Hu, Yuan <yuan2.hu@intel.com>

* add comments

Signed-off-by: Hu, Yuan <yuan2.hu@intel.com>

Co-authored-by: Hu, Yuan <yuan2.hu@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-06-02 09:42:38 +08:00
Mykhailo Hnap
d26ed6a180 [GPU] Roll-7 (#11602)
* [GPU] Implement Roll kernel

* [GPU] Add Roll kernel selector

* [GPU] Add Roll primitive

* [GPU] Add Roll helpers

* [GPU] Implement unit tests for the Roll operation

* [GPU] Add Roll operation to GPU plugin

* [GPU] Add single layer tests for the Roll operation

* [GPU] Add changes after review

* [GPU] Improve cldnn unit test
2022-06-02 09:42:11 +09:00
Jan Iwaszkiewicz
c7f8112339 [ONNX] Extend ONNX FE with SoftSign-9 operation (#11766) 2022-06-01 13:37:17 +02:00
River Li
042bd7274a Dynamic shape mem reuse solution (#11667)
* Dynamic shape memory reuse solution

* Fix Split node to properly work with dyn mem

* Fix race condition for Memory mgrHandle

* Avoid Memory race condition between GetData and SetDataHandle

Adding a lock for the race condition between ov::intel_cpu::Memory::GetData() and ov::intel_cpu::Memory::SetDataHandle() is not a good solution,
as it would impact inference performance. We found that it is unnecessary to get the edge DataPtr in InferRequest::SetBlob or GetBlob, which
only need the tensorDesc, so we fetch only the tensorDesc instead of the dataPtr to avoid this race condition.

* Resolve reviewer's comments

* Avoid performance impact due to frequent resets of MemMngrHandle

If MemMngrHandle has already been assigned an external buffer, it can be reused.
Otherwise, a new one needs to be created.
2022-06-01 18:49:47 +08:00
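
A sketch of the race-avoidance idea described above, with hypothetical types: SetBlob/GetBlob read only the immutable tensor descriptor, so they never touch the data handle that another thread may be swapping during a dynamic reshape.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

struct TensorDesc {
    std::vector<std::size_t> dims;  // fixed metadata, safe to read concurrently
};

struct Memory {
    TensorDesc desc;
    std::atomic<void*> data{nullptr};  // may be re-pointed on dynamic reshape

    const TensorDesc& getDesc() const { return desc; }  // no lock required
    void* getData() const { return data.load(); }
    void setDataHandle(void* p) { data.store(p); }
};
```
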
Artur Kulikowski
8b1ed3d5b2 Fix onnx_frontend_tests (#11358) 2022-06-01 10:59:37 +02:00
Artur Kulikowski
c519aff42f Enable ShuffleChannelsFusion and DepthToSpaceFusion in MOC (#11662)
Ticket: 79523
2022-05-31 11:09:00 +02:00
Chenhu Wang
2e4f14a9f7 fix uninitialized value in code scan (#11711) 2022-05-31 05:53:50 +03:00
cecilia peng
016c5f537a Cecilia/multiclass nms9/cpu impl (#11246)
* multiclass_nms opset9 spec, api, reference, paddle fe mapper, paddle fe unittest.

* multiclass_nms opset9 cpu node impl.

* multiclass_nms opset9 shape infer fix.

* multiclass_nms opset9: add transform ConvertMulticlassNms8ToMulticlassNms9.

* ConvertMulticlassNmsToMulticlassNmsIE: to MulticlassNmsIEInternal

* add test dependency package paddledet==2.1.0

* 1. fix for roisnum overflow. 2. common shape_infer private function.

Signed-off-by: jialipen <cecilia.peng@intel.com>

* 1. use common infer_shape helper. 2. fix roisnum overflow issue. 3. fix for nmsWithEta.

* test suite for opset9 multiclass_nms smoke tests pass, with both static and dynamic shapes.

code clean for unit test.

* decouple specification from this PR.

* op fuzzy: dynamic input/output

* reference impl refactor

* multiclass_nms_base no need clone_inputs.

* code clean

* restrict ppdet import

* fix clang format error

* change ppdet import to resolve CI fail issue related to its dependency.

* fix CI

* refactor: multiclass_nms_shape_inference for opset9 and reference impl.
TODO: could be applied to opset8 and even matrix_nms.

* fix CI build failure.

* CI fix for ambiguous namespace reference issue when
building static libs.

* update nms save_model python scripts.

* dynamic inputs for NMS with CPU plugin.

* copyright header for test scripts.

* op conformance test for multiclass_nms_9.

* minor update: is_type

* python opset9 and multiclass_nms

* flake8 CI fix

flake8 CI fix

flake8 CI fix

* remove NmsBase. stage1.

flake8 CI fix

remove NmsBase. stage 1 fix.

* rm NmsBase. stage2.

* more multiclass_nms prop tests and fix.

* remove unchanged ops from binding opset9.

* dependency of paddle_tests.

* fix: add MulticlassNms to op mapper.

* clang format fix

* fix merge error.
2022-05-31 07:56:01 +08:00
Sungeun Kim
82fdf165eb [GPU] choose onednn for 3d conv (#10857)
* add formats for 3d conv
   data formats
   -bs_fs_zyx_bsv32_fsv32
   -bs_fs_zyx_bsv32_fsv16
   -bs_fs_zyx_bsv8_fsv4
   -bs_fs_zyx_bsv8_fsv2
   -bs_fs_zyx_bsv16_fsv32
   -b_fs_zyx_fsv2, b_fs_zyx_fsv4
   weight formats
   -os_is_zyx_osa2_isa8_osv8_isv2
   -os_is_zyx_osv8_isv4
   -os_is_zyx_osv8_isv2
   -gs_oizyx_gsv32
* add supported formats for primitives
* choose onednn convolution impl for 3d conv
* optimize layout of shallow depth convolution
* remove reorder for conv
* Don't remove reorder between bs_fs_zyx_b32_f16/f32 and bfyx.
* add formats to SetDefault() to optimize gws/lws for quantize/eltwise
* fallback cldnn if onednn pooling's layout is b_fs_zyx_fsv32 and i8.
* fixed wrong position for new weight formats
* restore imad_case()
* This func is used to choose the format for the cldnn fallback
* [GPU] add debug flag: OV_GPU_SerialCompile
    0(default): parallel compile
    1: serial compile
* add is_mixed_layout
* remove format::bs_fs_zyx_bsv8_fsv4 in needs_onednn_small_ic_to_blocked
* prevent to fuse the reorder which is between quantize and conv
* shallow feature first conv
2022-05-31 07:54:00 +09:00
Yuan Xu
b67ffe303f Fix a heading issue in Auto (#11744)
* fix the heading issue

* fix headings
2022-05-30 09:01:54 +00:00
Artur Kulikowski
93021121e3 Fix cutting the graph (#11574)
* Revert "[MO args][ONNX FE]fix cutting graph with input, output or both (#9698)"

This reverts commit 2b03d5fe66.

* Fix cutting the graph when inputs/outputs are passed to the MO

* Check that port exists

* Simplification of getting node port

* Reducing amount of nesting inside searching of node by operation name

* Refactoring

- remove mutable default arg
- changes in code style
- change variables name

* Check that user input data type is dictionary

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2022-05-30 10:04:24 +02:00
guozhong wang
f5e2a463b5 remove CPU from default candidate list while GPUs more than 2 (#11753) 2022-05-30 10:04:13 +08:00
Tomasz Jankowski
70e9cc0ce8 Enable ConvertNegative in MOC (#11720) 2022-05-27 13:37:35 +02:00
Artur Kulikowski
ae84e11a41 [ONNX Import] add method Node::get_attribute_as_constant() (#10783)
Tickets: 53284
2022-05-27 12:34:48 +02:00
Katarzyna Mitrus
8a975c886a [MO] Support for TF GRUBlockCell (#11732)
* Add GRUBlockCell front extractor

* Add GRUBlockCell Op to mo ops

* Add TF GRUBlockCell mo layer tests

* Add GRUBlockCellToGRUCell Replacement init

* Update GRUBlockCellToGRUCell Replacement with gate order adjustment

* Update GRUBlockCellToGRUCell Replacement with weights transpose

* GRUBlockCellToGRUCell Replacement refactor

* Set tests eps to avoid sporadic failures

* Style
2022-05-27 11:47:42 +02:00
Bartek Szmelczynski
da09272d9f remove xfails and update tolerance (#11729) 2022-05-27 10:26:28 +02:00
Bartek Szmelczynski
ffd797bc9f [PYTHON][NMS-9] Extend Python API for NMS-9 (#11681)
* extend NMS-9 ngraph python

* add tests for NMS

* move tests for NMS from test_reduction to test_create_op
2022-05-27 10:25:49 +02:00
Sungeun Kim
7a1e7f122f [GPU] some convs are in ref for WDSR (#11728)
* add supported data types for onednn conv

* Remove case: in_f32 to out_f32 in are_data_types_suitable_for_onednn
2022-05-27 13:50:30 +09:00
Artur Kulikowski
873e3dad2d Limiting protobuf to version < 4.0.0 (#11748)
* Upgrade Protobuf to version 3.18.2 in python's requirements

* PaddlePaddle tests requires protobuf < 4.0.0

* ONNX tests use protobuf 3.18

* Python bindings protobuf <4.0.0
2022-05-26 21:32:37 +02:00
Yuan Xu
1bcdf48f42 Get started guide restructuring and updating (#11719)
* Add Overview page

* Revert "Add Overview page"

* restructure get started home page

* update navigation menu

* update formatting

* update wording

* update

* rename configuration files

* update wording

* adjust the structure

* update formatting

* reverse the heading

* test with formatting

* 2nd version of Get Started homepage

* add line breaks

* change to ordered list

* update wording

* update content

* updates

* update DL workbench reference

* update wording

* update references to pip installations

* remove redundant files

* update headings
2022-05-26 17:09:31 +02:00
Paul Youngsoo Ahn
c185198785 [GPU] Added UUID property(#81574) (#11567)
Co-authored-by: Ahn, Paul Y <paul.y.ahn@intel.com>

Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
2022-05-26 16:44:53 +09:00
opoluektov-lohika
ccd001f25b [GPU] Support axis 0 for Softmax (#10364)
* [GPU] Modify Softmax single layer tests to check Softmax-8 is supported with axes in [-rank, rank) interval

* [GPU] Fix cldnn::softmax::dimension_t documentation

* [GPU] Fix ParamsKey::EnableSoftmaxDim

Support Z dimension.

* [GPU] Add Softmax single layer test that checks 5D case

Since some Softmax kernel code contains ifdef on 5-dimensional case,
a test case is needed that covers this functionality.

* [GPU] Support axis 0 in Softmax

* [GPU] Modify Softmax single layer tests to check axis 0

* [GPU] Modify Softmax items class optimized kernel to handle axis 0 correctly

Modify single layer test accordingly.

* [GPU] Modify Softmax unit-test to check softmax::normalize_b

* Split SoftMaxLayerTest into opset1 and opset8 versions

Use SoftMax8LayerTest in the tests throughout the repository.
SoftMaxLayerTest now defaults to SoftMax1LayerTest for compatibility.

* [GPU] Add f16 test-case for Softmax single-layer test

Co-authored-by: tgubanova-lohika <tgubanova@lohika.com>
2022-05-26 12:06:08 +09:00
Bo Liu
1cce278fcb Paddle Frontend Op conversion: ROIAlign9,Sqrt,Swish (#11661)
* Paddle Frontend Op conversion: ROIAlign9,Sqrt,Swish

* modify import ppdet way based on the latest master branch
2022-05-26 08:38:31 +08:00
Tomasz Dołbniak
dd930fdb6e GridSample-9 specification (#11703) 2022-05-25 21:34:11 +02:00
yanlan song
0c7840ef28 multi code refine (#11663)
* draft

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* refactor for multi

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* refactor auto draft

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix executable get config test failed issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* set ExecNetwork only one time

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* format code and using alias

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* clear head file

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* change name from Context to ScheduleContext

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* polishing

Signed-off-by: fishbell <bell.song@intel.com>

* polish & new implementation

Signed-off-by: fishbell <bell.song@intel.com>

* enable/test a new schedule

Signed-off-by: fishbell <bell.song@intel.com>

* port fps logs over

Signed-off-by: fishbell <bell.song@intel.com>

* restructure

Signed-off-by: fishbell <bell.song@intel.com>

* fix windows build failure

Signed-off-by: fishbell <bell.song@intel.com>

* clean up code

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-05-25 22:53:47 +08:00
Tetiana Gubanova
eb36b891eb [GPU] Implement ExperimentalDetectronPriorGridGenerator-6 gpu implementation (#11632)
* Implement experimental_detectron_prior_grid_generator kernel

* register experimental_detectron_prior_grid_generator operation

* implement single layer tests

* add unit tests to detectron
2022-05-25 18:52:03 +09:00
Jan Iwaszkiewicz
320531def0 [MO] SoftSign operator extractors (#11726) 2022-05-25 11:44:26 +02:00
Krzysztof Bruniecki
81adc47e83 [GNA] Implement GNA memory region splitting (RO/Input/Output/State/Scratch) and export in GNA format enabled (#11577) 2022-05-25 11:40:50 +02:00
Serhii Pavlovskyi
4b08ce4787 [GPU] (I)Dft with single layer test (#9891)
* dft with single layer test

* idft with single layer test

* fix output param usage in dft

* update dft according to the clang-format

* move output layout setup to calc_output_layout

* add support for other dimensions

* add clDNN unit test for DFT/IDFT

* remove unnecessary original rank

* use defined formats in kernel

* fix dft docs

* changes after review

* Revert "fix dft docs"

This reverts commit 45b05172dfd161d92dae6d26e0f1b74748e56fd5.

Co-authored-by: Serhii Pavlovskyi <spavlovskyi@lohika.com>
Co-authored-by: Mykhailo Hnap <mhnap@lohika.com>
2022-05-25 16:24:46 +09:00
Mateusz Tabaka
e767e9e243 Extend python API of RDFT and IRDFT (#11737)
Tickets: 79184 and 79198
2022-05-25 08:25:01 +02:00
Mateusz Tabaka
5ffba43f62 Disable fc_int8_inputs_fused_fp32_sum (#11709)
Ticket: 85210
2022-05-25 06:54:22 +02:00
Chenhu Wang
fa7ca20425 NMS-9 op creation and ref implementation and CPU plugin (#11132)
* operation creation

* refrence implementation

* code style

* soft_nms_supported_by_nms9

* IE core and cpu plugin update

* apply review

* add transformation test
2022-05-25 06:27:12 +03:00
Karol Blaszczak
728a243d77 Docs-minor fix for documentation build error (#11724)
* Docs-minor fix for documentation build error

* fix one snippet comment and auto-device updates
2022-05-24 19:50:29 +02:00
Mateusz Tabaka
3a202c2775 Don't install networkx with version 2.8.1 (#11718)
With the new networkx release (2.8.1), some MO tests started to fail
with the following error:
```
def __setstate__(self, state):
    self._graph = G = state["_graph"]
    self._adjdict = G._pred if hasattr(G, "pred") else G._adj
    AttributeError: 'Graph' object has no attribute '_adj'
```

Seems like a regression that was introduced in
f50fc70b8c. One way to pin this in a requirements file is sketched after this entry.
2022-05-24 14:07:36 +02:00
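If the fix was expressed as a pip constraint (an assumption based on the commit title, not a quote of the actual change), it could look like this in a requirements file:

```
# assumed form of the constraint: keep the broken 2.8.1 release out
networkx!=2.8.1
```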
Tetiana Gubanova
22ee17fda6 [GPU] AdaptiveMaxPool and AdaptiveAvgPool gpu implementations (#11556)
* Add kernel for AdaptivePooling

* Add GPU primitive for AdaptivePooling

* Add single-layer tests for GPU

* Add adaptive pooling unit tests
2022-05-24 00:48:55 +09:00
Mateusz Tabaka
ff6ea62ce0 Fix local work size for conv kernel yxfb_yxio_b16 with fp16 (#11679)
convolution_gpu_yxfb_yxio_b16 for fp16 has reqd_work_group_size hardcoded
to (16, 1, 1). On devices where CL_DEVICE_MAX_WORK_GROUP_SIZE is 512,
GetOptimalLocalWorkGroupSizes picks (16, 2, 1) for the LWS.
That causes issues during clEnqueueNDRangeKernel, since the LWS doesn't match
the reqd_work_group_size in the kernel (see the sketch after this entry).
2022-05-23 15:59:37 +02:00
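A hedged host-side sketch of the invariant that was violated: before enqueueing, the chosen LWS must equal the kernel's compiled reqd_work_group_size when one is declared. The helper name is illustrative; the CL_KERNEL_COMPILE_WORK_GROUP_SIZE query is standard OpenCL.

```cpp
#include <CL/cl.h>

// Illustrative helper: returns true if `lws` is compatible with the
// reqd_work_group_size attribute compiled into `kernel`.
// CL_KERNEL_COMPILE_WORK_GROUP_SIZE reports (0, 0, 0) when the kernel
// declares no such attribute.
bool lws_matches_reqd(cl_kernel kernel, cl_device_id device, const size_t lws[3]) {
    size_t reqd[3] = {0, 0, 0};
    clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_COMPILE_WORK_GROUP_SIZE,
                             sizeof(reqd), reqd, nullptr);
    if (reqd[0] == 0 && reqd[1] == 0 && reqd[2] == 0)
        return true;  // kernel does not constrain the local size
    // The bug above: reqd = (16,1,1) but the optimizer picked LWS = (16,2,1),
    // so clEnqueueNDRangeKernel fails with CL_INVALID_WORK_GROUP_SIZE.
    return lws[0] == reqd[0] && lws[1] == reqd[1] && lws[2] == reqd[2];
}
```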
Mateusz Tabaka
fbc99ef1ad Disable conv_int8_activation_eltwise_quantize_onednn/bsv32_fsv32 (#11708)
Tests fail because eltwise is not fused properly to convolution.

Ticket: 85205
2022-05-23 12:50:53 +02:00
Mateusz Tabaka
488315fe2e Fix pooling_onednn_activation2 test in cldnn tests (#11680) 2022-05-23 10:26:42 +02:00
Mateusz Bencer
a859024d76 MO support of RDFT and IRDFT (#11690) 2022-05-21 13:14:23 +02:00
Szymon Irzabek
714601cf5b [GNA] Change Constants' precision in MVN decomposition (#11665) 2022-05-20 13:21:32 +02:00
Adam Tumialis
14f82ea31c Update CODEOWNERS
Maintainers for IE transformations changed to a new team.
2022-05-20 12:22:43 +02:00
Tetiana Gubanova
91ab69e0c7 [GPU] Implement ExperimentalDetectronGenerateProposalsSingleImage-6 (#11616)
* Add single layer tests for GPU

* Add GPU primitive for ExperimentalDetectronGenerateProposalsSingleImage

* Add kernel for ExperimentalDetectronGenerateProposalsSingleImage

* Add unit test

* rename abbreviation edgpsi to the full name experimental_detectron_generate_proposal_single_image

* Add f16 support to operation

* Add f16 support to the unit test

* Add notification about the second output in primitive

Co-authored-by: Oleksii Khovan <okhovan@lohika.com>
2022-05-20 17:02:29 +09:00
Chen Xu
8886d0fde7 [CPU] Fix shape mismatching in fusing per channel (#11162)
* Fix shape mismatching in fusing per channel

* channelAxis data type changes to int
2022-05-20 11:05:17 +08:00
yanlan song
35ba009cd6 Bell/cache refine (#11414)
* cache compliance

Signed-off-by: fishbell <bell.song@intel.com>

* clang format

Signed-off-by: fishbell <bell.song@intel.com>

* fix crash

Signed-off-by: fishbell <bell.song@intel.com>

* enable test cases

Signed-off-by: fishbell <bell.song@intel.com>

* enable more tests

Signed-off-by: fishbell <bell.song@intel.com>

* refine cases

Signed-off-by: fishbell <bell.song@intel.com>

* do not use try catch

Signed-off-by: fishbell <bell.song@intel.com>

* case refine

Signed-off-by: fishbell <bell.song@intel.com>

* fix unicode failure

Signed-off-by: fishbell <bell.song@intel.com>

* use model_path.empty instead of try catch

Signed-off-by: fishbell <bell.song@intel.com>

* add mock test

Signed-off-by: fishbell <bell.song@intel.com>

* add more mock test

Signed-off-by: fishbell <bell.song@intel.com>

* disable unicode test on windows

Signed-off-by: fishbell <bell.song@intel.com>

* add hetero caching/stateful mode support

Signed-off-by: fishbell <bell.song@intel.com>

* remove the disable label for CPU

Signed-off-by: fishbell <bell.song@intel.com>

* resolve the CI failure

Signed-off-by: fishbell <bell.song@intel.com>

remove redundant lines

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-05-20 10:17:40 +08:00
Katarzyna Mitrus
76dfeceb93 [Eye-9] Reference and CPU implementation for Eye-9 (#11538)
* Added shell for Eye-9

* Updated spec for Eye-9

* Added reference for Eye-9

* eye cpu

* Added op impl check for Eye-9

* Fix unallowed dynamic to static dim conversion in eye shape_infer

* Add template plugin tests for dynamic shapes

* Add template plugin tests for dynamic shapes batch input

* Enable batch shape input dynamic rank

* Uncomment 3D batch cpu Eye tests

* Update assertions and messages

* use ov::element type

* Remove redundant evaluate from eval map

* Style fix

* Add static_cast<T>(1) to cpu eye

* Add defaults to eye cpu class members

* Reuse out_ptr and checks

* Return if onesPerBatchNum == 0

* Add Eye CPU Dynamic shape tests with 2D batch

* Additional test cases for CPU and reference

* Disable 3D batch eye cpu tests

* Fix CPU implementation for matrix with not equal cols and rows

* Update CPU test name

* Disable CPU Eye 3D batch static shapes tests

Co-authored-by: Alexandra Sidorova <alexandra.sidorova@intel.com>
Co-authored-by: Yury Gaydaychuk <yury.gaydaychuk@intel.com>
2022-05-19 16:37:00 +03:00
Bartek Szmelczynski
f83530138e [ROIAlign-9] Extend nGraph Python API for operation "ROIAlign-9" (#11572)
* add opset8 ngraph ROIAlign op

* fix style

* fix style v2

* remove redundant added files

* fix __init__.py imports

* fix style v3

* fix wrong imports

* fix flake error

* fix minor errors

* add blank line

* fix args name

* Update src/bindings/python/src/compatibility/ngraph/opset9/ops.py

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* update docstring, move roi_align tests to test_create_op.py file

* Update src/bindings/python/tests/test_ngraph/test_create_op.py

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* Update src/bindings/python/tests_compatibility/test_ngraph/test_create_op.py

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* add alias

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
2022-05-19 13:55:46 +02:00
Mateusz Bencer
4d0a572f13 Move RemoveConcatZeroDimInput and RemoveMultiSubGraphOpDanglingParams to CNNNetworkNGraphImpl ctor (#11547) 2022-05-19 10:50:22 +02:00
Yuan Xu
18260e0b4b Revert "CVS-82186 port to master (#11701)" (#11704)
This reverts commit 31dab599c7.
2022-05-18 13:43:22 +02:00
Tomasz Jankowski
a61902fc7c Disable ONNX UT on Mac (#11685) 2022-05-17 16:15:26 +02:00
Tomasz Dołbniak
94ce06fc29 ONNX node names as friendly names (#10532) 2022-05-17 13:47:52 +02:00
Yuan Xu
31dab599c7 CVS-82186 port to master (#11701)
* Add Overview page

* Revert "Add Overview page"

* plugin api separate config (#11109)

Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
2022-05-17 11:46:46 +00:00
hyunback kim
8691b88296 [GPU] Support weight tag for oneDNN v2.6 (#10859)
* Update oneDNN rls-v2.6
* Support weight tag for oneDNN v2.6
* Fix first conv selection issue in oneDNN
* oneDNN v2.6 required specific tags to run jit:ir primitives.
* any_tag can find optimized primitives in oneDNN.
* Enable aBcd2b src tag for oneDNN v2.6
* Add create_memory_desc from format string.
* Apply group depthwise separable conv using jit:ir in oneDNN v2.6
* Use byxf format.
* Update only use acdb format in shallow group conv
* Fix refconv selection in shallow conv with post operations.
2022-05-17 16:55:55 +09:00
Taylor Yeonbok Lee
a82a0d3672 [GPU] Improve int8 FC performance (#11612)
* Enable reshape int8

* Fixed quantize fusing through reorder+reshape : Fixed the condition to check per_tensor_input_shift only when need_input_shift is true

* minor change

* Allow FP quant to be fused to FC/gemm

* Disable reshape transform for onednn until onednn FC is optimized
2022-05-17 12:48:38 +09:00
Tomasz Dołbniak
ffea6b5aac ONNX Resize - sizes/scales inputs handling (#11692) 2022-05-16 20:05:06 +03:00
Mateusz Bencer
3b32502fbf Add IRDFT reference implementation (#11642) 2022-05-16 10:30:56 +03:00
mei, yang
9648080fbc Meiyang/paddle generate proposals 2 (#11285)
* create new op v9::GenerateProposalsSingleImage and support paddle generate proposal v2

* support scale in GenerateProposals

* Add output roi_num in GenerateProposal; change anchor's shape to [H, W, A, 4]

* fix paddle generate proposals frontend issue

* rename MKLDNNGenerateProposalsSingleImage to GenerateProposalsSingleImage

* add GenerateProposals attribute 'roi_num_type'

* fuse type to generate_proposals

* multibatch support

* fix review comments; paddle tests added

* use pad instead of concat

* fix generate proposals visitor test parameter

* add testcase for generate proposal scale and fix generate proposals reference issue

* rename to GenerateProposals

* add generate proposals ngraph reshape test; opset9 support and test;

* fix compiling issue

* add dependency 'paddledet' on paddle frontend test

* Update src/core/include/ngraph/op/generate_proposals.hpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update src/core/include/openvino/op/generate_proposals.hpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update src/core/reference/include/ngraph/runtime/reference/generate_proposal.hpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update src/core/src/op/generate_proposals.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update src/core/include/ngraph/op/generate_proposals.hpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update src/core/src/op/generate_proposals.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update src/core/src/op/generate_proposals.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update src/core/src/op/generate_proposals.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update src/core/include/openvino/op/generate_proposals.hpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update src/core/include/openvino/op/generate_proposals.hpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* fix compiling issue after newly added commit

* clang fix

* fix compiling issue

* add paddledet dependency

* fix compiling issue

* fix compiling issue

* clang fix

* skip ppdet.modeling.ops in paddle generate proposals test

* single layer update after rebase master

* set pycocotools to 2.0.4

* skip ppdet.modeling.ops.__init__

* add paddle test dependency

* fix template issue

* rename mkldnn to dnnl

* fix template issue

* fix windows compiling issue

* update testcase vector construction

* add shape check and test; add some annotation; apply review suggestion

* Revert "add paddle test dependency"

This reverts commit 959a2d770d3f6cb28d4609981c79cc49a25847fd.

* rm dependency of paddledet for paddle frontend test

* update opset9 number

* fix windows issue

Co-authored-by: Luo Cheng <cheng.luo@intel.com>
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
2022-05-16 09:13:52 +08:00
Jade Cho
6ce6a7662e [GPU] Support implicit crop in input transposition. (#11496)
* [GPU] Support implicit crop in input transposition.

+ Make the crop in front of quantize implicit by changing output format to bfyx.
+ Use implicit concat after quantize nodes.

* Add unit test for implicit crop and concat.

+ remove unnecessary code.
2022-05-13 18:02:13 +09:00
Min, Byungil
5da3162cd1 [GPU] Bugfix for invalid LoadType for planar input (#11493)
+ Modified jitter Load for planar input of fused eltwise
+ Bugfix in jitter if planar input has LT_ALIGNED_READ

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-05-13 17:31:20 +09:00
Jan Iwaszkiewicz
d19b80a82d [SoftSign-9][PYTHON] Python API for SoftSign (#11677) 2022-05-13 09:55:14 +02:00
Jade Cho
de1db99383 [GPU] Replace out scale attribute of onednn pooling with binary mul (#11674) 2022-05-13 13:34:12 +09:00
yanlan song
b1ecd3ea8f only strip when device start with - (#11651)
Signed-off-by: fishbell <bell.song@intel.com>
2022-05-12 04:53:38 +00:00
Jan Iwaszkiewicz
62d17a070a [SoftSign-9] Shell, reference impl and decomposition for SoftSign-9 (#11546) 2022-05-11 16:44:15 +02:00
Katarzyna Mitrus
7851020cd3 [ONNX] Enable ONNX ROIAlign-16 (by ov::v9::ROIAlign) (#11633) 2022-05-11 11:43:15 +02:00
Wilson Seok
5456631517 Cleanup eltwise skipped testcase in gpu single layer test (#11525)
* cleanup eltwise skipped test cases in gpu plugin

* update comment clearly
2022-05-11 18:07:36 +09:00
hyunback kim
a7368d2e35 [GPU] Apply hsigmoid decomposition in oneDNN post-opt (#11603)
* [GPU] Apply hsigmoid decomposition in oneDNN post-opt

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-05-11 10:07:02 +09:00
Anuj Mittal
01a08c0a97 installing-openvino-yocto: update for 2022.1 (#11657)
Update the branch to be used for 2022.1 and remove reference to
-staticdev package which isn't generated anymore.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-05-10 18:00:42 +02:00
Kevin Putnam
bdf1d92660 1. Makes transition guide link absolute so it will work for API docs using the same theme (#11610)
2. Adds # to links that are broken in openvino_docs_get_started_get_started_demos.htm

Signed-off-by: intelkevinputnam <intelkevinputnam@github.com>

Co-authored-by: intelkevinputnam <intelkevinputnam@github.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-05-10 17:59:48 +02:00
Sebastian Golebiewski
4924e33255 Proofreading the Transition Guide (#11543)
* Update common_inference_pipeline.md

Minor stylistic and grammar corrections.

* Proofreading

Minor stylistic and grammar corrections.

* Update common_inference_pipeline.md

* Update common_inference_pipeline.md

* Proofreading

Minor corrections

* Update preprocessing.md

* Update intro.md

* Update deployment_migration.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update intro.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/common_inference_pipeline.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update deployment_migration.md

* Update intro.md

* Update common_inference_pipeline.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update intro.md

* Update deployment_migration.md

* Update docs/OV_Runtime_UG/migration_ov_2_0/deployment_migration.md

Co-authored-by: msmykx <101244365+msmykx-intel@users.noreply.github.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-05-10 17:59:12 +02:00
Paul Youngsoo Ahn
35b8972da3 [GPU] fix coverity control flow issue (#11585) (#11585)
- Coverity CID: 1488473
- Coverity CID: 1486904
2022-05-10 09:15:03 +09:00
Bo Liu
eddd31f58f Liubo/roi align 9 ov core cpu plugin (#11188)
* roi_align_9: ov_core, transformations, template_plugin

* roi_align_9: CPU Plugin

* keep only constructor with enums which is aligned with spec

* remove evaluate function for ROIAlign_9

* Add op check test for operation ROIAlign-9

* Apply suggestions from code review

* fix version name from 'v0' to 'v3' in transform part

* use common shape_infer function for v3 and v9

* remove'tf_' prefix for ROIAlign::AlignedMode to avoid misleading for models from different platforms
2022-05-10 08:14:37 +08:00
Karol Blaszczak
ad61593aa5 Update Convert_Model_From_TensorFlow.md (#11425) (#11591)
* Update Convert_Model_From_TensorFlow.md (#11425)

* Apply suggestions by Yuan

The changes are made in the port PR, so will be published with the 22.2 version.

Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-05-09 16:49:15 +02:00
Karol Blaszczak
ab7b9afc14 Docs: Add links to specific examples - port (#11618) (#11653)
* Docs: Add links to specific examples (#11618)

* Update docs/OV_Runtime_UG/integrate_with_your_application.md
* Add links to specific examples

This edit adds links to more example applications, making it easier for users to discover how to build an OpenVINO application around their specific model.

* Add links to MO installation and ONNX examples (#11617)

These edits help make it easier for a new user to find more information on how to convert ONNX models.

* Apply suggestions by Yuan

The changes are made in the port PR, so will be published with the 22.2 version.
Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-05-09 16:48:39 +02:00
Tomasz Dołbniak
53e62d2ac6 ONNX OperatorsBridge - non-static version (#11578) 2022-05-09 12:58:51 +00:00
Kelvin Choi
860a074fa9 [GPU] Add tuning cache v2 for yolo_v3 and yolo_v3_tiny (#11589) 2022-05-09 17:10:11 +09:00
Mateusz Bencer
99c04c0d6a additional validation of places (#11480) 2022-05-09 07:24:07 +00:00
yanlan song
35ad252003 Bell/rule out device (#11516)
* Fix batchability check of MAX_BATCH_SIZE

* Applied review comment

* new implementation

Signed-off-by: fishbell <bell.song@intel.com>

* enable device removing

Signed-off-by: fishbell <bell.song@intel.com>

* enable tests

Signed-off-by: fishbell <bell.song@intel.com>

* Update plugin.hpp

cpplint error

Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
2022-05-07 17:41:59 +08:00
Mateusz Bencer
c404e7f76d Handle onnx shape inference exceptions (#11451) 2022-05-07 01:04:07 +03:00
Mateusz Bencer
d60deae083 If shape inference - scalar and 1D union handle (#11499) 2022-05-07 00:55:36 +03:00
Tomasz Jankowski
e0e916b557 [ONNX] Extract onnx frontend tests from ov core unit tests (#11535) 2022-05-06 11:21:55 +02:00
Jan Iwaszkiewicz
6eb9c11d7e [PYTHON] Infer calls improvement (#11498) 2022-05-06 08:58:00 +02:00
yanlan song
912f40e74d stateful inferface impl for AUTO/HETERO (#11590)
* CPU for stateful model

Signed-off-by: fishbell <bell.song@intel.com>

* log

Signed-off-by: fishbell <bell.song@intel.com>

* hetero impl

Signed-off-by: fishbell <bell.song@intel.com>

* enable tests

Signed-off-by: fishbell <bell.song@intel.com>
2022-05-06 12:43:56 +08:00
guozhong wang
870455675c add cumulative_throughput for python (#11195) 2022-05-06 12:43:41 +08:00
guozhong wang
0a65f5f607 selectdevice returns MULTI:device in cumulative_throughput (#11367)
* selectdevice returns MULTI:device in cumulative_throughput

* load multi with throughput and disable cpu helper in cumulative

* disable cpu helper in cumulative_throughput

* add cumulative to bechmark_app help message

* modify benchmark_app.hpp clang-format
2022-05-06 12:42:59 +08:00
Andrew Kwangwoong Park
caccde6a82 [GPU] Add TC for MXNet-style NMS model(decrease_label_id) (#10624)
- Add TC for decrease_label_id=true to cover MXNet-style NMS models
- Fix segfault issue that occurs when data precision is fp16

Signed-off-by: Andrew Kwangwoong Park <andrew.kwangwoong.park@intel.com>
Signed-off-by: Andrew Park <andrew.park@intel.com>
2022-05-06 10:36:46 +09:00
Artur Kulikowski
1331eabbed Getting of python version depend on the used shell (#11614) 2022-05-05 23:42:54 +02:00
Katarzyna Mitrus
50287625d7 [Eye-9] Python API for Eye-9 (#11552) 2022-05-05 16:16:36 +02:00
mei, yang
2d0ffd8fe5 Specify GenerateProposals-9 (#11004) 2022-05-05 10:28:18 +02:00
Bo Liu
d560cf19a3 Specify ROIAlign-9 (#11067) 2022-05-05 10:27:58 +02:00
cecilia peng
e68613a2fc Specify MulticlassNonMaxSuppression-9 operation (#11083) 2022-05-05 10:27:47 +02:00
Mateusz Tabaka
68ef1555bc Enable test_loop_simple_precommit on GPU (#11625) 2022-05-05 09:55:02 +02:00
Tomasz Jankowski
0babf20bd2 Remove unused map initialization (#11621)
Details:
- Unused variable removed.
- Added pytest markers declaration to avoid useless warnings.
Tickets:
70158
2022-05-05 00:50:34 +02:00
Mateusz Tabaka
b92e10ff6e Fix setting default affinity property in cpuFuncTests (#11609)
CPU plugin sets default affinity to HYBRID_AWARE
if it's running on AlderLake, so we need to reflect that
in cpuFuncTests.
2022-05-04 12:42:43 +02:00
Tetiana Gubanova
7dd8fbd47e [GPU] Einsum with repeated labels and ellipsis support (#11615)
* Einsum test helper

* Einsum single layer tests

* Add Einsum decomposition with repeated labels and ellipsis support
to GPU transformations pipeline

Co-authored-by: Oleksii Khovan <okhovan@lohika.com>
2022-05-03 20:46:22 +09:00
opoluektov-lohika
1ed03bbe6b Fix conformance test runner handling of input directories (#11611)
Check first whether the path specified by --input_dirs is a directory.

Otherwise the argument is always treated as a .lst file,
and when it is a directory this silently fails,
causing the test runner to execute none of the intended tests (see the sketch after this entry).
2022-05-02 22:00:21 +09:00
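A small sketch of the fixed dispatch, assuming C++17 std::filesystem and an illustrative function name (the real runner's code differs): check for a directory first, and only then fall back to parsing the argument as a .lst file.

```cpp
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Illustrative: expand one --input_dirs argument into a list of directories.
std::vector<std::string> expand_input_arg(const std::string& arg) {
    std::vector<std::string> dirs;
    if (fs::is_directory(arg)) {
        dirs.push_back(arg);  // the argument is a directory: use it directly
    } else {
        // otherwise treat it as a .lst file with one directory path per line
        std::ifstream lst(arg);
        for (std::string line; std::getline(lst, line);) {
            if (!line.empty())
                dirs.push_back(line);
        }
    }
    return dirs;
}
```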
Artur Kulikowski
26aa197cb1 Expand ONNX functions in Graph ctor (#11599)
* Expand ONNX function in constructor of class Graph

* Add test to expanding function GreaterOrEqual inside If op
2022-05-02 10:12:41 +02:00
Jan Iwaszkiewicz
8d221d1e06 Add lower bound for wheel package (#11595) 2022-04-28 13:02:46 +02:00
Sebastian Golebiewski
844fbde328 Update installing-openvino-windows-header.md (#11221)
* Update installing-openvino-windows-header.md

* Update docs/install_guides/installing-openvino-windows-header.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-04-27 13:09:31 +02:00
Karol Blaszczak
f5781b1255 DOCS-cpu_language_review-port (#11537)
* DOCS-cpu_language_review

Co-Authored-By: Yuan Xu <yuan1.xu@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/CPU.md

* Update docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-04-27 13:07:38 +02:00
Mateusz Tabaka
54e5af95da Disable two cases from smoke_Conv_3D_FP32_fusingScaleShiftAndFakeQuantizePerChannel (#11536)
They sporadically fail in precommit.
Ticket: 84153
2022-04-20 14:34:46 +02:00
Mateusz Tabaka
e53f702f81 Handle negative axis in SimplifySecondInputOfReshape (#11524)
Fixes #11501
2022-04-19 12:20:21 +02:00
Karol Blaszczak
22398ac9cd sphinx google search (#11439) (#11506)
porting from 22.1 as per Andrey's request from 04.08
* sphinx google search

* fixes

* fixes

* fix version tabs

Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
2022-04-13 12:39:36 +02:00
Karol Blaszczak
1b5756a4d7 Docs benchmarktool python correction - port (#11505)
* DOCS-benchmarktool_python_correction

add info on tool installation

* Update docs/OV_Runtime_UG/Samples_Overview.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2022-04-13 11:14:39 +02:00
Karol Blaszczak
1e1735b022 Fixed operation names (#11447) (#11507)
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-04-13 11:14:24 +02:00
Alexandra Sidorova
9dffa706fb Added shell for Eye-9 (#11210) 2022-04-12 12:49:07 +02:00
Przemyslaw Wysocki
76610393a0 [PYTHON] Change dependabot schedule (#11497) 2022-04-12 11:17:29 +02:00
Katarzyna Mitrus
5bedbbe05d [ONNX] Add Scan operator to ONNX Frontend (#11053) 2022-04-12 10:35:15 +02:00
Karol Blaszczak
e1cd7bfc5b DOCS-review GPU article changes (#11477)
As per ticket #CVS-80053
int8 link removed
2022-04-12 10:08:10 +02:00
Phillip Schmidt
4da0941cd2 Update README.md (#8048)
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-04-06 06:51:09 +03:00
Luo Cheng
e72d32065c [FrontEnd] add assign, meshgrid, expand_v2 for paddle faster_rcnn model (#10627)
* assign, meshgrid op support

* fix review comments

* add copyright message

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-04-06 09:43:18 +08:00
Alexander Zhogov
9f01380558 Azure CI: Set master ref branch (#11475) 2022-04-05 22:23:07 +03:00
Ilya Lavrenov
c2703c81f6 Don't install nlohmann_json_schema_validator for samples (#11446)
* Try to improve gflags

* Try to improve gflags: part 2

* Tried to use dependencies on system

* Use nlohmann_jsonConfig from system

* Enabled nlohmann_json from system

* Improvements

* handle system gflags in developer package

* Simplifications

* Simplify dependency management

* Corrected package names

* Fixed subgraphsDumper configure stage

* Try to fix rhel8

* Try to fix macosx

* Fixed VPUX build

* Fixed aliasing issues

* Suppress some warnings

* export gflags when build it

* Fixed some LTO

* Try to fix Mac

* revert

* use gflags as private dependency

* Aligned targets in developer package

* Fixed frontends tests build on U20 with LTO

* Passed

* Don't use pkg_search_module(zlib ..) during cross-compilation

* Removed unused variables

* Fixed finding of zlib during cross-compilation

* CVS-83529

* Use nothreads_static

* Fixed python
2022-04-05 19:16:32 +00:00
Vladimir Paramuzov
050e2e518d [GPU] Replaced tensor dims usages with layout methods calls (#10984) 2022-04-05 21:49:34 +03:00
Irina Efode
01f530d443 API Conformance refactoring (#11421)
* ie_infer_request + ie_exec_net

* build

* build

* ov_compiled_model

* ov_infer_request

* ov_plugin

* ie_plugin

* build
2022-04-05 18:43:36 +03:00
Irina Efode
3f953b37c1 Reduce conformance execution time (#11416) 2022-04-05 18:40:33 +03:00
Jan Iwaszkiewicz
8e9fb18882 [PYTHON] Properties API, improvements of OVAny (#11389)
* WIP for POC

* Bulkwork

* Clean up current solution

* Extend Core methods

* Refactor Any python converts

* add submodule to runtime

* Add initial tests

* Fix copy-paste error

* Extend test

* Improve casting options and move common parts to utils

* Add newline

* Fix properties, remove class approach, move to all string Properties API

* Fix codestyle

* Fix pystyle

* Resolve TODOs, better align and extend python api

* Add extended test cases

* Fix properties in one of compile_model overload.

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>

* Update src/bindings/python/tests/test_inference_engine/test_properties.py

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>
2022-04-05 18:01:36 +03:00
Chenhu Wang
fca2595293 Specify NonMaxSuppression-9 (#10729)
* define_NMS-9_specification

* default value as true for soft-nms-suppressed-by-iou

* soft nms support

* comments apply
2022-04-05 15:58:02 +03:00
Karol Blaszczak
bff9769dd2 DOCS-transitionguide_name_correction (#11450)
OpenVINO™  2.0 => OpenVINO™ API 2.0
2022-04-05 13:33:28 +02:00
Przemyslaw Wysocki
55497a12d8 [PYTHON] Add method to_dtype() to Type class (#11433)
* Add Type method to_dtype()

* Clang formatting

* Add Type __init__ binding

* Add Type constructor

* Add numpy types classes to test
2022-04-05 14:00:02 +03:00
Ilya Churaev
86495ceb0f Update readme content (#11201)
* Update readme content

* Updated HW support matrix

* Update README.md

* Update doc links to nightly

* Update README.md

* Update README.md

* Update README.md

* Add logo

* Update README.md

* fixed links

* Update README.md
2022-04-05 13:09:22 +03:00
Andrey Noskov
3a36d90c11 [GNA] Moved PWL functional tests (#10731)
* Moving PWL to ngraph

* improving the running time of pwl_search; refactoring the pwl operation

* fixed errors & refactored code

* moved PWL op to GNA

* Update src/plugins/intel_gna/ops/pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/ops/reference/pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/ops/pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/transformations/transpose_to_pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/transformations/transpose_to_pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* fixed compilation error

* Update inference-engine/tests/unit/gna/ngraph/transformations/gna_pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* added some tests; changed algorithm of checking accuracy of pwl; refactoring

* added first and last segments; added fq and fixed errors

* fixed after review & rewrote some tests on ngraph

* removed debug logs & fixed code style check error

* s/ngraph_helper/ngraph_util

* removed TRANSFORMATIONS_API in PWLApproximation class declaration

* removed OPENVINO_API in Pwl class declaration

* replaced the deprecated version of evaluate() with a new one

* fixed some problems after reviewing

* fixed a problem where the function value at the left point of a segment is less than the function minimum

* corrected a value of the right point of last segments

* [GNA] Moved pwl func tests

* Deleted deprecated test

* s/OPENVINO_RTTI/OPENVINO_OP

* Deleted conflicted test file

* fixed after review

Co-authored-by: Dmitrii Khurtin <dmitrii.khurtin@intel.com>
Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
2022-04-05 11:43:24 +03:00
Maxim Gordeev
9c8a6aacb7 [IE Samples] Activating new parameter for compact mode (memory_reuse) in speech sample (#11405)
* [IE Samples] Activating new parameter for compact mode (memory_reuse) in speech sample

* changed format

* renamed the option to memory_reuse

* renamed the option
2022-04-05 11:31:05 +03:00
Andrey Noskov
a018298023 [GNA] Deleted duplicated config test (#10601) 2022-04-05 11:27:01 +03:00
Maksim Doronin
4edf85c928 [RT][VPU]: Fixes for dynamic models (#10826)
* DynamicShapeResolver is able to save information about dynamic outputs in order to pass it on in INFER_DYNAMIC_SHAPE mode. Previously, it propagated a fully dynamic output shape (although the ranks were equal), and dynamic Convolutions and Poolings were performed incorrectly. Now, in the case of a dynamic batch, DSR propagates only the dynamic batch, and Convolutions and Poolings are performed properly as a Loop of single-batch operations.
* Fixed the dynamicToStaticShapeTranspose transformation. There was a bug: transposition indices cannot be applied with Scatter, because its formula is not applicable here. Replaced with Gather.
i.e. the output shape of a Transpose with transposition indices [0,3,1,2] (NHWC [1, 224, 224, 3] -> NCHW [1, 3, 224, 224]) was calculated by ScatterElementsUpdate, so output_shape[transposition[i]] = input_shape[i], which yielded the wrong output_shape=[1, 224, 3, 224]. Vice versa, Gather does output_shape[i] = input_shape[transposition[i]], which yields the correct [1, 3, 224, 224] (see the sketch after this entry).
* MaxPool and AvgPool can be sliced into a loop in the case of a dynamic batch.
* The Convert stage for inputs is not inserted in the VPU model in the case of OV API 2.0. This did not cause a problem with non-dynamic functions, because the Graph Transformer has a pass to eliminate redundant converts (u8->f16, ~f16->f16~). In the case of dynamic inputs, yet another inserted Convert breaks the data<->shape relations.
2022-04-05 11:19:43 +03:00
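A self-contained C++ sketch of the two shape formulas contrasted in the second bullet above; the function name is illustrative, not the plugin's actual code.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Correct Gather-style rule: out[i] = in[perm[i]].
// The buggy Scatter-style rule out[perm[i]] = in[i] is its inverse.
std::vector<std::size_t> transpose_shape(const std::vector<std::size_t>& in,
                                         const std::vector<std::size_t>& perm) {
    std::vector<std::size_t> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = in[perm[i]];
    return out;
}

int main() {
    const std::vector<std::size_t> nhwc = {1, 224, 224, 3};
    const std::vector<std::size_t> perm = {0, 3, 1, 2};  // NHWC -> NCHW
    // Gather gives the expected NCHW shape...
    assert((transpose_shape(nhwc, perm) == std::vector<std::size_t>{1, 3, 224, 224}));
    // ...whereas the scatter formula would have produced {1, 224, 3, 224}.
    return 0;
}
```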
Egor Shulman
ed190374fd [CPU] Added support of 'Batched' memory type (#10909) 2022-04-05 09:20:19 +03:00
Ilya Lavrenov
4ad20fb53f Use system dependencies (#11419)
* Try to improve gflags

* Try to improve gflags: part 2

* Tried to use dependencies on system

* Use nlohmann_jsonConfig from system

* Enabled nlohmann_json from system

* Improvements

* handle system gflags in developer package

* Simplifications

* Simplify dependency management

* Corrected package names

* Fixed subgraphsDumper configure stage

* Try to fix rhel8

* Try to fix macosx

* Fixed VPUX build

* Fixed aliasing issues

* Suppress some warnings

* export gflags when build it

* Fixed some LTO

* Try to fix Mac

* revert

* use gflags as private dependency

* Aligned targets in developer package

* Fixed frontends tests build on U20 with LTO

* Passed

* Don't use pkg_search_module(zlib ..) during cross-compilation

* Removed unused variables

* Fixed finding of zlib during cross-compilation
2022-04-05 04:47:22 +03:00
Ilya Lavrenov
90366c2c60 Fixed detection of sample type c / cpp (#11444) 2022-04-05 01:57:10 +03:00
Vladislav Volkov
d654071b51 CPU Plugin refactoring: Transition from Intel MKL-DNN to oneDNN (#11023) 2022-04-05 01:10:53 +03:00
Yegor Kruglov
74638251e8 [MO] Support TensorFlow FusedBatchNorm with channel_first data_format (#11084)
* layout fix in FusedBatchNorm decomposition

* added tests
2022-04-05 00:13:58 +03:00
Anton Chetverikov
c1dc71ce28 [MO] Fix IndexError inside ScatterNDUpdate shape inference function (#11220)
* Restore inputs order in IR Reader

* Add WA to numpy ndarrays IndexError

* Add comments to code

* Add unit test
2022-04-04 23:59:24 +03:00
Svetlana Dolinina
c75bc65b83 added recursive run for transformation to fix fp16 IR with Interpolate inside If/Loop/TI (#10905)
* added recursive run for transformation to fix fp16 IR with Interpolate inside If

* added test for interpolate inside If

* remove useless variable

* fixed transformaion for divide

* fix code style

* commit auto change

* review fix

* add test for recursive call of divide marks

* removed empty line
2022-04-04 23:33:59 +03:00
Ilya Churaev
60f9f3dc92 Enabled LTO for static CPU (#11426)
* Enabled LTO for static CPU

* Update CMakeLists.txt

* Update CMakeLists.txt
2022-04-04 20:22:22 +03:00
Maxim Shevtsov
500d36e1c0 cherry-picking opt guide changes from the release branch (#11430) 2022-04-04 19:41:17 +03:00
Mikhail Letavin
417d75d80b [GPU] Fix allocation for cached USM remote blobs (#11304) 2022-04-04 18:16:53 +03:00
Fedor Zharinov
787610b0db -l option is replaced with -extensions (#10878) 2022-04-04 16:08:38 +03:00
Elizaveta Lobanova
9b4e8f5b59 [GNA] Fixed cascade concats binding (#11326) 2022-04-04 15:56:13 +03:00
Karol Blaszczak
da8388e263 [DOCS] polish autodevice article (#11171) (#11427)
the article has changed considerably and its language was impacted in the process. Here are some corrections.
2022-04-04 13:12:18 +02:00
Gleb Kazantaev
a52092deb0 Enable FQ fusions in MOC (#11269)
* Enable FQ fusions in MOC

* Fix codestyle
2022-04-04 13:34:01 +03:00
Edward Shogulin
542a374c40 [LPT] Introduce new quantization mode attribute (#11380) 2022-04-04 13:27:03 +03:00
Ilya Churaev
b9ba0bb40c Removed OV_NEW_API (#11082)
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-04-04 13:07:12 +03:00
Dmitry Pigasin
80e7857eca [Python Speech Sample] Change argument format (#11012)
* Change `-i` argument format

* Change `-sf` argument format

* Change `-o` and `-r` argument format

* Fix saving file with multiple utterances

* Fix flake8 D415

* fix scale factor for imported models
2022-04-04 13:03:39 +03:00
Roman Kazantsev
9dee25fa79 [MO] Support TensorFlow Grouped Conv2DBackpropInput (#11420)
* [MO] Support TensorFlow Grouped Conv2DBackpropInput

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Correct computation of group number for ConvBackpropInput operation

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix get_conv_backprop_groups function

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Add unit-tests for Deconvolution shape inference

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-04-04 12:30:31 +03:00
Chenhu Wang
60521a92c9 [CPU] Fixed NMS implementation on APL targets (#10649) 2022-04-04 10:52:22 +03:00
Maxim Andronov
65a182aaea [CPU] Extract weight cache to executable network (#11118) 2022-04-04 10:47:58 +03:00
Vladimir Paramuzov
afdaa7cf89 [GPU] Align permute axis format with IE (#11379) 2022-04-04 10:28:51 +03:00
Ilya Lavrenov
d879e34363 Tbb: download only if system libraries are not found (#11415)
* Download custom TBB on demand

* Download TBBBind on demand

* Fixed install steps

* FIxes
2022-04-03 19:55:54 +03:00
Edward Shogulin
5d821453ae [LPT] Introduce new granularity attribute instead of OperationPerTensorQuantizationRestriction (#11330) 2022-04-03 19:35:04 +03:00
Ilya Lavrenov
29fb8c79b1 Don't use template plugin unconditionally (#11409) 2022-04-02 11:40:45 +03:00
Ilya Lavrenov
4fcc18c00e Tbb 2018 and older usage (#11411)
* fixed TBB

* Fixed compilation with old TBBs

* Fixed installation for custom provided TBB
2022-04-02 11:11:13 +03:00
Ilya Znamenskiy
1e4a1b2b4a [GPU] Klocwork issue 57997 fix (#10956) 2022-04-02 11:06:56 +03:00
Roman Kazantsev
1a288c2e99 Enable MO unit-tests but bom tests (#11399)
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-04-02 10:58:23 +03:00
Ivan Mikhalev
6105ea3902 [CPU] [DEBUG CAPS] Revert DNNL_VERBOSE (#11410)
Compilation with ENABLE_CPU_DEBUG_CAPS was fixed.
Prior to this change it failed due to undefined dnnl::impl::md2dim_str
(since DNNL_VERBOSE was disabled in the scope of PR #11244).
2022-04-02 10:57:29 +03:00
Andrey Zaytsev
415daecc26 Cherry-pick Feature/azaytsev/doc fixes 2022 1 1 (#11388) (#11407)
* Removed a redundant image

* Fixed ops specifications and other issues

* converted html links to anchor links

* converted html links to anchor links

* Fixed a link

* Fixed a link

* Changed anchor links according to dev review
# Conflicts:
#	docs/OV_Runtime_UG/Operations_specifications.md
2022-04-01 19:53:58 +03:00
Ilya Lavrenov
8ae7c9f2cc Disabled TBBBind usage for oneTBB (#11386) 2022-04-01 19:09:06 +03:00
Maxim Gordeev
2388f3b976 Updated docs for python's version of hello_reshape_ssd (#11401) 2022-04-01 18:21:40 +03:00
Vladislav Golubev
a02b3f4995 [Transformation] MarkPrecisionSensitiveDivides extending to mark fp32 divides (#11391)
* MarkPrecisionSensitiveDivides: fp32 divides marking enabled

* ConvertDivide: added a negative test-case with fp32 divide on precision sensitive subgraph
2022-04-01 17:31:23 +03:00
Maksim Derbasov
56df3962e3 Fix for warnings spotted by clang compiler (#11384) 2022-04-01 16:10:51 +03:00
Maxim Andronov
3d92c8c4c7 [CPU] Avoid inserting reorder after RNN in native order case (#10799) 2022-04-01 16:02:50 +03:00
Nikita Semaev
fca159293d Fix Bucketize Conformance tests for Template plugin (#11029)
* Right fill in the values of the inputs

* Using create_and_fill_tensor_unique_sequence() instead of create_and_fill_tensor()

* Fixing a problem with a missing parameter when calling the create_and_fill_tensor method

* Fix Bucketize Conformance tests inputs generation for Template plugin

* Correct filling of the first port (data)
2022-04-01 15:22:45 +03:00
Andrey Zaytsev
cad355a03e Docs labels adjustment (#11227) (#11294)
* Adjusted documentation labels

* Renamed images

* fix doc tests

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
# Conflicts:
#	docs/IE_PLUGIN_DG/ExecutableNetwork.md
2022-04-01 15:06:55 +03:00
Anton Grishin
7efc85063b [GNA] Add GRUCell/GRUSequence/LSTMSequence support (#11333)
* Add grucell/gruseq/lstmseq unrolling

Add tests

* remove bidirectional decomposition

* completely remove bidirectional_sequences_decomposition
2022-04-01 14:16:11 +03:00
Nikita Semaev
dc55f8bb5a Correct the order of passing arguments to the InputGenerateData constructor (Fix Round, Ceiling Conformance tests for Template plugin) (#11099)
* Correct the order of passing arguments to the InputGenerateData constructor

* Full range correction for random numbers

* Refactoring the argument sequence of the InputGenerateData class constructor

* A small imperfection

* Rollback changes that are related to range
2022-04-01 13:42:10 +03:00
Alexey Lebedev
6eaa15745a [PYTHON API] Tensor.data property for low precisions + packing (#11131)
* rebase old branch with master

* Fix doc style

* fix test

* update tests

* Add missed param

* Rewrite docstring for tensor and refactor set_input_tensors test

* update python exclusives

* keep compatibility

* remove notes about slices

* fix code style

* Fix code style
2022-04-01 12:04:04 +03:00
Karol Blaszczak
701d75eafa [DOCS]continue_language_review-transitionguide (#11177)
PR for 22.1 made, now porting to release...
some discrepancy between this version and the 22.1 branch seems to exist, so I adjusted the conflicting link to avoid build check errors...

the overview has been merged, the remaining articles are reviewed here
2022-04-01 17:03:40 +08:00
Ilya Churaev
80739700ff Added clone method for ov::Model (#11390)
* Added clone method for ov::Model

* Changed python API
2022-04-01 10:52:31 +03:00
Ilya Churaev
8ab5dbade0 Revert "Add constant folding to hetero to avoid dynamism on GPU (#10572)" (#11370)
This reverts commit 5b18677f1b.
2022-04-01 10:16:14 +03:00
Bo Liu
070f27a089 Paddle FasterRCNN Ops Conversion: roi_align, strided_slice, where (#10893)
* Paddle FasterRCNN Ops Conversion: roi_align, strided_slice, where

* add check for 'aligned' feature of 'roi_align' op; use common function for idx_node in 'strided_slice' op

* Apply suggestions from code review

* use common function for strided_slice and slice, OP_CHECK for 'where' op conversion

* Apply suggestions from code review
2022-04-01 14:37:28 +08:00
yanlan song
4057e408d8 Bell/shape auto (#11284)
* Fix batchability check of MAX_BATCH_SIZE

* Applied review comment

* clonenetwork in auto

Signed-off-by: fishbell <bell.song@intel.com>

* clone in correct way

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: Taylor Yeonbok Lee <taylor.lee@intel.com>
2022-04-01 11:09:22 +08:00
Mikhail Nosov
e52bd441e2 Frontend exception safety (#11368)
* Frontend exception safety

Every call to the frontend's API (except Places) can throw an exception. If, during exception handling, the FrontEndManager is destroyed and calls 'dlclose' for the plugin, the call stack will be corrupted and a crash will occur.

The solution is to wrap the plugins' calls with try/catch and throw a new exception in the 'openvino' context (see the sketch after this entry).

TODO: currently "Place" objects don't have 'actual' wrappers, so an exception in 'place' objects can potentially cause such a crash (if the exception handler destroys the FrontEndManager). The workaround for users would be to try/catch any calls of the Place API on their side.
We're not expecting users to use the Place API directly, so this workaround looks acceptable.

* Add check for exception message

* Keep type of frontend exception during rethrow

* IR FE tests: don't expect InferenceEngine::exception, as it will not be propagated as-is by FrontEndManager
2022-03-31 22:23:40 +03:00
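A hedged sketch of the wrapping idea with illustrative names (not the actual OpenVINO frontend API): plugin exceptions are caught at the boundary and re-thrown as an exception type owned by the caller's module, so stack unwinding never has to walk frames of a shared library that dlclose() may already have unloaded.

```cpp
#include <stdexcept>
#include <string>
#include <utility>

// Illustrative exception type owned by the 'openvino' side of the boundary.
struct FrontEndException : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Wrap any plugin call: copy the message out of the plugin's exception and
// re-throw an exception constructed in this module instead.
template <typename PluginCall>
auto wrap_plugin_call(PluginCall&& call) -> decltype(call()) {
    try {
        return std::forward<PluginCall>(call)();
    } catch (const std::exception& e) {
        throw FrontEndException(std::string("FrontEnd API failed: ") + e.what());
    } catch (...) {
        throw FrontEndException("FrontEnd API failed with unknown error");
    }
}
```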
Anastasia Kuporosova
dd54cb9c17 [Python API] Remove old api class from the new api (#10470)
* [Python API] Remove old api class from the new api

* start working on refactoring of OVAny

* fix tests

* fix code-style

* remove tuple test

* fix test

* fix omz hash

* one more overload

* fix pyfloat

* move from_ov_any to utils

* code-style

* move function from common to utils
2022-03-31 21:57:05 +03:00
Elizaveta Lobanova
d3060d4bcc [GNA] Fixed handling of unaligned crop layer (#11316) 2022-03-31 20:03:51 +03:00
Vladimir Paramuzov
1cb254307e [GPU] Gather params update (#11369) 2022-03-31 19:46:38 +03:00
Ilya Lavrenov
3c724a1dee Build with system TBB (#11244)
* Build with system TBB

* Fixes

* Check whether system TBB is available

* Try to fix ONNX Runtime build with system TBB

* Test

* Fixed compilation of threading.cpp

* Fixed unset of cache dirs

* Limit search paths of TBB

* Try to enable pip packages with custom TBB

* Fix for TBB 2021.2

* Install only needed TBB libraries

* Install TBB from system to pip package

* Reverted usage of TBBROOT

* Fixed oneTBB case

* Try to fix Android

* Escape some paths

* Added samples path

* Fixed TBBBind usage for case of system TBB
2022-03-31 18:05:59 +03:00
Ekaterina Aidova
d99104cf55 [OMZ]: update submodule (#11305) 2022-03-31 17:53:41 +03:00
Alina Kladieva
dc83410cd7 Revert "Skip sporadic GPU canInferOnUserQueue test case (#11310)" (#11362)
This reverts commit 458378e9e7.
2022-03-31 17:38:03 +03:00
Vladimir Paramuzov
15b4553eaf [GPU] Align OneHot primitive parameters with ngraph (#11361) 2022-03-31 17:13:49 +03:00
Alexey Lebedev
1efb0a034f [PYTHON API] release GIL (#10810)
* AsyncInferQueue nogil update + refactoring

* nogil in compiled model

* nogil in Core

* fix refactoring

* nogil in infer_request

* add tests

* Fix code style

* update test with incrementing reference counting

* try to fix code style

* fix code style

* release gil in reshape and preprocessing

* make args optional in test

* fix code style

* add docs about GIL

* try to link doc string with docs

* Apply suggestions from code review

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* Fix docs

* docs refactoring

* Apply review comments

* Fix code style

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2022-03-31 16:12:48 +03:00
Maxim Andronov
1d247815be Don't execute reference::strided_slice if input/output tensor is empty (#11337) 2022-03-31 15:42:10 +03:00
Alexandra Sidorova
9185f03e77 Added specification for Eye-9 (#11104)
* Added specification for EyeLike-9

* Update docs/ops/generation/EyeLike_9.md

* removed batch from TF

* minor fix

* Applied comment by Anton

* Added new example with dynamic output, added corner case

* Fixed corner case description

* Rename matrix

* applied comments by Yuan

* Added diag_idx as input, minor fixes, renaming

* added support of batch_shape from TF

Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2022-03-31 14:46:55 +03:00
Smirnov Grigorii
a87e8f7880 moved TransformationsTestsF method's definitions from .hpp to .cpp (#11359)
* moved

* fix style
2022-03-31 14:07:41 +03:00
Pavel Esir
16a5962698 [MO] pad fusing fix (#10453)
* pad fusing fix

* added unit-tests for pad fusing fix

* fixed port reconnecting

* Update tools/mo/openvino/tools/mo/middle/passes/fusing/mark_unfused_nodes.py

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2022-03-31 14:07:07 +03:00
Smirnov Grigorii
763d522759 add graph_comparator tests (#11360) 2022-03-31 14:06:36 +03:00
Mikhail Ryzhov
a9853d2790 [GNA] Additional tests on compact mode (#10969)
* Moved InitGNADevice to plugin constructor

* Added tests for ordering layers

* Added allocator header

* Fixed fused_iterator header

* protected GNAMemRequestsQueue properties

* Fixed unit test names

* Fixed compile issue

* Fixed default initialization

* Fixed deprecated matchers

* Fixed pwl deprecated tests

* Added page alignment

* Reset gnadevice in the tests

* Update src/plugins/intel_gna/gna_fused_iterator.hpp

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>

* Revert "Update src/plugins/intel_gna/gna_fused_iterator.hpp"

This reverts commit d624bdadaf.

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>
2022-03-31 13:56:25 +03:00
Elizaveta Lobanova
3578ee9c3f [GNA] Remove extra FQ layers from the final network (#10599)
* [GNA] Fuse all FakeQuantize layers with their previous layers

* [GNA] Fuse FQ with previous layer if it's not required for precision change

* [GNA] Fixed MatMulOverloadCorrectionTest
2022-03-31 13:21:27 +03:00
Sergey Shlyapnikov
79e3272237 [GPU] Update eltwise calc_output_layout function and prevent output layouts invalidation after adding reorders for weights (#11073) 2022-03-31 13:15:47 +03:00
Sergey Shlyapnikov
f2af1ef88a [GPU] Update memory location to __local in GPU Detection Output (#11209)
* [GPU] Update memory location to __local in GPU Detection Output

* Replace hardcoded stack size with JIT constant
2022-03-31 13:13:58 +03:00
Egor Shulman
23476c8eee CC for LPT transformation call in CPU plugin (#11341)
* Use CC in LPT

* Applied comment
2022-03-31 12:35:27 +03:00
Przemyslaw Wysocki
f45ca99de6 [PYTHON] Add ov::clone_function bindings (#11331)
* Add clone_function bindings

* Add clone_function binding tests

* Debugging changes

* Minor changes

* Code style

* Add an assert

* Update src/bindings/python/tests/test_ngraph/test_basic.py

* Update src/bindings/python/src/pyopenvino/graph/model.cpp

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2022-03-31 12:14:51 +03:00
Alina Kladieva
7d0750fa2a Add SKIP_IF_CURRENT_TEST_IS_DISABLED macros for needed cases (#11335) 2022-03-31 11:12:19 +03:00
Ilya Lavrenov
c795382b1f Configurable OpenCL usage in BA (#11344) 2022-03-31 10:49:40 +03:00
Maxim Gordeev
fad66d8442 [IE Samples] New command line parameters format for speech sample (#11051)
* New command line parameters format for speech sample

* fixed notes

* changed format for scale factor

* changed format for scale factor in tests

* added more variants where the name is directly specified for i/o/r, as is done for sf

* removed nthreads flag

* fixed notes

* changed output params

* updated tests with new format

Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2022-03-31 10:09:31 +03:00
Daria Mityagina
5c917cfaaa [ICV][XLink] - port XLink changes from mdk (#10212)
-76384
Port changes from MDK
2022-03-31 09:50:26 +03:00
Sergey Shlyapnikov
24a74672f6 [GPU] Fix remote blobs tests (#11349) 2022-03-31 09:04:46 +03:00
Vladimir Paramuzov
f5f93cfbeb [GPU] Strided slice primitive params update (#11339) 2022-03-31 08:54:05 +03:00
Ilya Churaev
3e58ccbce7 Fixed evaluate for ov::Tensor (#11354)
* Fixed evaluate for ov::Tensor

* Fixed old ops with EvaluationContext
2022-03-31 07:47:49 +03:00
Anton Pankratov
78285f9db4 Added ov::NotImplemented Exception (#11124)
* Added ov::NotImplemented Exception

* add ie namespace

* Try fix
2022-03-31 07:36:13 +03:00
Oleg Pipikin
88e20199f0 Fix query network for hetero plugin (#10556)
* Fix query network for hetero plugin

* Apply comments

* Fix1

* Add tests

* Apply comments 2

* Apply comments 3
2022-03-31 07:24:46 +03:00
Anastasia Kuporosova
d107cec39f [Python API] Update names in Model class (#11348)
* [Python API] Update names in Model class

* fix code-style

* remove from_capsule in the new API
2022-03-31 00:33:57 +03:00
Anastasia Kuporosova
4c7050f6a9 [Python API] Improve configuration files (#10960)
* [Python API] Improve configuration files

* fix config files

* update setup.cfg + change quotes

* move all codestyle checks to py_checks job

* update requirements_test.txt

* fix codestyle according to flake-docstring

* fix

* fix mypy

* apply comments
2022-03-30 20:26:36 +03:00
Oleg Pipikin
be6db5d69a Fix for str_to_container if string value has whitespaces (#10224)
* Fix for str_to_container if string value has whitespaces

* Add test

* Add trim for leading and trailing whitespaces

* Apply comments

* Apply comments 2

* Apply comments 3
2022-03-30 19:48:29 +03:00
Alexey Lebedev
c8720f122d [PYTHON API] lifetime test for CompiledModel and extension (#11120)
* Add test for lifetime and extensions

* Remove ExtendedModel
2022-03-30 19:43:07 +03:00
Mateusz Bencer
7a0d85a067 remove resize asserts (#11234) 2022-03-30 19:13:55 +03:00
Mikhail Nosov
a635150b9d [IE Common] Enable explicit TBlob declaration in all compilers (#11183)
* Enable explicit TBlob declaration in all compilers

This fixes problems when linking gcc compiled IE with clang compiled
applications.

Previous to this change, only clang compilers would consider TBlob<T>
templated types as declared externally. When *declared* explictly (with
the `extern template` syntax), the C++ spec says
that any inline methods of the templated class (such as TBlob<T>
constructors) should be ignored in favor of the externally instantiated
version of that templated type:

    "An explicit instantiation declaration (an extern template) skips
    implicit instantiation step: the code that would otherwise cause an
    implicit instantiation instead uses the explicit instantiation
    definition provided elsewhere (resulting in link errors if no such
    instantiation exists)."

However, when IE is compiled with gcc, it does not see the explicit
`extern template` declarations of TBlob<T> (due to the `#ifdef
__clang__` guards in `ie_blob.h`). As an end result, presumably due to
link-time-optimizations during IE library compilation(?), none of the
TBlob<T> implementations are actually included in the IE dynamic
libraries.
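
For reference, a condensed, self-contained illustration of the mechanism (a toy class, not the actual ie_blob.h contents):

    #include <iostream>

    template <typename T>
    class TBlob {
    public:
        TBlob() { std::cout << "TBlob constructed\n"; }  // inline member
    };

    // Explicit instantiation *declaration*: every translation unit that sees
    // this skips implicit instantiation of TBlob<float> and expects the
    // definition to be provided elsewhere (a link error results otherwise).
    extern template class TBlob<float>;

    // Explicit instantiation *definition*: placed in exactly one translation
    // unit (inside the library), it emits the code everyone else links against.
    template class TBlob<float>;

    int main() { TBlob<float> blob; }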

* Fix warnings for windows

* Fix typo
2022-03-30 18:56:49 +03:00
Ilya Sharikov
1906c27c2d Update mo_tool parameter for converter.py (#11319)
* Update mo_tool parameter for converter.py

* Hot fix
2022-03-30 18:32:53 +03:00
Egor Shulman
e507b630eb Use CC in CPUSpecificTransform (#11231) 2022-03-30 18:26:35 +03:00
Gleb Kazantaev
8317493e65 Update Accuracy Check inside Func tests (#11314) 2022-03-30 17:44:03 +03:00
Alexey Lebedev
ee31b648d1 [docs] port from release branch (#11309)
* save work

* Add common snippet

* update ie pipeline with python snippets

* ov_common_snippet

* Python snippets for graph construction

* Fix docs

* Add missed old api snippets

* Fix names

* Fix markers

* Fix methods call
2022-03-30 17:03:29 +03:00
Sergey Shlyapnikov
5dc3da926c [GPU] Add fusion through feature (#9674)
* [GPU] Add fuse through feature

* Apply review comments


Tasks marked as not passed:
* https://dev.azure.com/openvinoci/dldt/_build/results?buildId=313769&view=results
* https://dev.azure.com/openvinoci/dldt/_build/results?buildId=313766&view=results
2022-03-30 16:43:32 +03:00
Alexander Kozlov
4b4bd7399c Fixed conflicts (#11332) 2022-03-30 16:10:03 +03:00
Anastasia Kuporosova
9fa5150d71 [Python API][Docs] Fix references for several classes (#11251) 2022-03-30 13:29:30 +03:00
Ilya Lavrenov
932f8bf767 Install 97-myriad-usbboot.rules to install_dependencies (#11301) 2022-03-30 13:03:42 +03:00
Ilya Churaev
17f8f7ec25 Fixed typo in exception message (#11322) 2022-03-30 12:45:09 +03:00
Vladimir Paramuzov
fccd5d4445 [GPU] ShapeOf op (#10983) 2022-03-30 12:27:04 +03:00
Ivan Novoselov
1beb7158d5 [Snippets] Develop Snippets test infrastructure (#10605) 2022-03-30 12:21:19 +03:00
Sergey Shlyapnikov
cd703580b6 [GPU] Host time optimizations for in order queue (#11255)
* [GPU] Host time optimizations

* Fix failed fusings_gpu/permute_eltwise_loop.basic/* tests
2022-03-30 10:53:53 +03:00
Ivan Tikhonov
f13b6252e9 Fix insertion of tensor names after UnrollTensorIterator transformation (#11276)
* revert previous version of convert_seq_to_ti transformation

* try to check that outputs of TI are connected to Result nodes

* add unit tests

* fix codestyle

* fix Memory tests

* revert local change

* revert local change

* replace duplicated code with lambda
2022-03-30 10:26:04 +03:00
Maxim Andronov
72f802f282 [CPU] Fix Parameter -> Result model for dynamic case (#10764) 2022-03-30 10:20:52 +03:00
Anton Pankratov
614a6a3457 [CPU] Graphs are created in compiled_model constructor (#10872) 2022-03-30 10:02:12 +03:00
Vladimir Gavrilov
e7b35c3b00 nGraph reference for the operation RDFT. (#11175)
* Written nGraph reference for the operation RDFT.

* Used std::reverse() algorithm to simplify the function reverse_shape() from fft_common.cpp.

* Added assert into the function offset_from_coords_and_strides().

* Deleted redundant variable.

* Deleted redundant functions from the reference implementation of (I)DFT.

* Renamed the method reverse_shape() in fft_common.hpp.

* Code style fix.
2022-03-30 09:38:05 +03:00
Alexander Zhogov
1386f52dd6 Azure CI: Disable Model Optimizer UT 2022-03-30 09:01:05 +03:00
Ilya Lavrenov
9f923ba39f Install only proper GNA library files (#11243) 2022-03-30 08:58:31 +03:00
yanlan song
f6e5ec9684 return device infer request in passthrough mode (#11253)
Signed-off-by: fishbell <bell.song@intel.com>
2022-03-30 09:21:13 +08:00
Ilya Lavrenov
4145291e84 Added f16 tests for tensor (#11273) 2022-03-30 01:11:07 +03:00
Anton Chetverikov
5d719cbc7b [MO] Remove _output_shape nodes attribute while loading TF meta graph (#11078)
* Remove _output_shape attribute while loading meta_graph

* Update tools/mo/openvino/tools/mo/front/tf/loader.py

Simplify code

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

* Add comment about the problem

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2022-03-29 21:27:00 +03:00
Gleb Kazantaev
01b701349d Move tensor utils to common utils (#11306) 2022-03-29 21:08:55 +03:00
Alina Kladieva
458378e9e7 Skip sporadic GPU canInferOnUserQueue test case (#11310) 2022-03-29 19:48:37 +03:00
Ilya Lavrenov
fb99fd1d2f Try to remove MO install rules (#11208)
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2022-03-29 19:24:30 +03:00
Dmitrii Khurtin
4b0417018a [GNA] Fixed clang 13 build issue (#11238) 2022-03-29 19:06:50 +03:00
Vitaliy Urusovskij
d261da4820 Replace get_os_name() with get_os_type() in get_lib_path() (#11300)
In case of ubuntu system `get_os_name()` returns "ubuntu",
but `get_os_type()` returns "Linux" which is expected by tests
2022-03-29 18:17:38 +03:00
Evgenya Stepyreva
ed030e113e StridedSlice default shape inference (#11292) 2022-03-29 16:42:52 +03:00
Gleb Kazantaev
5b0a1fe7bb Move FunctionsComparator to common utils (#11277)
* Move FunctionsComparator to common utils

* Fix includes
2022-03-29 14:51:17 +03:00
Jade Cho
070c47ec09 [GPU] fix a bug of onednn sum post-op (#11254)
+ Add a unit test for this.
2022-03-29 20:35:09 +09:00
Mikhail Letavin
23732417c5 [GPU] Propagate output flag to mutable_data deps to ensure correct event creation (#11021) (#11021) 2022-03-29 13:08:58 +03:00
Maxim Andronov
d764fe7d27 [CPU] Prohibit int8 desc convolution creation in case s8 precision on data input (#10730) 2022-03-29 09:44:06 +00:00
Gleb Kazantaev
866f006a83 Transformations Python API (#10971)
* Keep changes

* Update tests

* Keep changes

* Cleanup

* Add predicates support; new pattern ops; new tests

* support for public passes; added tests

* Fix compilation warning

* Fix code style

* Added docstrings; code cleanup

* Update python API tests

* Fix build on Windows

* Revert back pass registration logic

* Fix flake8 errors

* Update docstrings; fix utils.hpp

* Cleanup

* Cleanup

* Fix flake errors

* Fix mypy

* Skip mypy for passes
2022-03-29 11:41:23 +03:00
Bo Liu
02c60c76ab Paddle FasterRCNN Ops Conversion: greater_than, less_than, gather, floor (#9657)
* Paddle FasterRCNN Ops Conversion: greater_than, less_than, gather, floor

* Apply suggestions from code review

* fix 'gather' testcase failure issue on CI

* implement 'axis' input for 'Gather' Op conversion with testcase comment; use common function for all elementwise Ops
2022-03-29 16:20:37 +08:00
Alexey Lebedev
8f88889876 [docs] python snippets for devices master (#11176)
* Update CPU docs

* update GPU docs

* update with sphinxtab

* Fix docs

* Add preprocessig snippet

* Fix path

Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2022-03-29 11:01:53 +03:00
Anton Pankratov
bc9f140bb4 Fixed element type destruction in global scope (#11211)
* Fixed element type destruction in global scope

* Fixed misprint
2022-03-29 10:21:41 +03:00
Nikolay Tyukaev
8aa98ce0d5 fix wildcard sphinxdirective (#11264) 2022-03-28 20:56:56 +03:00
Nikolay Tyukaev
da9596bade cvs-80083 (#11281) 2022-03-28 20:53:21 +03:00
Dmitry Belyakin
30ec7366bb [OMZ]: update submodule (#11279) 2022-03-28 19:48:53 +03:00
Dmitrii Khurtin
a925ec6a29 changed symlink order of libgna (#11267) 2022-03-28 19:48:08 +03:00
Ilya Lavrenov
19d0e5ba52 CMAKE: IE_VERSION => OpenVINO_VERSION (#11242)
* IE_VERSION => OpenVINO_VERSION

* Reverted installation of python unconditionally
2022-03-28 19:32:21 +03:00
Karol Blaszczak
8b591c141e Update installing-openvino-overview.md (#11271) 2022-03-28 15:15:32 +00:00
Dmitry Belyakin
27741d316e [OMZ]: update submodule (#11239)
* [OMZ]: update submodule

* bump omz ver
2022-03-28 15:45:35 +03:00
Valentin Dymchishin
52937967bb Add dynamism in memory tests (API 2) (#10589) 2022-03-28 12:51:53 +03:00
Irina Efode
76e2f2697f [CONFORMANCE] Fix run of Conformance tests (#11225) 2022-03-28 12:29:27 +03:00
Ilya Churaev
10698abc29 Revert vpu custom kernel master (#11228)
* Added original VPU custom kernel doc

* Moved to new API

* Added links from introduction

* Fixed intro
2022-03-28 12:18:29 +03:00
RavirajSitaram
30884a8161 Fix -Winfinite-recursion error reported by compiler (#11247)
Signed-off-by: Raviraj P Sitaram <raviraj.p.sitaram@intel.com>
2022-03-28 11:54:11 +03:00
Nikolay Tyukaev
aded1a2c70 a bunch of doc fixes (#11230) (#11237) 2022-03-25 20:22:43 +03:00
Evgenya Stepyreva
2d7f46b95a Update Divide_1.md (#11232) 2022-03-25 15:41:55 +00:00
Helena Kloosterman
05f97f2bb5 Update two paragraphs in performance hints docs (#11223) 2022-03-25 16:07:46 +03:00
Ilya Churaev
4dc0d6e711 Fixed comments after #11155 (#11202)
* Fixed comments after #11155

* Add information about plugin option
2022-03-25 16:02:18 +03:00
Smirnov Grigorii
a2705b1fed f64 to f32 and 0.0. to 0.1 (#11127) 2022-03-25 15:10:23 +03:00
Andrey Somsikov
3442e90144 Fix setupvars.bat patching (#11160)
* Fix setupvars.bat patching

setupvars.bat should not be patched for regular Debug and Release
configurations.

* Use STREQUAL for cmake string comparison
2022-03-25 14:15:04 +03:00
Eddy Kim
200026f28b Missing backslashes right after mo (#11216) 2022-03-25 13:39:33 +03:00
Nikita Malinin
5e3e8d6084 [POT] Update POT with the ResultRenaming flag (#10989)
* Update POT with the ResultRenaming flag

* Update flag

* Update gold
2022-03-25 13:22:20 +03:00
Mikhail Nosov
8bfde58fd9 [Core] Improve performance for 'ov::Model::add_output' (#11052)
* Improve performance for 'ov::Model::add_output'

On the first call of `add_output(tensor_name)`, all available tensor names are cached.
Subsequent calls take nodes from the cache, which significantly reduces complexity.
The cache is invalidated if the topological cache is not valid or the cache points to an incorrect output (the node no longer has that tensor name).

The same caching is done for 'add_output(op_name, output_index)'; the idea is sketched below.
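
A hypothetical sketch of the caching idea in plain C++ (not the real ov::Model internals; as noted above, the real code also invalidates the cache when the topological order changes):

    #include <iostream>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Node {
        std::string name;
        std::vector<std::string> tensor_names;
    };

    struct Model {
        std::vector<Node> ops;
        std::unordered_map<std::string, const Node*> cache;  // tensor name -> node

        const Node* add_output(const std::string& tensor_name) {
            if (cache.empty()) {                    // first call: build the cache, O(N)
                for (const auto& op : ops)
                    for (const auto& t : op.tensor_names)
                        cache.emplace(t, &op);
            }
            auto it = cache.find(tensor_name);      // subsequent calls: O(1) lookup
            return it == cache.end() ? nullptr : it->second;
        }
    };

    int main() {
        Model m{{{"relu", {"relu:0"}}, {"split", {"split:0", "split:1"}}}, {}};
        std::cout << m.add_output("split:1")->name << "\n";  // prints "split"
    }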

Tests:
- Verifies that adding outputs to all nodes has linear complexity O(N), not O(N^2)
- Verifies cache invalidation scenarios

* Fix python tests

* Update topological cache after add_output(Output<Node>) by adding result to the end of cached ops

* Add 'm_shared_rt_info' to the Result node just for consistency (there is actually no scenario which may fail due to the absence of this info for Result)

* Added test cases to verify that the names cache is cleared on refresh of 'get_ordered_ops'
2022-03-25 12:17:41 +03:00
Anastasia Popova
45a21bf902 Fixed setting of input name for case of multiple outputs from Parameter. (#11019)
* Fixed setting of input name for case of multiple outputs from Parameter.

* Small correction.
2022-03-25 12:11:07 +03:00
Sergey Lyubimtsev
fae1c27657 Add missed dependencies for OpenVINO Runtime wheel build (#11193)
* Disable IncrediBuild

* Add dependencies for frontends

* new line

* Add dependencies for frontends

* Enable IncrediBuild
2022-03-25 11:42:40 +03:00
Oleg Pipikin
aa0ab0e995 Move TF frontend tests (#11038) 2022-03-25 08:28:07 +03:00
Oleg Pipikin
5b18677f1b Add constant folding to hetero to avoid dynamism on GPU (#10572)
* Add constant folding to hetero to avoid dynamism on GPU

* Apply comments

* Apply comments 2

* Fix1
2022-03-25 07:10:23 +03:00
Ilya Lavrenov
a883dc0b85 DOCS: ported changes from 2022.1 release branch (#11206)
* Extensibility guide with FE extensions and remove OV_FRAMEWORK_MAP from docs

* Rework of Extensibility Intro, adopted examples to missing OPENVINO_FRAMEWORK_MAP

* Removed OPENVINO_FRAMEWORK_MAP reference

* Frontend extension detailed documentation

* Fixed distributed snippets

* Fixed snippet inclusion in FE extension document and chapter headers

* Fixed wrong name in a snippet reference

* Fixed test for template extension due to changed number of loaded extensions

* Update docs/Extensibility_UG/frontend_extensions.md

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

* Minor fixes in extension snippets

* Small grammar fix

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>

* DOCS: transition banner (#10973)

* transition banner

* minor fix

* update transition banner

* updates

* update custom.js

* updates

* updates

* Documentation fixes (#11044)

* Benchmark app usage

* Fixed link to the devices

* More fixes

* Update docs/OV_Runtime_UG/multi_device.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Removed several hardcoded links

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Updated documentation for compile_tool (#11049)

* Added deployment guide (#11060)

* Added deployment guide

* Added local distribution

* Updates

* Fixed more indentations

* Removed obsolete code snippets (#11061)

* Removed obsolete code snippets

* NCC style

* Fixed NCC for BA

* Add a troubleshooting issue for PRC installation (#11074)

* updates

* adding gna to linux

* add missing reference

* update

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* Update docs/install_guides/installing-model-dev-tools.md

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* update

* minor updates

* add gna item to yum and apt

* add gna to get started page

* update reference formatting

* merge commit

* add a troubleshooting issue

* update

* update

* fix CVS-71846

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

* DOCS: fixed hardcoded links  (#11100)

* Fixes

* Use links

* applying reviewers comments to the Opt Guide (#11093)

* applying reviewers' comments

* fixed refs, more structuring (bold, bullets, etc)

* refactoring tput/latency sections

* next iteration (mostly latency), also brushed the auto-batching and other sections

* updates sync/async images

* common opts brushed

* WIP tput redesigned

* minor brushing of common and auto-batching

* Tput fully refactored

* fixed doc name in the link

* moved int8 perf counters to the right section

* fixed links

* fixed broken quotes

* fixed more links

* add ref to the internals to the TOC

* Added a note on the batch size

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* [80085] New images for docs (#11114)

* change doc structure

* fix manager tools

* fix manager tools 3 step

* fix manager tools 3 step

* new img

* new img for OV Runtime

* fix steps

* steps

* fix intendents

* change list

* fix space

* fix space

* code snippets fix

* change display

* Benchmarks 2022 1 (#11130)

* Minor fixes

* Updates for 2022.1

* Edits according to the review

* Edits according to review comments

* Edits according to review comments

* Edits according to review comments

* Fixed table

* Edits according to review comments

* Removed config for Intel® Core™ i7-11850HE

* Removed forward-tacotron-duration-prediction-241 graph

* Added resnet-18-pytorch

* Add info about Docker images in Deployment guide (#11136)

* Renamed user guides (#11137)

* fix screenshot (#11140)

* More conservative recommendations on dynamic shapes usage in docs (#11161)

* More conservative recommendations about using dynamic shapes

* Duplicated statement from C++ part to Python part of reshape doc (no semantical changes)

* Update ShapeInference.md (#11168)

* Benchmarks 2022 1 updates (#11180)

* Updated graphs

* Quick fix for TODO in Dynamic Shapes article

* Anchor link fixes

* Fixed DM config (#11199)

* DOCS: doxy sphinxtabs (#11027)

* initial implementation of doxy sphinxtabs

* fixes

* fixes

* fixes

* fixes

* fixes

* WA for ignored visibility attribute

* Fixes

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Ilya Naumov <ilya.naumov@intel.com>
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2022-03-24 22:27:29 +03:00
Anastasia Kuporosova
8118147a96 [Python API] Fix documentation for Core API (#11187)
* [Python API] Fix documentation for Core API

* fix style
2022-03-24 19:56:45 +03:00
Ilya Churaev
a8f9863f72 Add compiler requirements master (#11190)
* Added software tab for Linux installer

* Added information for apt and yum

* Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-linux.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Update docs/install_guides/installing-openvino-apt.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-03-24 14:25:49 +00:00
Sofya Balandina
0c6c3b0d74 [subgraphsDumper] Serialize the lightest operations from existing (#11097) 2022-03-24 16:43:46 +03:00
Irina Efode
f156429b11 [IE TESTS] API Conformance review (#11133)
* [IE TESTS] API Conformance review. Part 1

* properties

* properties
2022-03-24 16:34:05 +03:00
Maxim Shevtsov
7dc1d0935c Auto batching improved tests (#11179)
* wip remote tests2, fixed smoke_canInferOnUserContext

* completed the OV 1.0 tests for remote blobs

* updated OV 2.0 tests for remote blobs with auto-batching (using the ngraph func that is reshape-able by the batch)

* re-using the DetectionOutput-based ngraph func that is 100% batch-reshapable
2022-03-24 16:23:00 +03:00
Ilya Churaev
b5dbabe41d Fixed registration of template plugin (#11155)
* Fixed registration of template plugin

* Added option for template plugin

* Fixed static build
2022-03-24 14:39:20 +03:00
Mikhail Nosov
9d865a2133 [Model Caching] Enabling per-device cache dir (#10774)
* Initial commit

10 more caching tests

* Fix clang-format

* Added brief explanations to each test

* Fix review comments
2022-03-24 11:24:47 +03:00
Wilson Seok
0cc119cd86 Fix N/A reported ops in conformance test (#11165)
* fix N/A ops in OpImplCheck

* update RandomUniform with parameter

* modify Constant function
2022-03-24 09:59:37 +03:00
Vladislav Volkov
5e6fd8c721 Validation of invalid names in CC factory (#11170) 2022-03-24 09:47:10 +03:00
Gleb Kazantaev
714d7a79c7 Fix SimplifySecondInputOfReshape (#11128) 2022-03-23 23:00:39 +00:00
Vladimir Dudnik
3a8800cbd2 [Docs][IE Samples] fix hard links (#11144)
* fix hard links

* change encoding

* fix TM

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
2022-03-23 23:48:08 +03:00
Irina Efode
001d098410 [IE TESTS][CONFORMANCE] Fix issue with checkmarks for op impl check (#11178) 2022-03-23 20:10:49 +03:00
Irina Efode
5847e3cb44 [IE TESTS][CONFORMANCE] Allow to serialize report to CSV (#11119)
* [IE TESTS][CONFORMANCE] Allow to serialize report to CSV

* Small improvement

* Move import
2022-03-23 17:36:07 +03:00
Wilson Seok
32886cf90f change to fp16 ov::Model (#11018) 2022-03-23 17:09:20 +03:00
Fedor Zharinov
fe3270c232 [benchmark_app]Exception handling in callback (#11123)
* Exception handling in callback

* stylefix
2022-03-23 16:55:41 +03:00
Fedor Zharinov
1315cfaa64 Add CHW/HWC heuristics for tensors with 3 dimensions (#10817) 2022-03-23 16:54:56 +03:00
Alexey Varyzgin
75bbfe336d [CPU] API2.0 input issues (#10222) 2022-03-23 16:41:59 +03:00
hyunback kim
fc2a8eb1a6 [GPU] Fix unused variable build issue (#11157)
* [GPU] Fix unused variable build issue

Signed-off-by: hyunback <hyunback.kim@intel.com>

* Update to remove unused variables.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-03-23 16:26:09 +03:00
Artur Kulikowski
3674ec403f Update ONNX to version 1.11.0 (#11046) 2022-03-23 13:52:33 +01:00
Ilya Churaev
a63e2080e1 Added group for transformation passes (#11102) 2022-03-23 14:08:04 +03:00
Chen Peter
066882579d [AUTO] Fix mess table in doc (#11158)
Signed-off-by: Peter Chen <peter.chen@intel.com>
2022-03-23 13:35:28 +03:00
Nikita Malinin
b50079143f Update IRReader with the ResultRenaming flag (#10988) 2022-03-23 13:16:40 +03:00
Ilya Churaev
eb80b28624 Added groups for core headers (#11080) 2022-03-23 12:19:07 +03:00
Alexey Lebedev
3de9189d50 [core][python] ov::serialize (#10945)
* add ov::serialize

* create python binding

* update python tools

* use ov::serialize in benchmark app

* remove serialize from python offline_transformations

* fix import

* revert pot

* update docs

* apply review comments

* add const

* make bin path optional

* Add docs

* add compare test
2022-03-23 11:44:00 +03:00
Yuan Hu
af874e7754 update AUTO Debug doc with snippets (#11117)
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-23 16:31:03 +08:00
Nikita Malinin
eb74afe417 Fix FBC missed spaces (#11091) 2022-03-23 11:17:42 +03:00
Liubov Talamanova
99f0093615 [POT] Fixed bug with out port for nodes with multiple outputs (#10932)
* Fixed bug with out port

* Add test

* Fix test

* Change test model
2022-03-23 11:17:06 +03:00
Edward Shogulin
cd361ecae1 [CPU] Code generation: Floor & Ceiling & Round implementation (#10666) 2022-03-23 10:32:25 +03:00
Ilya Churaev
9bc7ebda7b Add couple words about tensor names master (#11116)
* Added more information about tensor names

* Fixed comment and added documentation for extensions

* Fixed typo
2022-03-23 10:04:21 +03:00
Ilya Churaev
1ad4a99478 Port api reference (#11152) 2022-03-23 10:03:16 +03:00
Oleg Pipikin
6710d78e63 Move shared frontend tests (#11036)
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-23 09:00:34 +03:00
Mingyu Kim
ac58e8eaf1 [GPU] Temporary WA to fix accuracy issue in a network (#11016)
resample_opt generates a wrong result when the feature depth is two.
2022-03-23 14:40:20 +09:00
Wang, Yang
e259548530 Add logic test case for auto batching enable (#10626)
* Add test case for the loadNetwork with Auto Batching.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Enable logic test case for GPU.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Enable property for config key 'AUTO_BATCH_DEVICE_CONFIG'.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Omit {}.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add commont test for the property ALLOW_AUTO_BATCHING.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add commont test for AUTO Batching plugin.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2022-03-23 09:30:19 +08:00
Alexey Suhov
4a47c95c04 Update release version in readme (#11145) 2022-03-23 01:06:08 +03:00
Yuan Xu
2693ff3e48 fix a reference link (#11050) 2022-03-22 22:51:56 +03:00
Egor Duplensky
090799a362 [Samples] Fix cpp benchmark_app help message (#11055) 2022-03-22 22:51:05 +03:00
Maksim Kutakov
3ae167b88e Updated note about latency, added note about mem usage with dynamic shapes (#11126) 2022-03-22 22:45:38 +03:00
Karol Blaszczak
8cac305b8f [DOCS]transition_guide_intro_language (#11134)
a few language suggestions and grammar issues
2022-03-22 22:27:19 +03:00
Irina Efode
eeea2de3eb Extend PDPD regex by legacy (#10994) 2022-03-22 15:09:33 +03:00
Ilya Lavrenov
2f46890444 Removed obsolete scripts (#11107) 2022-03-22 14:52:03 +03:00
Nikita Semaev
5dcb6c2cee Correcting a regular expression (#10822) 2022-03-22 13:41:23 +03:00
Dmitrii Khurtin
2ae7e45edb [GNA] Moving PWL op to GNA (#8207)
* Moving PWL to ngraph

* improving the running time of pwl_search; refactoring the pwl operation

* fixed errors & refactored code

* moved PWL op to GNA

* Update src/plugins/intel_gna/ops/pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/ops/reference/pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/ops/pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/transformations/transpose_to_pwl.hpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* Update src/plugins/intel_gna/transformations/transpose_to_pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* fixed compilation error

* Update inference-engine/tests/unit/gna/ngraph/transformations/gna_pwl.cpp

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>

* added some tests; changed algorithm of checking accuracy of pwl; refactoring

* added first and last segments; added fq and fixed errors

* fixed after review & rewrote some tests on ngraph

* removed debug logs & fixed code style check error

* s/ngraph_helper/ngraph_util

* removed TRANSFORMATIONS_API in PWLApproximation class declaration

* removed OPENVINO_API in Pwl class declaration

* replaced the deprecated version of evaluate() with a new one

* fixed some problems after reviewing

* fixed a problem where the function value at the left point of a segment is less than the minimum of the function

* corrected a value of the right point of last segments

* s/OPENVINO_RTTI/OPENVINO_OP

Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
2022-03-22 13:24:57 +03:00
Alexander Sesorov
fb7249d496 [bandit] Add nosec for subprocess (#10941) 2022-03-22 12:56:54 +03:00
Ekaterina Aidova
37adb6d8a9 Docs: update AC info in API 2.0 migration guide (master) (#11113) 2022-03-22 12:42:13 +03:00
Ekaterina Aidova
1b9fbf25fd [OMZ]: update submodule (#11069) 2022-03-22 12:41:59 +03:00
Anastasia Kazantaeva
46df794908 Add changes to contribution guide (#10675) 2022-03-22 11:51:53 +03:00
Andrey Zaytsev
e782cd18b7 Feature/azaytsev/img updates (#11110)
* Updated images

* Updated images
2022-03-22 11:13:17 +03:00
Ilya Churaev
2a43c64336 Removed indentation (#11112) 2022-03-22 10:09:24 +03:00
Oleg Pipikin
7ccc48110d Move common test utils (#11022)
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-22 09:52:38 +03:00
Yury Gaydaychuk
1b9c58125c The check in setShape() whether the blob was preallocated is moved to the case of an increased shape (#11009)
* allow setShape for a preallocated blob in the case of non-increased memory size

* codestyle

* test cases added

* pytest aligned
2022-03-22 09:35:37 +03:00
Sofya Balandina
dfac195ffe Fix saving report in dir with more than one non-existing level (#11088) 2022-03-22 09:26:08 +03:00
Jade Cho
a7df1531db [GPU] Code refactoring to choose between binary_add and sum (#10724)
+ Fix colorization-sig accuracy issue using oneDNN
	Fixes a memory crash in the case of reuse_eltwise_sum_post in oneDNN and memory_pool,
	and prints node in/out gpu_usm_mem addr at OV_GPU_Verbose >= 1
+ Check the size of the z spatial axis when checking for a full tensor.
+ Remove program_helpers' functions.

Co-authored-by: hyunback <hyunback.kim@intel.com>
2022-03-22 14:58:36 +09:00
Aleksandr Korolev
e8288eb31d [VPU] removal deprecated test (#10597)
* [VPU] removal deprecated test

* Adding IE plugin cache reset to avoid the 'myriad device is not opened' issue

* Review changes

* Review changes
2022-03-21 21:14:41 +03:00
Mikhail Nosov
d84d00e2d6 Fix issue with output's friendly name after post-processing (#11095)
Scenario:
- Node "Split" with multiple outputs (e.g. 3). All outputs are connected to "Result"s
- Add post-processing step (e.g. convert element type, can be also implicit)

Issue: after post-processing, 3 new results will be created, each with the "Split" friendly name, which is inconsistent with IRv10 rules
Fix:
- For nodes with multiple outputs, add a '.<idx>' suffix to the new output's friendly name (see the sketch below)
- If no post-processing is applied, return immediately, keeping original results as is
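
A self-contained illustration of the '.<idx>' suffix rule (hypothetical code, not the actual implementation):

    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // A "Split" node with three outputs, each connected to a Result that
        // needs its own friendly name after post-processing replaces it.
        const std::string friendly_name = "Split";
        std::vector<std::string> result_names;
        for (int idx = 0; idx < 3; ++idx)
            result_names.push_back(friendly_name + "." + std::to_string(idx));
        for (const auto& n : result_names)
            std::cout << n << "\n";  // Split.0, Split.1, Split.2
    }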

Tests:
- Split with 3 outputs where 2 outputs have post-processing.
- Split with 3 outputs, post-processing doesn't create any nodes
2022-03-21 20:26:24 +03:00
Sergey Lyubimtsev
dacdf67c2c Update Benchmark guides (#11076) (#11085)
* - Update Benchmark Tool usage message
- Remove non-existent paths
- Fix examples
* remove reference on FPGA

(cherry picked from commit 3caa77eb30)

# Conflicts:
#	samples/cpp/benchmark_app/README.md
2022-03-21 19:31:17 +03:00
Maxim Gordeev
3ac6e95ead [IE Samples] Fixed hanging of samples if InferImpl() throws exception (#11075)
* [IE Samples] Fixed hanging of samples if InferImpl() throws exception

* improved re-throwing of an exception
2022-03-21 19:30:30 +03:00
Evgenya Stepyreva
2f0620600f Reshape documentation (#10901)
* Reshape documentation

* Converting Model: reshape mentioned; Supported Devices: no shape inference mentioned

* demos removed
2022-03-21 19:13:24 +03:00
Liubov Talamanova
bdc89b1571 Moved quantization templates to openvino/tools/pot (#10814) 2022-03-21 14:17:55 +03:00
Fedor Zharinov
0b444ab2db [benchmark_app]Show network original I/O info (#10694)
* Show network original I/O info

* additional no-name case check

* stylefix

* Update samples/cpp/benchmark_app/main.cpp

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>

* Update samples/cpp/benchmark_app/main.cpp

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>
2022-03-21 13:57:56 +03:00
Alexey Lebedev
332d27ca82 Replace enforcebf16 and qb by infer_precision (#11007) 2022-03-21 13:36:00 +03:00
Daria Mityagina
51ee3f81cb [XLink] - run XLink tests in pre-commit (#10302)
* [XLink] - tests to smoke scope

* [XLink] - small change in XLink related file to trigger ie-tests-windows-myriadx

* [XLink] - azure windows and linux

* [XLink] - azure windows and linux

* [XLink] - azure windows and linux - change dir?

* [XLink] - azure windows and linux - change dir?

* [XLink] - azure windows and linux - install?

* [XLink] - azure windows and linux - xlink cmake

* [XLink] - azure windows and linux - XLinkTests because another target with the same name already exists

* [XLink] - azure windows and linux - XLinkTests because another target with the same name already exists

* [XLink] - azure windows and linux - install TARGETS given target XLinkTests which does not exist

* [XLink] - azure windows and linux - remove smoke
2022-03-21 13:28:44 +03:00
Yuan Xu
be9bbb676d sync the same updates with 22/1 (#11071)
* Add Overview page

* Revert "Add Overview page"

* updates

* update

* updates
2022-03-21 09:11:39 +00:00
Sergey Shlyapnikov
782ef6b42e [GPU] Add estimation of required memory for CAFFE_OPT_2 stage of GPU Detection Output (#11001) 2022-03-21 12:06:07 +03:00
Tomasz Dołbniak
b480a49d66 RandomUniform-8: shape inference fix (#11047)
* Shape inference fix

* Update src/core/src/op/random_uniform.cpp

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

* Update src/core/tests/type_prop/random_uniform.cpp

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>

Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
2022-03-21 12:04:51 +03:00
Edward Shogulin
a887b41db6 [snippets] Tokenization fix: fused names (#10997) 2022-03-21 11:44:43 +03:00
hyunback kim
ad179980d9 [GPU] Fix implicit concat padding offset issue in OneDNN (#11062)
Inserting padding into a oneDNN primitive has an issue with the implicit concat behavior.
Deconv in oneDNN initialized the output buffer to 0, including the padding area, but the padding area should be preserved.
Use the oneDNN offset from program_node in/out lower_padding instead of the oneDNN memory desc.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-03-21 16:57:39 +09:00
Evgeny Kotov
d7005af4a5 [GNA] add 3D shape input support for StridedSlice (#10818)
* add 3D shape to test and rename crop4d to strided_slice

* remove ConvertStridedSliceToCropNegative2 since 3D is now supported

* add myriad functional tests to skip-list
2022-03-21 10:15:59 +03:00
Mateusz Tabaka
c18030207c [ONNX] Avoid allocating vector for constants if possible (#10860)
For FLOAT, DOUBLE, INT32, INT64, UINT64 we can get a pointer
to data from TensorProto and pass it to Constant constructor.
2022-03-20 14:07:50 +01:00
Ilya Lavrenov
5390aa7ebc Updated multi code snippets (#11037) 2022-03-20 15:44:33 +08:00
Yuan Hu
72e8661157 [Auto PLUGIN] update Auto docs (#10889)
* update Auto docs

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update python snippets

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove vpu, fix a mistake in python code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update MYRIAD device full name

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update API name

the old API uses the name Inference Engine API;
the new API uses the name OpenVINO Runtime API 2.0

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update tab name, and code format

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix AUTO4 format issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update set_property code

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* auto draft

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* mv code into .cpp and .py

modify the devicelist part according to the review

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove priority list in code and document

modify the beginning of the document
remove performance data
remove old API
use compile_model instead of set_property
add an image about CPU acceleration

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix misprint and code that does not match the document

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix doc build issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix snippets code compile issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-19 18:25:35 +03:00
Maksim Derbasov
76fde1f7b0 Optimization for ie_memcpy (#10996) 2022-03-19 14:33:21 +03:00
Vladimir Paramuzov
b2110b352c [GPU] Update data structures for conv/pool/deconv params (#9641) 2022-03-19 11:11:07 +03:00
Irina Efode
3322b74bd9 [IE TESTS] Align Impl status (#11045) 2022-03-18 19:48:14 +03:00
Ilya Lavrenov
e3098ece7e DOCS: port changes from releases/2022/1 (#11040)
* Added migration for deployment (#10800)

* Added migration for deployment

* Addressed comments

* more info after the What's new Sessions' questions (#10803)

* more info after the What's new Sessions' questions

* generalizing the optimal_batch_size vs explicit value message

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Perf Hints docs and General Opt Guide refactoring (#10815)

* Brushed the general optimization page

* Opt GUIDE, WIP

* perf hints doc placeholder

* WIP

* WIP2

* WIP 3

* added streams and few other details

* fixed titles, misprints etc

* Perf hints

* movin the runtime optimizations intro

* fixed link

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* some details on the FIL and other means when pure inference time is not the only factor

* shuffled according to general->use-case->device-specifics flow, minor brushing

* next iter

* section on optimizing for tput and latency

* couple of links to the features support matrix

* Links, brushing, dedicated subsections for Latency/FIL/Tput

* had to make the link less specific (otherwise docs compilation fails)

* removing the 'Temp/Should be moved to the Opt Guide' section

* shuffled the tput/latency/etc. info into separate documents; the following docs also moved from the temp into specific feature, general product description, or corresponding plugins

-   openvino_docs_IE_DG_Model_caching_overview
-   openvino_docs_IE_DG_Int8Inference
-   openvino_docs_IE_DG_Bfloat16Inference
-   openvino_docs_OV_UG_NoDynamicShapes

* fixed toc for ov_dynamic_shapes.md

* referring the openvino_docs_IE_DG_Bfloat16Inference to avoid docs compilation errors

* fixed main product TOC, removed ref from the second-level items

* reviewers remarks

* reverted the openvino_docs_OV_UG_NoDynamicShapes

* reverting openvino_docs_IE_DG_Bfloat16Inference and openvino_docs_IE_DG_Int8Inference

* "No dynamic shapes" to the "Dynamic shapes" as TOC

* removed duplication

* minor brushing

* Caching to the next level in TOC

* brushing

* more on the perf counters ( for latency and dynamic cases)

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Updated common IE pipeline infer-request section (#10844)

* Updated common IE pipeline infer-request section

* Update ov_infer_request.md

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* DOCS: Removed useless 4 spaces in snippets (#10870)

* Updated snippets

* Added link to encryption

* [DOCS] ARM CPU plugin docs (#10885)

* initial commit

ARM_CPU.md added
ARM CPU is added to the list of supported devices

* Update the list of supported properties

* Update Device_Plugins.md

* Update CODEOWNERS

* Removed quotes in limitations section

* NVIDIA and Android are added to the list of supported devices

* Added See Also section and reg sign to arm

* Added Preprocessing acceleration section

* Update the list of supported layers

* updated list of supported layers

* fix typos

* Added support disclaimer

* update trade and reg symbols

* fixed typos

* fix typos

* reg fix

* add reg symbol back

Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>

* Try to fix visualization (#10896)

* Try to fix visualization

* New try

* Update Install&Deployment for migration guide to 22/1 (#10933)

* updates

* update

* Getting started improvements (#10948)

* Onnx updates (#10962)

* onnx changes

* onnx updates

* onnx updates

* fix broken anchors api reference (#10976)

* add ote repo (#10979)

* DOCS: Increase content width (#10995)

* fixes

* fix

* Fixed compilation

Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Aleksandr Voron <aleksandr.voron@intel.com>
Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
2022-03-18 17:48:45 +03:00
Irina Efode
2f5cb43cba Change CODEOWNERS file (#11041) 2022-03-18 17:28:53 +03:00
Taylor Yeonbok Lee
d539a04efb [GPU] Add multiple output support in kernel selector, helper, set_arguments (#10755) 2022-03-18 17:26:07 +03:00
Taylor Yeonbok Lee
e5ad30f194 [GPU] Added a new unittest to test the mapped external primitive of a loop is fused to other primitive (#10832) 2022-03-18 17:25:32 +03:00
Mateusz Bencer
fe406d1606 Cpp fix of python segfault, reverted pybind workaround (#10749)
* test fix of segfault

* styles applied

* added keep_alive to pybind

* remove redundant code

* fix json tests

* review remarks

* introduced correct path to dlls in CI

* removing passing path via env variable

* introduced cpp solution

* remove keep alive

* review remarks

* remove explicit removing model

* removed shared_objects from ir frontend

* core test updated

* unified approach to handle extensions by frontends

* added nullptr check

* Revert "added nullptr check"

This reverts commit 666f5e4489.

* Revert "unified approach to handle extensions by frontends"

This reverts commit bf85ac24a6.

* m_extensions declaration in Frontend

* added assert
2022-03-18 16:29:46 +03:00
Yuan Xu
a5362a4d58 Apply same changes from 22/1 to master (#11035)
* Add Overview page

* Revert "Add Overview page"

* fix errors & formatting

* fix article usage according to the styles

* fix errors

* update according to PXT comments

* CVS-80775

* update support matrix with Python version

* fix formatting

* fix formatting

* CVS-71745

* update formatting

* fix formatting

* fix formatting

* fix links & errors

* fix formatting

* update bullet points

* update

* adjust the order

* update

* update

* updates

* update references

* update

* update

* apply same updates with 22/1

* minor fix
2022-03-18 12:07:07 +00:00
Maxim Vafin
c8f4f9b7db [MO] Fix swish value infer (#10802) 2022-03-18 14:56:37 +03:00
Maksim Kutakov
dfdbdb4601 [CPU] CPU plugin docs refactoring (#10970)
* CPU device documentation refresh

* Bfloat16 inference page aligned with the new API

* Bfloat16 inference section moved to CPU main

* First review comments applied

* Second review step comments applied

* OneDNN reference changed to the GitHub page

* AvgPool added to the oneDNN ops list
2022-03-18 14:56:22 +03:00
Ilya Churaev
a4d164eda4 Fixed macro for plugin registration. Register only required plugins (#11031) 2022-03-18 13:13:33 +03:00
Irina Efode
931313c03b Move Subgraph Dumper tool to RTTI declaration (#10862) 2022-03-18 13:01:53 +03:00
Anton Dudchenko
4cbdf9e737 [VPU] Fix MyriadPlugin build with enabled options of Conditional Compilation (#10811)
Added initialization for streamPacketDesc_t
CVS-80386
2022-03-18 12:20:15 +03:00
Yuan Hu
967f056761 [AUTOPLUGIN] update multi plugin document for ov2.0 (#10688)
* update multi document

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update snippets ov::enableProfile

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix build issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* use Anymap in snippets

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* fix format and set property

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update python

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* try to fix test document issue

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* removed NEW IE-CENTRIC API and updated set_property

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* update ov::optimal_number_of_infer_requests

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-18 11:31:36 +03:00
David Nam
000723acd0 Add ReadValue and Assign to template plugin tests (#9132)
* Add readvalue, assign to template plugin test

* Fix clang error

* Fix clang error

* Remove unnecessary comment

* Fix type-casting error

* Fix ci issue regarding const value

* Change Function to Model

* Fix op scope

* Change way to get variable

* Fix type-casting error

* Set variable id to const

* Fix side-effect in ieFuncTests

* Implement Assign-3, ReadValue-3 in evaluates_map

* Correct setting attribute

* Correct setting attribute

* Remove unnecessarily added method

* Roll back v6

* Use member variable for variable_id in assign-3, read_value-3

* Get data pointer from host tensor

* Remove visitor API test for ReadValue-6, Assign-6

* Implement visitor api test for read_value-6, assign-6

* Fix clang error

* Split read_value and assign into each file for visitor test

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-18 07:13:40 +00:00
Mikhail Nosov
c4f5bce3b0 If CMAKE_BUILD_TYPE is not set - set it to 'Release' by default (#11026)
This behavior is already used by default because ONNX is enabled by default and thirdparty/onnx/onnx/CMakeLists.txt forces CMAKE_BUILD_TYPE to Release if it is not set

It fixes the following issues:
- When the ONNX frontend is disabled, the source is built for Debug, which is very unexpected compared to Release with the ONNX frontend enabled
- When the ONNX frontend is disabled, even libopenvino.so could not be built due to some generated-makefile issues

It is set to 'Release' (not to 'Debug') to comply with the default behavior when ONNX is enabled (that is the default option, working for most users)
2022-03-18 06:59:36 +03:00
Maxim Vafin
0be4bca954 Incremental improvement of MO user guide. (#11010)
* Incremental improvement of MO user guide.

* Apply feedback
2022-03-17 22:38:20 +03:00
Anton Romanov
73994d7c70 Fixed coverity in benchmark app (#11013) 2022-03-17 18:26:59 +03:00
Alina Kladieva
aedcf2cb9f Skip hanging Myriad canRun3AsyncRequestsConsistentlyFromThreads (#11011)
* Skip Myriad canRun3AsyncRequestsConsistentlyFromThreads

* Update skip_tests_config.cpp

Fix typo
2022-03-17 17:52:25 +03:00
Mikhail Nosov
4b3dd808df [First Inference] Read time improvements via using 'mmap/munmap' (#10907)
* Performance improvement for constant creation

The issue is that 'are_all_data_elements_bitwise_identical()' is called every time in the Constant constructor, and it potentially checks the whole buffer, which is O(N) complexity,
while it is needed only if the client uses 'get_all_data_elements_bitwise_identical'

Solution:
- Defer calculation until the first call of 'get_all_data_elements_bitwise_identical'
- Store the calculated value in a mutable class member to reuse it on subsequent calls of 'get_all_data_elements_bitwise_identical'

The test verifies both cases:
a) that constant creation with shared memory data (now O(1)) is significantly faster than creation + bitwise check O(N)
b) that once calculated, the value is taken from the cache, which is significantly faster than re-calculation (sketched below)
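
A hypothetical sketch of the deferred, cached check (plain C++, not the actual ov::op::Constant code):

    #include <algorithm>
    #include <cstdint>
    #include <optional>
    #include <vector>

    class Constant {
    public:
        explicit Constant(std::vector<uint8_t> data) : m_data(std::move(data)) {}

        bool all_data_elements_bitwise_identical() const {
            if (!m_cached) {  // the O(N) scan runs on the first call only
                m_cached = m_data.empty() ||
                           std::all_of(m_data.begin(), m_data.end(),
                                       [&](uint8_t b) { return b == m_data.front(); });
            }
            return *m_cached;  // subsequent calls reuse the cached value: O(1)
        }

    private:
        std::vector<uint8_t> m_data;
        mutable std::optional<bool> m_cached;  // mutable: filled lazily in a const getter
    };

    int main() {
        Constant c({7, 7, 7});
        return c.all_data_elements_bitwise_identical() ? 0 : 1;
    }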

* fix clang-format

* Stash - Linux implementation

* Windows mmap implementation + unicode

* Clang for windows

* removed debug print

* Add handling of empty bin file

* fix windows includes

* Fix python test

* Unit tests
Fix for Constant with size > 4GB

* Fix review comments
2022-03-17 17:16:06 +03:00
Indira Salyahova
3a8fd7135e [POT] Method change to get bias shape in bc and fbc algorithms (#10463)
* refactoring: get bias shape in bc and fbc algorithms

* use scipy to take most frequent shape

* pylint

* update reference

* pylint

* Update test_sanity.py

* update test_sanity.py

* Update test_sanity.py
2022-03-17 16:46:28 +03:00
Sergey Lyubimtsev
412f2190d1 Update for get started samples (#10975)
* Update for get started samples

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* formatting

* rewording

* fix links

* fix formatting

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/get_started/get_started_demos.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* replace squeezenet1.1 with googlenet-v1

* GoogleNet v1 Caffe* model

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-03-17 16:01:42 +03:00
Vladimir Zinoviev
f12ab5c182 [LPT] Tests on reshape with channel dim reducing (#10999) 2022-03-17 16:00:45 +03:00
Mikhail Nosov
857a1ad2af Caching snippets try/catch to make coverity happy (#11005) 2022-03-17 09:58:18 +00:00
Artyom Anokhov
2b4ee3d937 Fix Deployment Manager configs for MacOS and Win-HDDL target (master) (#11000)
* DM configs: Updated path for MacOS. Removed MovidiusDriver for HDDL target for Windows

* DM config MacOS: Updated name for libov_runtime
2022-03-17 12:45:07 +03:00
Andrey Noskov
0091d52c78 [GNA] Added SW_FP32 mode w/o SF for BasicLSTM (#10115)
* [GNA] Added SW_FP32 mode w/o SF for BasicLSTM

* deleted additional test
 added sw_fp32 mode for existing test
 changed reference output for new mode

* [GNA] Fixed according to review

* [GNA] Parametrized weights range

* fixed after review

Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
2022-03-17 11:32:46 +03:00
Anton Romanov
bfa0e3e1a4 Fix samples tests on azure (#10892)
* Fix samples tests on azure

* fixed bmp reader
2022-03-17 11:17:25 +03:00
Dawid Kożykowski
cf80156fcc Expose Output object bindings to Python (#10743) 2022-03-16 18:31:38 +01:00
Dawid Kożykowski
6f64de4c27 DynamicQuantizeLinear op support (#10565) 2022-03-16 18:30:15 +01:00
Karol Blaszczak
33d90c5c77 [DOCS] update HETERO execution (#10758)
* [DOCS] update HETERO execution

a basic language review

* Update docs/OV_Runtime_UG/hetero_execution.md

* Update docs/OV_Runtime_UG/hetero_execution.md

* Update docs/OV_Runtime_UG/hetero_execution.md

* Update docs/OV_Runtime_UG/hetero_execution.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-16 18:37:30 +03:00
Yegor Kruglov
7cfd8698ce [Master] Cascade RCNN res101 document model support (#10902)
* cascade rcnn model support

* fix typo

* specify model directory

* comments resolving
2022-03-16 18:16:28 +03:00
Vladimir Zinoviev
25e1f6ac97 [CODEOWNERS] LPT folder owner change to LPT team (#10993) 2022-03-16 17:38:29 +03:00
Vladislav Volkov
ed8c9d6f9a CPU Plugin refactoring: class names (#10639) 2022-03-16 17:16:29 +03:00
Yegor Kruglov
f7b2e3a8ca SoftSign-9 specification (#10690)
* SoftSign-9 specification

* opset9 update

* fix formula

* comments resolving
2022-03-16 14:58:26 +03:00
Vladimir Gavrilov
f7875da083 nGraph shell of operations RDFT and IRDFT (#10353)
* Written header files for the nGraph operations RDFT and IRDFT.

* Written nGraph shell for the operation RDFT.

* Added missed include.

* Added RDFT to opset9 table.

* Code style fixes.

* Written the nGraph shell of the operation IRDFT.

* Added IRDFT to opset9 table.

* Started to write shape infer tests for RDFT.

* Refactoring: shape infer functions of RDFT and IRDFT moved into separate files.

* Written shape infer tests for RDFT.

* Written shape infer tests for IRDFT operation.

* Fixed code style.

* Fixes in the shape infer function of RDFT.

* Fixes in the shape infer function of RDFT.

* Fixes in the shape infer function of IRDFT.

* Deleted redundant includes in include/ngraph/op/irdft.hpp and include/ngraph/op/rdft.hpp

* Deleted redundant includes in include/openvino/op/rdft.hpp and include/openvino/op/irdft.hpp.

* Deleted redundant includes in cpp-files of nGraph shells of operations IRDFT and RDFT.

* Code style fixes.

* Shape inference functions of operations RDFT and IRDFT moved to the namespace ov::op::util.

* Deleted RDFT and IRDFT from docs/template_plugin/backend/opset_int_tbl.hpp.

* Deleted 'using namespace ngraph' from cpp-files of nGraph shells of operations RDFT and IRDFT.

* Fixed typos.

* Merged some loops in shape inference functions of RDFT and IRDFT.

* Written visitor tests for RDFT and IRDFT.

* Small change.

* Common part of RDFT and IRDFT shape validation moved into the separate file.

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-16 14:54:32 +03:00
Nadezhda Ageeva
097006d97a [GNA] Update documentation (cherry-pick from release) (#10974)
* [GNA] Update documentation (release) (#10873)

* parent 5f755d5e4a
author Nadezhda Ageeva <nadezhda.ageeva@intel.com> 1646919359 +0300
committer Nadezhda Ageeva <nadezhda.ageeva@intel.com> 1647270928 +0300

[GNA] Update documentation (release)

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Denis Orlov <denis.orlov@intel.com>

Apply comments

Move snippets to separate file

Add notes about POT and 2d convolutions

* Add links to GNA setup

* cleanup after rebase

* [GNA] small docs fixes (#10959)

* [GNA] small docs fixes

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/GNA.md

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>

Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>
2022-03-16 14:37:55 +03:00
Ilya Znamenskiy
848a824260 [GPU] Removed unneeded data from the code (#10987) 2022-03-16 14:22:54 +03:00
Chenhu Wang
3bb83b4ddd [CPU] Fixed undefined values in MVN and Interpolate nodes (#10829) 2022-03-16 14:11:07 +03:00
Alexandra Sidorova
68452e97e2 [CPU] Fixed default ellipsis in StridedSlice (#10713) 2022-03-16 14:09:42 +03:00
Chen Xu
49fb48e744 [CPU] Topk node extend to support horizontal sort for all layouts for top_k == 1 (#10700) 2022-03-16 14:09:14 +03:00
Luo Cheng
4d8adabcaa [CPU] Fusing in non-0 output port exception (#10837) 2022-03-16 14:08:28 +03:00
Alexander Zhogov
8a813d0da9 Update windows.yml
Azure CI: Enable IB again
2022-03-16 13:05:36 +03:00
Mikhail Nosov
7cea7dd4e6 Docs: model caching page update according to OpenVINO API 2.0 (#10981) 2022-03-16 12:22:33 +03:00
Ilya Znamenskiy
2687f6fb2e [GPU] Enabled IFM leftovers inside fully_connected_imad kernel (#10912) 2022-03-16 12:02:36 +03:00
Alexander Zhogov
06f55bd8e8 Azure CI: Disable IB 2022-03-16 08:57:45 +03:00
Yuan Hu
50adb2240c [AUTOPLUGIN] don't check dynamic shape when there is only one device (#10868)
* don't check dynamic shape when there is only one device

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* remove redundant if

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-16 09:56:23 +08:00
guozhong wang
8c9c592fcf Guozhong/add hint cumulative throughput (#10727)
* mod docs/_static/images/dataset.png and docs/_static/images/inputs.png

* add new hint cumulative_throughput

* clang format properties.hpp

* add set properties and get properties test case for CUMULATIVE_THROUGHPUT

* reset docs/_static/images/dataset.png and docs/_static/images/inputs.png

* reset docs/_static/images/dataset.png and docs/_static/images/inputs.png

* reset dataset.png and inputs.png

* reset dataset.png and inputs.png

* remove test value cumulative_throughput from gpuplugin and cpuplugin testcase

* rollback dataset.png and inputs.png to 41818a377
2022-03-16 09:56:05 +08:00
Yuan Hu
4f6dcc1d32 [AUTOPLUGIN] add fps log (#10662)
* add fps log

add format '%lf' for log
add INFO_RUN and DEBUG_RUN; the wrapped code runs only when the current log level is at or above the corresponding level
add fps log for device
print device config info with DEBUG_RUN
add mock test for DEBUG_RUN and INFO_RUN

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* use n / (end - start) instead of (n - 1) / ((nth start) - (1st start))

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-03-16 09:55:41 +08:00
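One plausible C++ shape for the INFO_RUN/DEBUG_RUN guards and the fps formula mentioned above; the enum, the global level, and all names here are assumptions for illustration, not the actual AUTO plugin code:

#include <chrono>
#include <cstddef>
#include <iostream>

enum class LogLevel { NONE = 0, INFO = 1, DEBUG = 2 };
static LogLevel g_log_level = LogLevel::INFO;

// The wrapped code executes only when the current log level is high enough,
// so the logging work itself is skipped entirely at lower levels.
#define INFO_RUN(block)  do { if (g_log_level >= LogLevel::INFO)  { block; } } while (0)
#define DEBUG_RUN(block) do { if (g_log_level >= LogLevel::DEBUG) { block; } } while (0)

// fps = n / (end - start), as the commit switches to.
double fps(std::size_t n,
           std::chrono::steady_clock::time_point start,
           std::chrono::steady_clock::time_point end) {
    return n / std::chrono::duration<double>(end - start).count();
}

int main() {
    const auto start = std::chrono::steady_clock::now();
    volatile double sink = 0;
    for (int i = 0; i < 1000000; ++i) sink += i;  // stand-in for inference work
    const auto end = std::chrono::steady_clock::now();

    INFO_RUN(std::cout << "fps: " << fps(1000, start, end) << "\n");
    DEBUG_RUN(std::cout << "device config: ...\n");  // skipped at INFO level
    return 0;
}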
Andrey Zaytsev
ea8f1d0344 Cherry-picked Changes to the OpenVINO 2.0 Transition Guide (#10936) (#10978) 2022-03-15 21:09:33 +00:00
Nadezhda Ageeva
3c721b0f03 [nGraph] [MO] Add ngraph transformations for PRelu fusing (#10209)
* MO: add PRelu fusing pattern with Sub

* PRelu fusing

* Apply review comments

* Code style
2022-03-15 18:57:59 +03:00
Mikhail Nosov
93722fe101 Docs. Fix link in layout overview (#10968) 2022-03-15 15:35:02 +00:00
Sofya Balandina
499ffcaa59 [conformance] SetUp timeout per test (#10426) 2022-03-15 18:28:19 +03:00
azhogov
37e05afd12 GitHub org control: all ignored accounts are showed now 2022-03-15 17:27:08 +03:00
Nikita Malinin
e103af056e [POT] References & golds update (#10937)
* Update references

* Update golds & add stats dumping

* Statistics_data upd

* Enable densenet in nightly

* Pylint fixes

* Update try-except WA

* Update simplified gold
2022-03-15 16:55:48 +03:00
Vladimir Paramuzov
4e8e56e887 [GPU] Removed unused variable to fix android build (#10961) 2022-03-15 16:22:13 +03:00
Bartek Szmelczynski
aed3b59796 [DOCS] Python snippets for Query device page (#10789)
* create python snippets

* remove redundant space
2022-03-15 16:11:32 +03:00
Taylor Yeonbok Lee
13b6a3d86e [GPU] Fix batchability check of MAX_BATCH_SIZE (#10660)
* Fix batchability check of MAX_BATCH_SIZE

* Applied review comment
2022-03-15 21:31:03 +09:00
Vitaliy Urusovskij
2dbd60c1ae Mark get_type_info_static() op class methods as hidden (#10691)
* Mark `get_type_info_static()` as hidden

Each plugin linked with the openvino library contains its own `type_info_static` symbols. When one of these libraries is unloaded and the app tries to get an opset, it leads to a segfault. So mark `get_type_info_static()` as hidden, so that only the single implementation from the openvino lib is used

* Fix "'visibility' attribute ignored" issue by moving `TestPass` out of test scope

* Fix clang format

* Small update of `If` op

* Revert "fix 79520 (#10449)" to correctly compare DiscreteTypeInfo via `==`

This reverts commit 29883a152a.
2022-03-15 14:59:13 +03:00
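A sketch of the hidden-visibility technique this entry describes, using GCC/Clang attribute syntax; OV_HIDDEN is an illustrative macro name, and OpenVINO wraps this differently in its own export macros:

#if defined(__GNUC__) || defined(__clang__)
#    define OV_HIDDEN __attribute__((visibility("hidden")))
#else
#    define OV_HIDDEN  // MSVC does not export symbols unless asked to
#endif

struct MyOp {
    // With hidden visibility the dynamic linker never resolves this symbol
    // across shared-library boundaries, so each binary keeps its own copy and
    // a call can never bind to a copy inside a plugin that is later unloaded.
    OV_HIDDEN static const char* get_type_info_static() {
        static const char* info = "MyOp";  // stands in for DiscreteTypeInfo
        return info;
    }
};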
Vladislav Golubev
060a149322 [nGraph] get_constant_from_source fix (#10787) 2022-03-15 14:50:21 +03:00
Jan Iwaszkiewicz
660c6a3e84 [DOCS] Python Exclusives overview (#10946)
* Add python docs

* Small fix

* Apply comments

* Fix style
2022-03-15 14:26:12 +03:00
Tomasz Dołbniak
29144d3a6b ONNX Expand extended support (#10833) 2022-03-15 12:21:30 +01:00
Bartek Szmelczynski
840e622da5 add snippets for automatic batching (#10910)
* add snippets for automatic batching

* Update docs/snippets/ov_auto_batching.py

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>

* add missing bracket

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>
2022-03-15 14:17:20 +03:00
Sergey Lyubimtsev
5f27c74d96 Add description for zsh: no matches found : openvino-dev[...] issue. (#10950)
* Add description for `zsh: no matches found : openvino-dev[...]` issue.

* spell check
2022-03-15 13:37:39 +03:00
Ilya Znamenskiy
fc5356cd3b [GPU] Spelling fixes (#10952) 2022-03-15 12:25:55 +03:00
Yuan Xu
e341cdf541 Add Python version for Docker installation (#10840)
* update support matrix with Python version

* fix formatting

* update formatting

* fix formatting

* fix formatting
2022-03-15 12:06:48 +03:00
Yuan Xu
e71a05d94d Minor updates to Linux/macOS/Windows installation guide (#10955)
* Add Overview page

* Revert "Add Overview page"

* CVS-71745

* fix links & errors

* fix formatting

* update bullet points
2022-03-15 12:06:30 +03:00
Mateusz Tabaka
c58ef365b5 Fuse FQ->Mul also if first FQ input can be constantfolded (#10712)
The change fixes FQ fusions for subgraphs like 'Const weights'->FQ->Transpose->Multiply.
After the PullTransposeThroughFQUp transformation, we end up with the following:
'Const weights'->Transpose->FQ->Multiply. Because of the Transpose on the first
FakeQuantize input, Multiply could not be fused, since FakeQuantizeMulFusion
expected the weights to be a Constant node.

Ticket: 77785
2022-03-15 09:54:41 +01:00
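A hedged sketch of the "can be constant-folded" check this fix relies on: rather than requiring the FakeQuantize input to literally be a Constant node, try folding the producing subgraph (e.g. Constant->Transpose) down to a single Constant. The header path is version-dependent and assumed here:

#include <memory>

#include <openvino/core/validation_util.hpp>
#include <openvino/op/constant.hpp>

// Returns the folded Constant, or nullptr when the subgraph feeding
// `fq_input` cannot be reduced to a single Constant.
std::shared_ptr<ov::op::v0::Constant> fold_weights(const ov::Output<ov::Node>& fq_input) {
    return ov::get_constant_from_source(fq_input);
}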
Nikita Semaev
758d12b9cb The correct namespace LayerTestsDefinitions (#10821)
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-15 11:34:33 +03:00
Vladimir Zinoviev
4a2d0f39dd [LPT] Turn back checks in reshape transformation when subtract is absent (#10939) 2022-03-15 11:34:12 +03:00
Mikhail Nosov
ef5ad90dd7 [Read time improvement] Avoid calling of 'are_all_data_elements_bitwise_identical()' during Constant creation (#10858)
* Performance improvement for constant creation

The issue is that 'are_all_data_elements_bitwise_identical()' is called every time in the Constant constructor, and it potentially checks the whole buffer, which is O(N) complexity,
while the result is needed only if the client uses 'get_all_data_elements_bitwise_identical'

Solution:
- Defer calculation until first call of 'get_all_data_elements_bitwise_identical'
- Store calculated value in mutable class member to reuse it on next calls of 'get_all_data_elements_bitwise_identical'

Test verifies both cases:
a) that constant creation with shared memory data (now O(1)) is significantly faster than creation + bitwise check, which is O(N)
b) that once calculated, the value is taken from the cache, which is significantly faster than re-calculation

* fix clang-format

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-15 10:03:47 +03:00
Mingyu Kim
0a8f97f429 [GPU] Friendly exception msg for alloc failure (#10953) 2022-03-15 15:48:31 +09:00
Steve Yoo
a9ff10b365 Add SLT to Template Plugin: ROIAlign-3 (#10625) 2022-03-15 07:31:27 +03:00
Mingyu Kim
0b556fd7a9 [GPU] More friendly exception message (#10721)
* [GPU] More friendly exception message

* Apply Ilya's comment

Co-authored-by: Ilya Znamenskiy <ilya.znamenskiy@intel.com>

Co-authored-by: Ilya Znamenskiy <ilya.znamenskiy@intel.com>
2022-03-15 09:57:22 +09:00
Karol Blaszczak
3b2b055bfd repair 2 png files (#10949) 2022-03-14 22:03:09 +03:00
Ilya Churaev
ad1c4a24c3 Deprecate version inside DiscreteTypeInfo (#10781)
* Deprecate version inside DiscreteTypeInfo

* Fixed code style

* Fixed openvino for macOS

* Fixed build for macOS

* Fixed errors for Windows build
2022-03-14 21:18:00 +03:00
Ryan Loney
575b2fad73 Update Binder URL on the tutorials landing page (#10877)
* Update Binder URL on the tutorials landing page

Binder URL was linking to a file. It should go to an actual Binder tutorial.

(Replaces https://github.com/openvinotoolkit/openvino/pull/10747)

* binder logo

* fixes

Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
2022-03-14 21:11:01 +03:00
Ilya Churaev
0d404633a9 Add OpenVINO as required component and remove clang-format from example (#10944) 2022-03-14 20:05:24 +03:00
Mateusz Tabaka
23eaa80325 Don't memset Constant's buffer if it's about to be filled with data (#10861)
* Don't memset Constant's buffer if it's about to be filled with data

* don't memset buffer in visit_attributes
2022-03-14 19:36:40 +03:00
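The idea behind this entry in miniature: zero-filling a freshly allocated buffer is wasted O(N) work when every byte is overwritten immediately afterwards. The function below is an illustration, not the Constant implementation:

#include <cstddef>
#include <cstring>
#include <memory>

std::unique_ptr<char[]> make_filled(const char* src, std::size_t n) {
    // new char[n] (no value-initialization) leaves the bytes indeterminate;
    // a memset(dst, 0, n) here would be redundant because the memcpy below
    // overwrites all n bytes anyway.
    std::unique_ptr<char[]> dst(new char[n]);
    std::memcpy(dst.get(), src, n);
    return dst;
}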
Irina Efode
3aa525c003 SubgraphDumper: clone only not constant graph (#10867)
* SubgraphDumper: clone only not constant graph

* Apply comments
2022-03-14 18:49:11 +03:00
Maxim Vafin
4e27d936b5 Update Convert_YOLACT.md (#10942) 2022-03-14 15:21:43 +00:00
Mikhail Nosov
72fe6082ea [Preprocess] InputTensorInfo::set_from implementation (#10839)
* InputTensorInfo::from implementation

If the user's application already has an `ov::runtime::Tensor` object created,
it becomes possible to reuse the tensor's basic input characteristics (shape, precision) via the InputTensorInfo::from method

* Rename 'from' to 'set_from', as in Python the 'from' keyword is used for importing modules
Python bindings: from ov.Tensor and from numpy array

* Style fix (quotes)

* Apply suggestions from code review

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Fix code style

* Use set_from in hello_classification CPP sample

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
2022-03-14 18:02:51 +03:00
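A short usage sketch for set_from as described above; the model path and tensor shape are hypothetical, and the exact tensor type name (ov::Tensor vs. ov::runtime::Tensor) depends on the OpenVINO version:

#include <openvino/core/preprocess/pre_post_process.hpp>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // hypothetical model path

    // A tensor the application already owns; its shape and element type are
    // reused to describe the model input instead of being spelled out again.
    ov::Tensor user_tensor(ov::element::u8, ov::Shape{1, 480, 640, 3});

    ov::preprocess::PrePostProcessor ppp(model);
    ppp.input().tensor().set_from(user_tensor);
    ppp.input().preprocess().convert_element_type(ov::element::f32);
    model = ppp.build();
    return 0;
}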
Vladimir Paramuzov
4c4581940a [GPU] Use int64_t type for axis in concat (#9790) 2022-03-14 18:02:21 +03:00
Anastasia Kuporosova
d0b4cae2f8 [Python API] move util under utils (#10923)
* [Python API] move util under utils

* fix importing

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-14 14:40:28 +03:00
Mikhail Nosov
96f954c704 [Preprocessing] Crop preprocessing support (#10805)
* Crop preprocessing support

Note: instead of 'ov::Coordinate', a simple std::vector<int> is used because Coordinate doesn't support negative dimensions

Added unit tests, template reference tests, cpu and gpu tests

* Added python bindings
Fix review comments

* Fixed python code style

* Fix thresholds

* Fix python style

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-14 13:32:27 +03:00
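A hedged sketch of the crop step added above; signed std::vector<int> coordinates allow negative indices (counted from the end of a dimension), which ov::Coordinate cannot express. The begin/end-per-dimension semantics are taken from the commit description, and the shapes are hypothetical:

#include <openvino/core/preprocess/pre_post_process.hpp>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // hypothetical model path

    ov::preprocess::PrePostProcessor ppp(model);
    ppp.input().tensor().set_shape({1, 3, 300, 300});
    // Trim a 10-pixel border from both spatial dims; -10 counts from the end.
    ppp.input().preprocess().crop({0, 0, 10, 10}, {1, 3, -10, -10});
    model = ppp.build();
    return 0;
}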
Roman Kazantsev
6ec8e53183 Update Model Optimizer User Guide (#10759)
* Remove install prerequisites steps, order FWs, and move pre-processing details

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update Introduction: examples of MO CLIs, references to parameters description pages

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update Setting Input Shape section

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update Optimizing Preprocessing Computation page

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Revert location of Additional_Optimizations.md

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Describe layout and FP16 support in MO

* Fix docs issue

* Apply feedback

* Apply review feedback

* Clean-up Resources

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Mention FP16 compression in MO Introduction

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply the first portion of feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply the second portion of feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply review feedback

* Apply review feedback

* Apply the third portion of feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply feedback for FP16 compression documentation

* Apply review for FP16 page

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Apply feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply feedback

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Address feedback about tutorials, input_shape option

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Rework Setting Input Shapes section

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update "See also" list

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Correct conversion documents for each FW

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Refactor TensorFlow converting document and expand Embedding Preprocessing document

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix a link to POT

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
2022-03-14 13:08:58 +03:00
Mikhail Nosov
43cb3920fb Docs: preprocessing use case with saving model to IR (#10698)
* Docs: added preprocessing use case with saving resulting model to IR

* Enable Model Caching to 'application code' section

* Fix review comments
2022-03-14 12:12:20 +03:00
Alexey Suhov
ef00057c8e Change product version to 2022.2.0 (#10911)
* Change product version to 2022.2.0

* change OPENVINO_VERSION_MINOR
2022-03-14 11:42:03 +03:00
yanlan song
a6583965a5 try avoid timeout in batch plugin during transition in auto plugin (#10753)
* initial debug

Signed-off-by: fishbell <bell.song@intel.com>

* refine

Signed-off-by: fishbell <bell.song@intel.com>

* remove debug msg

Signed-off-by: fishbell <bell.song@intel.com>
2022-03-14 11:05:26 +03:00
Ilya Churaev
0bc6196d96 Migrate to new RTTI for all transformations and graph structures (#10703)
* Migrate to new RTTI for all transformations and graph structures

* Fixed code style
2022-03-14 06:57:21 +03:00
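A minimal sketch of the "new RTTI" style this migration moves transformations to: type info is declared via the OPENVINO_RTTI macro instead of a hand-written static type_info member. The pass name is illustrative:

#include <openvino/pass/graph_rewrite.hpp>

class MySimplification : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("MySimplification", "0");
    MySimplification() = default;  // matcher/pattern registration would go here
};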
Maxim Gordeev
4be227f5a1 [IE Samples] Fixed rights for file with image (#10924) 2022-03-12 01:51:34 +03:00
Alexander Zhogov
45d34e7885 CODEOWNERS: Fix 3d party dependencies 2022-03-11 21:43:09 +03:00
Dawid Kożykowski
3b5f3d1957 Snippets for preprocessing migration page (#10899)
* add placeholder for python version of first snippet

* fix problem with placeholder

* fix wrong file name

* fix fragment name

* update python snippets

* move imports to the top of the code fragments
2022-03-11 21:19:07 +03:00
Przemyslaw Wysocki
de3088adce [Docs] Add Python snippets for configure devices (#10913)
* Add Python docs for configure devices

* Bugfixes

* Minor changes

* Minor changes

* Format changes

* Minor changes
2022-03-11 21:17:22 +03:00
Anastasia Kuporosova
23604ca28c [Python API] Update doc style (#10708)
* [Python API] Update doc style

* apply comments

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-11 19:18:20 +03:00
Irina Efode
7790f2036f [IE TESTS][CONFORMANCE] Fix for IRs with dynamic shapes (#10880) 2022-03-11 19:17:53 +03:00
Vladimir Zinoviev
eebe8c70f9 [LPT] Fix out of bounds access in reshape (#10791) 2022-03-11 18:04:14 +03:00
Mikhail Nosov
86322c916b Fix loading time issues for POT models (with lots of results) (#10898)
* Fix loading time issues for POT models (with lots of results)

* Same for 'optimized_strided_slice'
2022-03-11 17:44:36 +03:00
Andrey Noskov
6fdd983750 [GNA] Added multi crop test (#10459) 2022-03-11 15:05:14 +03:00
Andrey Noskov
caaacb2db4 [GNA] Moved single Lstm-cell test from deprecated tests (#10472)
* [GNA] Single lstm-cell test added

* Added additional config for test

* one more input and hidden shape

* Added cell with ReLU
Deleted deprecated test

* test added as lstm_cell_basic

* Enabled gna_compact_mode

Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>

* enabled compact_mode in all tests

Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
2022-03-11 15:03:16 +03:00
Ilya Churaev
d93ce1e246 Added intro to transformation guide (#10894) 2022-03-11 14:27:11 +03:00
Vladimir Dudnik
f48b233629 update omz intel models, fix docs (#10843) 2022-03-11 12:34:55 +03:00
Vladislav Volkov
9d74f5cd76 Export/import fixed for param->result and const->result models (#10838)
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2022-03-11 11:10:56 +03:00
Nikolay Tyukaev
2940db0fb1 benchmark legal, snippet margin bottom (#10886) 2022-03-11 11:10:11 +03:00
Sergey Lyubimtsev
dd076264eb add pre-release description for wheels packages (2) (#10813)
* add pre-release description for wheels packages

* refactoring

* lines

* Revert "lines"

This reverts commit 01a74dc168.

* linters

* linters

* nightly revision of docs URL
2022-03-11 11:09:17 +03:00
Sergey Lyubimtsev
0dc2ab182b Update APT instructions according to repository configuration (#10869) 2022-03-11 10:45:31 +03:00
Alexey Lebedev
97efdb5020 [docs] python snippet for dynamic shapes (#10762)
* Create snipp

* link python snipp with doc

* fix docs

* Apply suggestions from code review

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>

* Fix cpp comments

Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
2022-03-11 08:42:33 +03:00
Elizaveta Lobanova
4e0a740eb3 [GNA] Support of overload correction for MatMul with 2 non-constant layers (#10447) 2022-03-10 15:16:17 +03:00
Vladimir Paramuzov
09246e2db8 [GPU] GPU plugin docs (#10734) 2022-03-10 15:01:52 +03:00
Anton Pankratov
a8a2640fb7 Added callback and wait migration guide (#10775)
* Added callback and wait migration guide

* Added start async

* Simplified wait

* Added selector for sync async

* fixed doc

* fixed build

* fixed doc

* fixed doc
2022-03-10 14:00:42 +03:00
Irina Efode
5566b67238 Frontend support in Subgraph dumper (#10765)
* Init

* Enable frontends

* Update read_ir_compare_with_refs.cpp

* Remove extra line

* Update CMakeLists.txt
2022-03-10 13:34:47 +03:00
Nikita Malinin
4746d0881b [POT] Update BC with the Parameter nodes connection (#10848)
* Update BC with the Parameter nodes connection

* Update test_sanity with octave
2022-03-10 10:28:47 +03:00
Tatiana Savina
d7372d678c [DOCS] fixes for nightly (#10842)
* fixes for nightly

* modify xfile

* change launcher ref
2022-03-10 09:10:54 +03:00
Katarzyna Mitrus
531fa9018d [DOCS] Python snippets for Hetero execution page (#10769)
* Update docs ov hetero snippets

* Add missing space

* Update precision hint

* Update hetero docs snippets with GPU profiling
2022-03-09 19:34:42 +03:00
Karol Blaszczak
44ec4661a4 Update Auto plugin docs (#10623)
* Update Auto plugin docs

Revise auto plugin and auto plugin debugging articles. Include necessary image files.

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update docs/OV_Runtime_UG/supported_plugins/AutoPlugin_Debugging.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Update AutoPlugin_Debugging.md

* include review corrections

* Update auto_device_selection.md

* Update auto_device_selection.md

* Update auto_device_selection.md

* Update auto_device_selection.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
2022-03-09 18:09:37 +03:00
Serhii Pavlovskyi
948347f3dd ncc build fixes (#10367)
* fix .ncc_style target names

it was breaking configure on systems with libclang-12-dev, clang-12,
ninja and cmake 3.17+ (ninja complains about a duplicate
target). With a lower cmake version configure succeeds, but the build exits
immediately with an error. By replacing ninja with make the error becomes a
warning (it's still significant: make just skips duplicate rules, i.e.
doesn't check the style of some source files; the rule duplication is a genuine
bug). Without libclang-12-dev and clang-12, ENABLE_NCC_STYLE is OFF and
the bug is not triggered

* silence uninitialized warning in core_integration

probably it was always initialized before use, but the compiler wasn't made
aware of it

* fix function spelling to unbreak code style checks in benchmark_app

* include <thread> for std::this_thread

existing code was relying on namespace pollution by the old libstdc++

* replace is_pod with is_standard_layout && is_trivial

is_pod is deprecated and breaks the build on current gcc

Co-authored-by: Serhii Pavlovskyi <spavlovskyi@lohika.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2022-03-09 13:42:06 +03:00
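The is_pod replacement mentioned above, in isolation: std::is_pod is deprecated since C++20, and the equivalent check splits into standard-layout plus trivial:

#include <type_traits>

template <typename T>
constexpr bool is_pod_like_v = std::is_standard_layout_v<T> && std::is_trivial_v<T>;

static_assert(is_pod_like_v<int>, "int is standard-layout and trivial");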
Vladimir Dudnik
d9976332b0 upd open-model-zoo, upd docs, upd ac cfgs (#10676) 2022-03-09 11:48:47 +03:00
Ilya Churaev
702f8cf223 Fixed duplicated words (#10827) 2022-03-09 08:06:12 +00:00
Taylor Yeonbok Lee
3e7e0d5651 [DRYRUN] Fix dryrun in partial build (#10761)
When a partial build is called for dryrun, do constant propagation too.
In the normal case, a partial build does not do constant propagation, to save build time of the internal program.
However, if a partial build is called with dryrun, it will fail at transfer_constants due to generic nodes which do not have an impl.
2022-03-07 13:37:21 +09:00
Tatiana Savina
de47a3b4a4 POT documentation updates (#10578)
* POT changes

* change install

* change img size

* remove cli option
2022-03-06 09:14:39 +03:00
Nikita Malinin
41818a377f [POT] Update IEEngine with the Dynamic model support (#10717)
* Update IEEngine with the Dynamic models support

* Update with the batch

* Method naming fix

* Update image_loader & tests with dynamic models

* Update test_sanity.py

* Replace custom_mo_config from the model
2022-03-05 15:49:21 +03:00
Egor Duplensky
3b8e960b10 [CPU] Avoid using cache for constant inplace or multi-child edges (#10573) 2022-03-05 14:37:50 +03:00
Tatiana Savina
3b8ca9f0af [DOCS] Fixes for nightly (#10806)
* add img

* wb img for input

* dataset added

* add img

* wb img for input

* dataset added

* ov_fix

* more imgs

* new img

* new img

* nlp

* new img

* delete img
2022-03-05 13:03:46 +03:00
Maksim Kutakov
e87ea5d611 [CPU] Use raw pointer to share peer data for constants (#10744) 2022-03-05 12:32:11 +03:00
Andrey Zaytsev
0f8c599ce7 Re-structure Model Optimizer User Guide and Clean-up (#10801)
* Modified the workflow diagram

* Moved supported topology lists to separate topics

* Additional changes

* Removed Supported Topologies list and Deprecated pages

* Created the Model Conversion Tutorials section for instructions for specific models

* Topic names alignment, removed Default_Model_Optimizer_Optimizations.md

* Additional structural changes

* Fixed links

* heading fixes
2022-03-05 12:31:15 +03:00
Roman Kazantsev
0c20e7a3ca [MO] Remove IR frontend from available frontend list in MO (#10798)
* [MO] Remove IR frontend from available frontend list in MO

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix issue - forget to pass FEM

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Fix issue for TF with new FE and default legacy

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2022-03-04 20:50:02 +03:00
Yuan Xu
3b24ed032a Yuan install guide 22/1 (#10786)
* Add Overview page

* Revert "Add Overview page"

* fix errors & formatting

* fix article usage according to the styles

* fix errors

* update according to PXT comments
2022-03-04 19:32:10 +03:00
Ilya Churaev
cb9049076b Enabled clang-format for cc and itt libs (#10793) 2022-03-04 18:40:18 +03:00
Dmitry Pigasin
c28cebb2a6 [CPP Speech Sample] Fix result saving when batch size is not 1 (#10714)
* Fix result saving when batch size is not 1

* Remove useless if statement

* improved processing scores for model with more than one outputs

* added checking on count of model outputs

* improve if statements

* divide fix for model with several outputs to other PR

Co-authored-by: Maxim Gordeev <maxim.gordeev@intel.com>
2022-03-04 15:41:47 +03:00
Anuj Mittal
7e8bbf4968 installing-openvino-yocto.md: fix install instructions (#10785)
Change _ to : as per the new override syntax.

Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
2022-03-04 15:41:37 +03:00
Nikita Malinin
69ad9e80e1 [POT] Update OverflowCorrection algo for nodes without bias (#10687)
* Update OverflowCorrection algo for nodes without bias

* Pylint line fix

* Update OC with the last add name

* Pylint fix
2022-03-04 14:50:44 +03:00
Irina Efode
32edd596e3 [IE TESTS] Functional test review: Part 4 (#10772)
* [IE TESTS] Move specific import_export_tests to gna and myriad

* add
2022-03-04 14:42:16 +03:00
Ilya Churaev
ed702910bd Enable clang for transformations (#10778)
* Enable clang for transformations

* Fixed code style

* Fixed build

* Fixed macOS
2022-03-04 13:38:42 +03:00
Irina Efode
082ebbcbf8 [IE TESTS] Remove NgraphConversionTests (#10770) 2022-03-04 09:52:58 +00:00
Fedor Zharinov
043a773f61 [Benchmark_app]Check all I/O names (#10745)
* Check all I/O names

* stylefix
2022-03-04 09:49:03 +03:00
hyunback kim
5cee51e9c4 [GPU] update to check quantize fusing condition in oneDNN (#10680)
* [GPU] update the condition for minimize_local_reorders

* Update to check needs reorder condition in quantize.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-03-04 14:30:07 +09:00
yanlan song
8a2252b774 fix multi infer result corrupt issue (#10704)
* do not share blob

Signed-off-by: fishbell <bell.song@intel.com>

* build error

Signed-off-by: fishbell <bell.song@intel.com>

* remove comment codes

Signed-off-by: fishbell <bell.song@intel.com>
2022-03-04 08:13:12 +03:00
Mateusz Bencer
fd18632d89 Update --extenions MO doc (#10763) 2022-03-04 07:24:52 +03:00
Wang, Yang
78c9f5b0a2 Add common test of the key PERFORMANCE_HINT for AUTO plugin API 2.0. (#10505)
* Add common test of the key PERFORMANCE_HINT for AUTO plugin API 2.0.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Add common test case for config check.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Use the implemented property test case.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2022-03-04 10:04:48 +08:00
Alexander Kozlov
1bbd92a8f8 Revised Tuning For Performance and Model optimization docs (#10276)
* Revised Tuning for performance and Model optimization docs

* Fixed links

* Fixed link

* Applied comments

* Fixed one more comment
2022-03-03 18:58:58 +03:00
Ilya Churaev
554b50eb85 Remove redundant calls from set_argument (#10701)
* Remove redundant calls from set_argument

* Fixed tests
2022-03-03 18:01:59 +03:00
Vladimir Gavrilov
f8ce57319b Specifications of operations RDFT and IRDFT (#10242)
* Written the draft of the specification of the operation RFFT.

* Started to write the specification of the operation IRFFT.

* Small fix.

* Renamed RFFT operation as RDFT.

* Fix in Operations_specifications.md.

* Written the specification of the operation IRDFT.

* Fixes in examples.

* Fixes in opset9.md and Operations_specifications.md.

* Small fix.

* Replaced opset8 by opset9 in opset9.md.

* Deleted redundant sentences.

* Small fix.

* Replaced input_shape by data_shape.

* Fixed mistypes.

* Fixes of mistypes.

* Fixed typo.

* Fixed RDFT specification, in order to perform signal_size input as in TF and PyTorch.

* Fixes in examples for RDFT.

* Fixes in the output shape calculation of IRDFT. Now this calculation is as in TF and PyTorch.
2022-03-03 13:47:23 +00:00
Maxim Gordeev
f81f819ecd [IE Samples] Improved processing outputs for model with more than one output (#10737)
* Improved processing outputs for model with more than one output

* fixed condition

* added checking count of output/reference files
2022-03-03 16:35:41 +03:00
3642 changed files with 128338 additions and 48667 deletions

View File

@@ -13,7 +13,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
ref: releases/2022/2
jobs:
- job: android_arm64
@@ -114,6 +114,7 @@ jobs:
-DENABLE_SAMPLES=ON
-DENABLE_INTEL_MYRIAD=OFF
-DBUILD_java_api=ON
-DBUILD_cuda_plugin=OFF
-DTHREADING=SEQ
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache

View File

@@ -13,13 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
ref: releases/2022/2
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/1
ref: releases/2022/2
jobs:
- job: Lin
@@ -111,7 +111,8 @@ jobs:
set -e
$(REPO_DIR)/install_build_dependencies.sh
# Move jdk into contrib
sudo apt --assume-yes install openjdk-11-jdk
# 'clang' compiler is to check that samples can be built using it
sudo apt --assume-yes install openjdk-11-jdk clang
# For opencv-python: python3-setuptools and pip upgrade
python3 -m pip install --upgrade pip
python3 -m pip install -r $(REPO_DIR)/src/bindings/python/src/compatibility/openvino/requirements.txt
@@ -157,10 +158,10 @@ jobs:
-DENABLE_FASTER_BUILD=ON
-DENABLE_STRICT_DEPENDENCIES=OFF
-DENABLE_REQUIREMENTS_INSTALL=OFF
-DENABLE_OPENCV=ON
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DBUILD_cuda_plugin=OFF
$(REPO_DIR)
workingDirectory: $(BUILD_DIR)
@@ -214,7 +215,6 @@ jobs:
set -e
mkdir -p $(INSTALL_DIR)/opencv/
cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
cp -R $(REPO_DIR)/temp/opencv_4.5.2_ubuntu20/opencv/* $(INSTALL_DIR)/opencv/
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'
@@ -226,6 +226,14 @@ jobs:
displayName: 'Build cpp samples'
continueOnError: false
- script: |
export CC=clang
export CXX=clang++
$(INSTALL_DIR)/samples/cpp/build_samples.sh -i $(INSTALL_DIR)
workingDirectory: $(BUILD_SAMPLES_DIR)
displayName: 'Build cpp samples - clang'
continueOnError: false
- script: $(INSTALL_DIR)/samples/c/build_samples.sh -i $(INSTALL_DIR)
workingDirectory: $(BUILD_SAMPLES_DIR)
displayName: 'Build c samples'
@@ -247,7 +255,7 @@ jobs:
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest -s $(INSTALL_TEST_DIR)/pyopenvino $(PYTHON_STATIC_ARGS) --junitxml=TEST-Pyngraph.xml --ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_utils/test_utils.py --ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_zoo_models.py --ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_backend.py
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest -s $(INSTALL_TEST_DIR)/pyopenvino $(PYTHON_STATIC_ARGS) --junitxml=TEST-Pyngraph.xml --ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_utils/test_utils.py --ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_zoo_models.py --ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_onnx/test_backend.py -v
displayName: 'Python API 2.0 Tests'
continueOnError: false
@@ -255,7 +263,6 @@ jobs:
export MO_ROOT=$(INSTALL_DIR)/tools/mo
. $(SETUPVARS) -pyver 3.8 && python3 -m pytest -s $(INSTALL_DIR)/tests/mo/unit_tests --junitxml=TEST-ModelOptimizer.xml
displayName: 'Model Optimizer UT'
condition: false
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml
@@ -263,10 +270,20 @@ jobs:
displayName: 'OV Core UT'
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/onnx_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:TEST-ONNXImportUT.xml
workingDirectory: $(INSTALL_TEST_DIR)
displayName: 'ONNX Frontend UT'
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/paddle_tests --gtest_print_time=1 --gtest_output=xml:TEST-Paddle.xml
displayName: 'Paddle Frontend UT'
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/onnx_frontend_tests --gtest_print_time=1 --gtest_output=xml:TEST-Paddle.xml
workingDirectory: $(INSTALL_TEST_DIR)
displayName: 'ONNX Frontend UT'
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/tensorflow_tests --gtest_print_time=1 --gtest_output=xml:TEST-Tensorflow.xml
displayName: 'Tensorflow Frontend UT'
continueOnError: false
@@ -292,6 +309,10 @@ jobs:
displayName: 'VPU UT'
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/XLinkTests --gtest_output=xml:TEST-XLinkTests.xml
displayName: 'XLink Tests'
continueOnError: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ieMultiPluginUnitTests --gtest_output=xml:TEST-ieMultiPluginUnitTests.xml
displayName: 'MULTI UT'
continueOnError: false
@@ -311,7 +332,7 @@ jobs:
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/cpuFuncTests --gtest_filter=*smoke* --gtest_print_time=1 --gtest_output=xml:TEST-cpuFuncTests.xml
displayName: 'CPU FuncTests'
continueOnError: false
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF')
condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
- script: |
export DATA_PATH=$(MODELS_PATH)

View File

@@ -13,7 +13,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
ref: releases/2022/2
jobs:
- job: linux_arm64
@@ -142,6 +142,7 @@ jobs:
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_SAMPLES=ON
-DBUILD_java_api=OFF
-DBUILD_cuda_plugin=OFF
-DENABLE_INTEL_MYRIAD=OFF
-DTHREADING=SEQ
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules

View File

@@ -21,7 +21,6 @@ jobs:
VSTS_HTTP_TIMEOUT: 200
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build

View File

@@ -4,7 +4,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
ref: releases/2022/2
jobs:
- job: Lin

View File

@@ -4,6 +4,7 @@
# type: github
# endpoint: openvinotoolkit
# name: openvinotoolkit/testdata
# ref: releases/2022/2
jobs:
- job: Lin_lohika

View File

@@ -13,13 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
ref: releases/2022/2
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/1
ref: releases/2022/2
jobs:
- job: Mac
@@ -101,7 +101,7 @@ jobs:
export PATH="/usr/local/opt/cython/bin:$PATH"
export CC=gcc
export CXX=g++
cmake -GNinja -DVERBOSE_BUILD=ON -DENABLE_REQUIREMENTS_INSTALL=OFF -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=OFF -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache $(REPO_DIR)
cmake -GNinja -DVERBOSE_BUILD=ON -DENABLE_REQUIREMENTS_INSTALL=OFF -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=OFF -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache -DBUILD_cuda_plugin=OFF $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
@@ -151,12 +151,18 @@ jobs:
- script: ls -alR $(INSTALL_DIR)
displayName: 'List install files'
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*:IE_CPU.onnx_model_sigmoid:IE_CPU/GRUSequenceOp.onnx_model_gru* --gtest_output=xml:TEST-NGraphUT.xml
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml
workingDirectory: $(INSTALL_TEST_DIR)
displayName: 'OV Core UT'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/onnx_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU*:IE_CPU.onnx_model_sigmoid:IE_CPU/GRUSequenceOp.onnx_model_gru* --gtest_output=xml:TEST-ONNXImportUT.xml
workingDirectory: $(INSTALL_TEST_DIR)
displayName: 'ONNX Frontend UT'
continueOnError: false
enabled: false
- script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/InferenceEngineUnitTests --gtest_print_time=1 --gtest_filter=-MKLDNNGraphStructureTests.TestNoRedundantReordersBeforeDWConvolution:TestConvolution/MKLDNNGraphConvolutionTests.TestsConvolution/0:TestConvolutionDefaultPrimitivesPriority/MKLDNNGraphConvolutionTests.TestsConvolution/0 --gtest_output=xml:TEST-InferenceEngineUnitTests.xml
displayName: 'IE UT old'
continueOnError: false

View File

@@ -13,13 +13,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/1
ref: releases/2022/2
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/1
ref: releases/2022/2
jobs:
- job: Win
@@ -32,7 +32,7 @@ jobs:
maxParallel: 2
# About 150% of total time
timeoutInMinutes: 180
timeoutInMinutes: 270 #Temporary change
pool:
name: WIN_VMSS_VENV_D8S_WU2
@@ -135,7 +135,7 @@ jobs:
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_GAPI_PREPROCESSING=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DBUILD_cuda_plugin=OFF $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
@@ -169,13 +169,6 @@ jobs:
workingDirectory: $(BUILD_SAMPLES_TESTS_DIR)
displayName: 'Install Samples Tests'
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'
- script: dir $(INSTALL_DIR) /s
displayName: 'List install files'
- script: $(INSTALL_DIR)\samples\cpp\build_samples_msvc.bat -i $(INSTALL_DIR)
workingDirectory: $(BUILD_SAMPLES_DIR)
displayName: 'Build cpp samples'
@@ -200,9 +193,15 @@ jobs:
python -m pytest $(INSTALL_DIR)\tests\smoke_tests\ --env_conf $(INSTALL_DIR)\tests\smoke_tests\env_config.yml -s --junitxml=TEST-SamplesSmokeTests.xml
workingDirectory: $(INSTALL_DIR)
displayName: 'Samples Smoke Tests'
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'ON')
continueOnError: false
- script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install tests'
- script: dir $(INSTALL_DIR) /s
displayName: 'List install files'
- script: rd /Q /S $(BUILD_DIR)
displayName: 'Clean build dir'
continueOnError: false
@@ -212,10 +211,20 @@ jobs:
displayName: 'OV Core UT'
continueOnError: false
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\onnx_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:TEST-ONNXImportUT.xml
workingDirectory: $(INSTALL_TEST_DIR)
displayName: 'ONNX Frontend UT'
continueOnError: false
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\paddle_tests --gtest_print_time=1 --gtest_output=xml:TEST-Paddle.xml
displayName: 'Paddle Frontend UT'
continueOnError: false
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\onnx_frontend_tests --gtest_print_time=1 --gtest_output=xml:TEST-ONNX.xml
workingDirectory: $(INSTALL_TEST_DIR)
displayName: 'ONNX Frontend UT'
continueOnError: false
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\tensorflow_tests --gtest_print_time=1 --gtest_output=xml:TEST-Tensorflow.xml
displayName: 'Tensorflow Frontend UT'
continueOnError: false
@@ -242,6 +251,10 @@ jobs:
displayName: 'VPU UT'
continueOnError: false
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\XLinkTests --gtest_output=xml:TEST-XLinkTests.xml
displayName: 'XLink Tests'
continueOnError: false
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\onnxImporterUnitTests --gtest_output=xml:TEST-onnxImporterUnitTests.xml
displayName: 'ONNX Importer UT'
continueOnError: false
@@ -263,7 +276,7 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\cpuFuncTests --gtest_filter=*smoke* --gtest_output=xml:TEST-cpuFuncTests.xml
displayName: 'CPU FuncTests'
continueOnError: false
condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF')
condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
- script: |
set DATA_PATH=$(MODELS_PATH)

View File

@@ -21,7 +21,6 @@ jobs:
VSTS_HTTP_TIMEOUT: 200
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
MODELS_PATH: $(REPO_DIR)\..\testdata
WORK_DIR: $(Pipeline.Workspace)\_w
BUILD_DIR: $(WORK_DIR)\build

View File

@@ -1,213 +0,0 @@
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
DOCKER_CONTAINER_NAME = "openvino-onnx-ci-container"
DOCKER_IMAGE_TAG = "openvino-onnx-ci-image"
ONNX_MODEL_ZOO_SHA = "d58213534f2a4d1c4b19ba62b3bb5f544353256e"
BACKEND_CONFIGURATIONS = [
[ name: "Release", build_type: "Release" ],
[ name: "Debug", build_type: "Debug" ],
]
// workaround for aborting previous builds on PR update
@NonCPS
def stopPreviousRunningBuilds() {
def jobname = env.JOB_NAME
if (jobname.startsWith("onnx-ci/openvino onnx ci/openvino/PR")){
def buildnum = env.BUILD_NUMBER.toInteger()
def job = Jenkins.instance.getItemByFullName(jobname)
def job_newest = job.builds.first()
for (build in job.builds.reverse()[0..<-1]) {
if (build.isBuilding()){
echo "Stop task = ${build} because newest #${job_newest} is on the way"
build.doStop();
continue;
}
}
}
}
def getGitPrInfo(String project, String workdir) {
def gitPrInfo = [
prAuthorEmail : "",
commitAuthorEmail : "",
commitHash : "",
commitSubject : ""
]
try {
dir ("${workdir}/${project}") {
gitPrInfo.prAuthorEmail = sh (script: 'git log -1 --pretty="format:%ae" ', returnStdout: true).trim()
gitPrInfo.commitAuthorEmail = sh (script: 'git log -1 --pretty="format:%ce" ', returnStdout: true).trim()
gitPrInfo.commitSubject = sh (script: 'git log -1 --pretty="format:%s" ', returnStdout: true).trim()  // %s: subject line
gitPrInfo.commitHash = sh (script: 'git log -1 --pretty="format:%H" ', returnStdout: true).trim()  // %H: full hash
}
}
catch(e) {
echo "Failed to retrieve ${project} git repository information!"
echo "ERROR: ${e}"
}
return gitPrInfo
}
def notifyByEmail(def gitPrInfo) {
stage('Notify') {
String notifyPeople = "${gitPrInfo.prAuthorEmail}, ${gitPrInfo.commitAuthorEmail}"
emailext (
subject: "OpenVino CI: PR ${CHANGE_ID} ${currentBuild.result}!",
body: """
Status: ${currentBuild.result}
Pull Request Title: ${CHANGE_TITLE}
Pull Request: ${CHANGE_URL}
Branch: ${CHANGE_BRANCH}
Commit Hash: ${gitPrInfo.commitHash}
Commit Subject: ${gitPrInfo.commitSubject}
Jenkins Build: ${RUN_DISPLAY_URL}
""",
to: "${notifyPeople}"
)
}
}
def gitSubmoduleUpdate(String repository_name, String workdir) {
dir ("${workdir}/${repository_name}") {
sh label: "Init ${repository_name} submodules",
script:
"""
git submodule init && git submodule update \
--init \
--no-fetch \
--recursive
"""
}
}
def prepare_repository(String workdir) {
dir("${workdir}") {
println "Preparing repository in directory: ${workdir}"
checkout scm
gitSubmoduleUpdate(PROJECT_NAME, workdir)
}
}
def updateModels() {
sh """
./src/bindings/python/tests/test_onnx/model_zoo_preprocess.sh -d ${HOME}/ONNX_CI/models_data -o -s ${ONNX_MODEL_ZOO_SHA}
"""
}
def get_docker_container_name(Map configuration){
println "RUN get_docker_container_name for ${configuration.name}"
String docker_container_name = "${DOCKER_CONTAINER_NAME}_${BUILD_NUMBER}_${env.CHANGE_ID}_${configuration.name}"
return docker_container_name
}
def buildDockerImage(Map configuration, String workdir) {
String docker_image_tag = "${DOCKER_IMAGE_TAG}_${BUILD_NUMBER}_${env.CHANGE_ID}_${configuration.name}".toLowerCase()
println "docker_image_tag: ${docker_image_tag}"
updateModels()
sh """
docker build --tag=${docker_image_tag} \
--build-arg BUILD_TYPE=${configuration.build_type} \
--file=.ci/openvino-onnx/Dockerfile \
--build-arg http_proxy=${HTTP_PROXY} \
--build-arg https_proxy=${HTTPS_PROXY} .
"""
}
def runTests(Map configuration, String workdir) {
println "Run tests for ${configuration.name}"
String docker_image_tag = "${DOCKER_IMAGE_TAG}_${BUILD_NUMBER}_${env.CHANGE_ID}_${configuration.name}".toLowerCase()
String docker_container_name = get_docker_container_name(configuration)
// Run only basic unit tests in Debug configuration
if (configuration.build_type == "Debug") {
sh """
docker run --name ${docker_container_name} ${docker_image_tag}
"""
}
// Run unit-tests AND large model tests by default
else {
sh """
docker run --name ${docker_container_name} \
--volume ${HOME}/ONNX_CI/models_data/model_zoo/onnx_model_zoo_${ONNX_MODEL_ZOO_SHA}:/root/.onnx/model_zoo/onnx_model_zoo \
--volume ${HOME}/ONNX_CI/data/model_zoo/MSFT:/root/.onnx/model_zoo/MSFT \
${docker_image_tag} /bin/bash -c "tox && tox -e zoo_models"
"""
}
}
def getConfigurationsMap() {
def configurationsMap = [:]
for (backend in BACKEND_CONFIGURATIONS) {
def configuration = backend.clone()
configurationsMap[configuration.name] = {
stage(configuration.name) { CONFIGURATION_WORKFLOW(configuration) }
}
}
return configurationsMap
}
CONFIGURATION_WORKFLOW = { configuration ->
node("OpenVINO") {
String workdir = "${HOME}/workspace/${BUILD_NUMBER}_${env.CHANGE_ID}_${configuration.name}"
try {
PROJECT_NAME = "openvino"
stage("Clone repository") {
prepare_repository(workdir)
}
stage("Prepare Docker environment") {
dir("${workdir}") {
buildDockerImage(configuration, workdir)
}
}
stage("Run tests") {
timeout(time: 60, unit: 'MINUTES') {
runTests(configuration, workdir)
}
}
}
catch(e) {
// Set result to ABORTED if exception contains exit code of a process interrupted by SIGTERM
if ("$e".contains("143")) {
currentBuild.result = "ABORTED"
} else {
currentBuild.result = "FAILURE"
}
def gitPrInfo = getGitPrInfo(PROJECT_NAME, workdir)
notifyByEmail(gitPrInfo)
}
finally {
stage("Cleanup") {
String docker_container_name = get_docker_container_name(configuration)
sh """
docker rm -f ${docker_container_name}
rm -rf ${workdir}
"""
}
}
}
}
pipeline {
agent none
options {
skipDefaultCheckout true
timeout(activity: true, time: 120, unit: 'MINUTES')
}
stages {
stage('Parallel CI') {
steps {
stopPreviousRunningBuilds()
script {
parallelStagesMap = getConfigurationsMap()
parallel parallelStagesMap
}
}
}
}
}

View File

@@ -1,65 +0,0 @@
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
timeout(30)
{
node(LABEL) {
BUILD_WORKSPACE = "$WORKSPACE/$BUILD_NUMBER"
WATCHDOG_ROOT = "$BUILD_WORKSPACE/.ci/openvino-onnx/watchdog"
VENV_PATH = "${BUILD_WORKSPACE}/.wdvenv"
try {
stage("Clone repository") {
dir ("$BUILD_WORKSPACE") {
checkout([$class: 'GitSCM', branches: [[name: "*/$BRANCH"]],
doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'CloneOption', timeout: 30]], submoduleCfg: [],
userRemoteConfigs: [[credentialsId: "${GITHUB_KEY}", url: "${OPEN_VINO_URL}"]]])
}
}
stage("Prepare environment") {
sh """#!/bin/bash
if [ ! -d ${VENV_PATH} ]; then
python3 -m venv ${VENV_PATH}
source ${VENV_PATH}/bin/activate
pip install -r ${WATCHDOG_ROOT}/requirements.txt
fi
"""
}
stage("Run script") {
withCredentials([
usernamePassword(credentialsId: '7157091e-bc04-42f0-99fd-dc4da2922a55',
usernameVariable: 'username',
passwordVariable: 'password')])
{
dir ("$BUILD_WORKSPACE") {
sh """#!/bin/bash
source ${VENV_PATH}/bin/activate
export PYTHONHTTPSVERIFY=0
python ${WATCHDOG_ROOT}/src/main.py \
--msteams-url=${MSTEAMS_URL_FILE} \
--github-credentials '${username}' '${password}' \
--github-org=${GITHUB_ORG} \
--github-project=${GITHUB_PROJECT} \
--jenkins-token=${JENKINS_TOKEN_FILE} \
--jenkins-server=${JENKINS_SERVER} \
--jenkins-user=${JENKINS_USER} \
--ci-job=${CI_JOB_NAME} \
--watchdog-job=${WATCHDOG_JOB_NAME}
"""
}
}
}
} catch (e) {
echo "$e"
currentBuild.result = "FAILURE"
} finally {
stage("Cleanup") {
sh """
cd $BUILD_WORKSPACE
rm -rf ..?* .[!.]* *
"""
}
}
}
}

View File

@@ -1,6 +0,0 @@
python-jenkins==1.7.0
retrying==1.3.3
pygithub==1.51
timeout-decorator==0.4.1
requests==2.23.0
wheel

View File

@@ -1,108 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import logging
import timeout_decorator
from datetime import datetime
from retrying import retry
from github import Github, GithubException
# Logging
logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
_RETRY_LIMIT = 3
_RETRY_COOLDOWN_MS = 2000
_REQUEST_TIMEOUT_S = 10
class GitWrapper:
"""Class wrapping PyGithub API.
The purpose of this class is to wrap methods from the PyGithub API used in Watchdog, making them less error-prone and
more convenient to use. Docs for the used API, including the wrapped methods, can be found at:
https://pygithub.readthedocs.io/en/latest/introduction.html
:param github_credentials: Credentials used for GitHub
:param repository: GitHub repository name
:param project: GitHub project name
:type github_credentials: String
:type repository: String
:type project: String
"""
def __init__(self, github_credentials, repository, project):
self.git = Github(*github_credentials)
self.repository = repository
self.project = project
self.github_credentials = github_credentials
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_git_time(self):
"""Retrieve time from GitHub.
Used to reliably determine time during Watchdog run.
:return: Datetime object describing current time
:rtype: datetime
"""
try:
datetime_object = self._get_git_time()
except ValueError as e:
raise GitWrapperError(str(e))
except GithubException as e:
message = 'GitHub Exception during API status retrieval. Exception: {}'.format(str(e))
raise GitWrapperError(message)
except timeout_decorator.TimeoutError:
message = 'GitHub Exception during API status retrieval. Timeout during API request.'
raise GitWrapperError(message)
return datetime_object
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_pull_requests(self):
"""Retrieve paginated list of pull requests from GitHub.
:return: Paginated list of Pull Requests in GitHub repo
:rtype: github.PaginatedList.PaginatedList of github.PullRequest.PullRequest
"""
try:
prs = self._get_pull_requests()
except GithubException as e:
message = 'GitHub Exception during API status retrieval. Exception: {}'.format(str(e))
raise GitWrapperError(message)
return prs
@timeout_decorator.timeout(_REQUEST_TIMEOUT_S)
def _get_git_time(self):
"""Private method retrieving time from GitHub.
:return: Datetime object describing current time
:rtype: datetime
"""
datetime_string = self.git.get_api_status().raw_headers.get('date', '')
datetime_format = '%a, %d %b %Y %H:%M:%S %Z'
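# Example header value: 'Mon, 12 Sep 2022 14:16:13 GMT'. strptime's %Z
# accepts the zone name but leaves the result naive (no tzinfo attached).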
datetime_object = datetime.strptime(datetime_string, datetime_format)
return datetime_object
@timeout_decorator.timeout(_REQUEST_TIMEOUT_S)
def _get_pull_requests(self):
"""Private method retrieving pull requests from GitHub.
:return: Paginated list of Pull Requests in GitHub repo
:rtype: github.PaginatedList.PaginatedList of github.PullRequest.PullRequest
"""
return self.git.get_organization(self.repository).get_repo(self.project).get_pulls()
class GitWrapperError(Exception):
"""Base class for exceptions raised in GitWrapper.
:param message Explanation of the error
"""
def __init__(self, message):
self.message = message
log.exception(message)
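For reference, a minimal usage sketch of the wrapper above; the login, token, and org/repo values are illustrative placeholders, not taken from this repository:

from git_wrapper import GitWrapper, GitWrapperError

# (login, token) pair and org/repo names below are placeholders
wrapper = GitWrapper(('ci-bot', '<api-token>'),
                     repository='openvinotoolkit', project='openvino')
try:
    print('GitHub time:', wrapper.get_git_time())   # retried up to _RETRY_LIMIT times
    for pr in wrapper.get_pull_requests():          # PaginatedList fetches pages lazily
        print(pr.number, pr.title)
except GitWrapperError as err:
    print('GitHub unavailable:', err.message)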


@@ -1,91 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import requests
import jenkins
import logging
from retrying import retry
# Logging
logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
_RETRY_LIMIT = 3
_RETRY_COOLDOWN_MS = 5000
class JenkinsWrapper:
"""Class wrapping Python-Jenkins API.
The purpose of this class is to wrap methods from Python-Jenkins API used in Watchdog, for less error-prone and
more convenient use. Docs for used API, including wrapped methods can be found at:
https://python-jenkins.readthedocs.io/en/latest/
:param jenkins_token: Token used for Jenkins
:param jenkins_user: Username used to connect to Jenkins
:param jenkins_server: Jenkins server address
:type jenkins_token: String
:type jenkins_user: String
:type jenkins_server: String
"""
def __init__(self, jenkins_token, jenkins_user, jenkins_server):
self.jenkins_server = jenkins_server
self.jenkins = jenkins.Jenkins(jenkins_server, username=jenkins_user,
password=jenkins_token)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_build_console_output(self, job_name, build_number):
return self.jenkins.get_build_console_output(job_name, build_number)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_job_info(self, job_name):
return self.jenkins.get_job_info(job_name)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_build_info(self, job_name, build_number):
return self.jenkins.get_build_info(job_name, build_number)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_queue_item(self, queue_id):
"""Attempt to retrieve Jenkins job queue item.
An exception stating that the queue does not exist is expected;
in that case the method returns an empty dict.
:param queue_id: Jenkins job queue ID number
:type queue_id: int
:return: Dictionary representing Jenkins job queue item
:rtype: dict
"""
try:
return self.jenkins.get_queue_item(queue_id)
except Exception as e:
# Exception 'queue does not exist' is expected behaviour when job is running
if 'queue' in str(e) and 'does not exist' in str(e):
return {}
else:
raise
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_idle_ci_hosts(self):
"""Query Jenkins for idle servers.
Send GET request to Jenkins server, querying for idle servers labeled
for OpenVino-ONNX CI job.
:return: Number of idle hosts delegated to OpenVino-ONNX CI
:rtype: int
"""
jenkins_request_url = self.jenkins_server + 'label/ci&&onnx/api/json?pretty=true'
try:
log.info('Sending request to Jenkins: %s', jenkins_request_url)
# Note: requests.Request() does not accept 'verify'; certificate checking
# is a send-time option, so it cannot be set on the Request object itself.
r = requests.Request(method='GET', url=jenkins_request_url)
response = self.jenkins.jenkins_request(r).json()
return int(response['totalExecutors']) - int(response['busyExecutors'])
except Exception as e:
log.exception('Failed to send request to Jenkins!\nException message: %s', str(e))
raise
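A short sketch of how this wrapper is typically driven; the server address, user, token, and job name are placeholders:

from jenkins_wrapper import JenkinsWrapper

# all connection values below are illustrative placeholders
jenk = JenkinsWrapper('<api-token>', jenkins_user='lab_user',
                      jenkins_server='https://jenkins.example.com/')
job = 'onnx/OpenVino_CI/PR-1234'                        # hypothetical multibranch job path
last = jenk.get_job_info(job)['lastBuild']['number']    # each call retried up to 3 times
print(jenk.get_build_console_output(job, last)[:200])   # first 200 chars of the log
print('Idle CI hosts:', jenk.get_idle_ci_hosts())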


@@ -1,89 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import sys
from watchdog import Watchdog
DEFAULT_MSTEAMS_URL_FILE = '/home/lab_nerval/tokens/msteams_url'
DEFAULT_GITHUB_ORGANIZATION = 'openvinotoolkit'
DEFAULT_GITHUB_PROJECT = 'openvino'
DEFAULT_JENKINS_TOKEN_FILE = '/home/lab_nerval/tokens/crackerjack'
DEFAULT_JENKINS_SERVER = 'https://crackerjack.intel.com/'
DEFAULT_JENKINS_USER = 'lab_nerval'
DEFAULT_CI_JOB_NAME = 'onnx/OpenVino_CI'
DEFAULT_WATCHDOG_JOB_NAME = 'onnx/ci_watchdog'
def main(args):
"""
Read args passed to script, load tokens and run watchdog.
Keyword arguments:
:param args: arguments parsed by argparse ArgumentParser
:return: returns status code 0 on successful completion
"""
jenkins_server = args.jenkins_server.strip()
jenkins_user = args.jenkins_user.strip()
jenkins_token = open(args.jenkins_token).read().replace('\n', '').strip()
msteams_url = open(args.msteams_url).read().replace('\n', '').strip()
github_credentials = args.github_credentials
github_org = args.github_org
github_project = args.github_project
ci_job = args.ci_job.strip()
watchdog_job = args.watchdog_job.strip()
quiet = args.quiet
wd = Watchdog(jenkins_token=jenkins_token,
jenkins_server=jenkins_server,
jenkins_user=jenkins_user,
github_credentials=github_credentials,
git_org=github_org,
git_project=github_project,
msteams_url=msteams_url,
ci_job_name=ci_job,
watchdog_job_name=watchdog_job)
wd.run(quiet=quiet)
return 0
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--msteams-url', help='Path to MS Teams channel url to communicate messages.',
default=DEFAULT_MSTEAMS_URL_FILE, action='store', required=False)
parser.add_argument('--github-credentials', help='GitHub user credentials to access repo.',
nargs="+", required=True)
parser.add_argument('--github-org', help='Name of organization on GitHub.',
default=DEFAULT_GITHUB_ORGANIZATION, action='store', required=False)
parser.add_argument('--github-project', help='Name of project on GitHub.',
default=DEFAULT_GITHUB_PROJECT, action='store', required=False)
parser.add_argument('--jenkins-token', help='Path to Jenkins user token to access build info.',
default=DEFAULT_JENKINS_TOKEN_FILE, action='store', required=False)
parser.add_argument('--jenkins-server', help='Jenkins server address.',
default=DEFAULT_JENKINS_SERVER, action='store', required=False)
parser.add_argument('--jenkins-user', help='Jenkins user used to log in.',
default=DEFAULT_JENKINS_USER, action='store', required=False)
parser.add_argument('--ci-job', help='Jenkins CI job name.',
default=DEFAULT_CI_JOB_NAME, action='store', required=False)
parser.add_argument('--watchdog-job', help='Jenkins CI Watchdog job name.',
default=DEFAULT_WATCHDOG_JOB_NAME, action='store', required=False)
parser.add_argument('--quiet', help="Quiet mode - doesn\'t send message to communicator.",
action='store_true', required=False)
args = parser.parse_args()
sys.exit(main(args))
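For a local dry run, the parser above can be fed arguments directly; a sketch with placeholder values, using --quiet so nothing is posted to MS Teams:

# hypothetical dry run; the token and url files must each contain a single line
args = parser.parse_args(['--github-credentials', 'ci-bot', '<api-token>',
                          '--jenkins-token', '/tmp/jenkins_token',
                          '--msteams-url', '/tmp/msteams_url',
                          '--quiet'])
main(args)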


@@ -1,128 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import requests
class MSTeamsCommunicator:
"""Class communicating with MSTeams using Incoming Webhook.
The purpose of this class is to use MSTeams API to send message.
Docs for used API, including wrapped methods can be found at:
https://docs.microsoft.com/en-us/outlook/actionable-messages/send-via-connectors
"""
def __init__(self, _ci_alerts_channel_url):
self._ci_alerts_channel_url = _ci_alerts_channel_url
self._queued_messages = {
self._ci_alerts_channel_url: [],
}
@property
def messages(self):
"""
Get list of queued messages.
:return: List of queued messages
:rtype: dict_values of List[String]
"""
return self._queued_messages.values()
def queue_message(self, message):
"""
Queue message to be sent later.
:param message: Message content
:type message: String
"""
self._queued_messages[self._ci_alerts_channel_url].append(message)
def _parse_text(self, watchdog_log, message):
"""
Parse text to display as alert.
:param watchdog_log: Watchdog log content
:param message: Unparsed message content
:type watchdog_log: String
:type message: String
"""
message_split = message.split('\n')
log_url = None
if len(message_split) == 3:
log_url = message_split[-1]
title = message_split[0]
text = message_split[1]
header = watchdog_log.split(' - ')
header_formatted = '{} - [Watchdog Log]({})'.format(header[0], header[1])
return title, log_url, '{}\n\n{}'.format(header_formatted, text)
def _json_request_content(self, title, log_url, text_formatted):
"""
Create final json request to send message to MS Teams channel.
:param title: Title of alert
:param log_url: URL to PR
:param text_formatted: General content of alert - finally formatted
:type title: String
:type log_url: String
:type text_formatted: String
"""
data = {
'@context': 'https://schema.org/extensions',
'@type': 'MessageCard',
'themeColor': '0072C6',
'title': title,
'text': text_formatted,
'potentialAction':
[
{
'@type': 'OpenUri',
'name': 'Open PR',
'targets':
[
{
'os': 'default',
'uri': log_url,
},
],
},
],
}
return data
def _send_to_channel(self, watchdog_log, message_queue, channel_url):
"""
Send MSTeams message to specified channel.
:param watchdog_log: Watchdog log content
:param message_queue: Queued messages to send
:param channel_url: Channel url
:type watchdog_log: String
:type message_queue: String
:type channel_url: String
"""
for message in message_queue:
title, log_url, text_formatted = self._parse_text(watchdog_log, message)
data = self._json_request_content(title, log_url, text_formatted)
try:
requests.post(url=channel_url, json=data)
except Exception as ex:
raise Exception('!!CRITICAL!! MSTeamsCommunicator: Could not send message '
'due to {}'.format(ex))
def send_message(self, watchdog_log, quiet=False):
"""
Send queued messages as single communication.
:param watchdog_log: Watchdog log content
:param quiet: Flag for disabling sending report through MS Teams
:type watchdog_log: String
:type quiet: Boolean
"""
for channel, message_queue in self._queued_messages.items():
if not quiet and message_queue:
self._send_to_channel(watchdog_log, message_queue, channel)
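The webhook protocol itself is a plain HTTPS POST of the MessageCard JSON; a standalone sketch equivalent to what _send_to_channel does, with a placeholder webhook URL and illustrative card content:

import requests

webhook_url = 'https://example.webhook.office.com/webhookb2/<id>'   # placeholder
card = {
    '@context': 'https://schema.org/extensions',
    '@type': 'MessageCard',
    'themeColor': '0072C6',
    'title': 'OpenVino-ONNX CI WARNING',
    'text': 'watchdog - [Watchdog Log](https://jenkins.example.com/job/ci_watchdog/42/)'
            '\n\nPR #1234 waiting in queue',
}
response = requests.post(url=webhook_url, json=card)
response.raise_for_status()   # fail loudly if the webhook rejected the card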


@@ -1,505 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import datetime
import time
import re
import logging
import requests
from ms_teams_communicator import MSTeamsCommunicator
from jenkins_wrapper import JenkinsWrapper
from jenkins import NotFoundException
from git_wrapper import GitWrapper, GitWrapperError
import os
import json
# Logging
logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
# Watchdog static constant variables
_SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
_BUILD_DURATION_THRESHOLD = datetime.timedelta(minutes=60)
_CI_START_THRESHOLD = datetime.timedelta(minutes=30)
_AWAITING_JENKINS_THRESHOLD = datetime.timedelta(minutes=5)
_WATCHDOG_DIR = os.path.expanduser('~')
_PR_REPORTS_CONFIG_KEY = 'pr_reports'
_CI_BUILD_FAIL_MESSAGE = 'ERROR: py3: commands failed'
_CI_BUILD_SUCCESS_MESSAGE = 'py3: commands succeeded'
_GITHUB_CI_CHECK_NAME = 'OpenVINO-ONNX'
INTERNAL_ERROR_MESSAGE_HEADER = '!!! --- !!! INTERNAL WATCHDOG ERROR !!! --- !!!'
ERROR_MESSAGE_HEADER = '!!! OpenVino-ONNX CI Error !!!'
WARNING_MESSAGE_HEADER = 'OpenVino-ONNX CI WARNING'
INFO_MESSAGE_HEADER = 'OpenVino-ONNX CI INFO'
class Watchdog:
"""Class describing OpenVino-ONNX-CI Watchdog.
Watchdog connects to GitHub and retrieves the list of current pull requests (PRs) in
OpenVino repository. Then it connects to specified Jenkins server to
check CI jobs associated with every PR. Watchdog verifies time durations for Jenkins
initial response, job queue and execution against time threshold constants. Every failure
is logged and reported through MS Teams communicators.
:param jenkins_token: Token used for Jenkins
:param jenkins_server: Jenkins server address
:param jenkins_user: Username used to connect to Jenkins
:param github_credentials: Credentials used to connect to GitHub
:param msteams_url: URL used to connect to MS Teams channel
:param ci_job_name: OpenVino-ONNX CI job name used in Jenkins
:param watchdog_job_name: Watchdog job name used in Jenkins
:type jenkins_token: String
:type jenkins_server: String
:type jenkins_user: String
:type github_credentials: String
:type msteams_url: String
:type ci_job_name: String
:type watchdog_job_name: String
.. note::
Watchdog and OpenVino-ONNX CI job must be placed on the same Jenkins server.
"""
def __init__(self, jenkins_token, jenkins_server, jenkins_user, github_credentials, git_org,
git_project, msteams_url, ci_job_name, watchdog_job_name):
self._config_path = os.path.join(_WATCHDOG_DIR, '.{}_ci_watchdog.json'.format(git_project))
# Jenkins Wrapper object for CI job
self._jenkins = JenkinsWrapper(jenkins_token,
jenkins_user=jenkins_user,
jenkins_server=jenkins_server)
# Load GitHub token and log in, retrieve pull requests
self._git = GitWrapper(github_credentials, repository=git_org, project=git_project)
# Create MS Teams api object
self._msteams_hook = MSTeamsCommunicator(msteams_url)
self._ci_job_name = ci_job_name.lower()
self._watchdog_job_name = watchdog_job_name
# Read config file
self._config = self._read_config_file()
# Time at Watchdog initiation
self._now_time = datetime.datetime.now()
self._current_prs = {}
self._ms_teams_enabled = True
def run(self, quiet=False):
"""Run main watchdog logic.
Retrieve list of pull requests and pass it to the method responsible for checking them.
:param quiet: Flag for disabling sending report through communicator
:type quiet: Boolean
"""
try:
pull_requests = self._git.get_pull_requests()
except GitWrapperError:
message = 'Failed to retrieve Pull Requests!'
log.exception(message)
self._queue_message(message, message_severity='internal')
# Check all pull requests
for pr in pull_requests:
try:
self._check_pr(pr)
except Exception as e:
log.exception(str(e))
self._queue_message(str(e), message_severity='internal', pr=pr)
self._update_config()
self._send_message(quiet=quiet)
def _read_config_file(self):
"""Read Watchdog config file stored on the system.
The file stores every failure already reported along with its timestamp. This
mechanism is used to prevent Watchdog from reporting the same failure
multiple times. In case there's no config under the expected path, an
appropriate data structure is created and returned.
:return: Returns dict of dicts with reported fails with their timestamps
:rtype: dict of dicts
"""
if os.path.isfile(self._config_path):
log.info('Reading config file in: {}'.format(self._config_path))
file = open(self._config_path, 'r')
data = json.load(file)
else:
log.info('No config file found in: {}'.format(self._config_path))
data = {_PR_REPORTS_CONFIG_KEY: {}}
return data
def _check_pr(self, pr):
"""Check pull request (if there's no reason to skip).
Retrieve the list of statuses for the PR's last commit and interpret them. Filters out statuses
unrelated to the OpenVino-ONNX Jenkins CI and passes the relevant ones on for interpretation.
If no commit statuses related to Jenkins are available after the time defined by
**_CI_START_THRESHOLD**, calls the appropriate method to check for builds waiting in queue.
:param pr: GitHub Pull Request
:type pr: github.PullRequest.PullRequest
"""
log.info('===============================================')
log.info('Checking PR#{}'.format(pr.number))
# Get last Jenkins status
last_status = self._get_last_status(pr)
# Append PR checked in current run for Watchdog config
self._current_prs[str(pr.number)] = self._get_pr_timestamps(pr, last_status)
if self._should_ignore(pr) or self._updated_since_last_run(pr):
log.info('Ignoring PR#{}'.format(pr.number))
return
# Calculate time passed since PR update (any commit, merge or comment)
pr_time_delta = self._now_time - pr.updated_at
if last_status:
# Interpret found CI statuses
log.info('Last status: {} at {}'.format(last_status.description, last_status.updated_at))
self._interpret_status(last_status, pr)
elif pr_time_delta > _CI_START_THRESHOLD:
# If there's no status after assumed time - check if build is waiting in queue
log.info('CI for PR {}: NO JENKINS STATUS YET'.format(pr.number))
self._check_missing_status(pr)
@staticmethod
def _get_pr_timestamps(pr, last_status):
"""Get dict containing PR timestamp and last status timestamp.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Dictionary with PR and last status update timestamps
:rtype: dict
"""
pr_timestamp = time.mktime(pr.updated_at.timetuple())
if last_status:
status_timestamp = time.mktime(last_status.updated_at.timetuple())
else:
status_timestamp = None
pr_dict = {'pr_timestamp': pr_timestamp,
'status_timestamp': status_timestamp}
return pr_dict
@staticmethod
def _get_last_status(pr):
"""Get last commit status posted from Jenkins.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Either last PR status posted from Jenkins or None
:rtype: github.CommitStatus.CommitStatus
"""
# Find last commit in PR
last_commit = pr.get_commits().reversed[0]
# Get statuses and filter them to contain only those related to Jenkins CI
# and check if CI in Jenkins started
statuses = last_commit.get_statuses()
jenk_statuses = [stat for stat in statuses if
_GITHUB_CI_CHECK_NAME in stat.context]
try:
last_status = jenk_statuses[0]
except IndexError:
last_status = None
return last_status
@staticmethod
def _should_ignore(pr):
"""Determine if PR should be ignored.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Returns True if PR should be ignored
:rtype: Bool
"""
# Ignore PR if it has WIP label or WIP in title
if 'WIP' in pr.title:
log.info('PR#{} should be ignored. WIP tag in title.'.format(pr.number))
return True
label_names = [label.name for label in pr.labels]
if 'WIP' in label_names:
log.info('PR#{} should be ignored. WIP label present.'.format(pr.number))
return True
# Ignore PR if base ref is not master
if 'master' not in pr.base.ref:
log.info('PR#{} should be ignored. Base ref is not master'.format(pr.number))
return True
# Ignore PR if mergeable state is 'behind', 'dirty' or 'draft'.
# Practically this ignores PRs in case of merge conflicts
ignored_mergeable_states = ['behind', 'dirty', 'draft']
if pr.mergeable_state in ignored_mergeable_states:
log.info('PR#{} should be ignored. Mergeable state is {}. '.format(pr.number, pr.mergeable_state))
return True
# If no criteria for ignoring PR are met - return false
return False
def _updated_since_last_run(self, pr):
# Ignore if PR was already checked and there was no update in meantime
pr_number = str(pr.number)
current_pr_timestamps = self._current_prs.get(pr_number)
last_pr_timestamps = self._config[_PR_REPORTS_CONFIG_KEY].get(pr_number)
if current_pr_timestamps == last_pr_timestamps:
log.info('PR#{} - No update since last check'.format(pr.number))
return True
else:
return False
def _check_missing_status(self, pr):
"""Verify if missing status is expected.
This method checks if the CI build for the last commit was scheduled and is still
waiting in the queue for an executor.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
"""
pr_time_delta = self._now_time - pr.updated_at
try:
build_number = self._build_scheduled(pr)
if self._build_in_queue(pr, build_number):
message = ('PR# {}: build waiting in queue after {} minutes.'
.format(pr.number, pr_time_delta.seconds / 60))
severity = 'warning'
else:
message = ('PR# {}: missing status on GitHub after {} minutes.'
.format(pr.number, pr_time_delta.seconds / 60))
severity = 'error'
self._queue_message(message, message_severity=severity, pr=pr)
except TypeError:
log.info('Committer outside of OpenVino organization')
def _build_scheduled(self, pr):
"""Check if Jenkins build corresponding to PR was scheduled.
This method takes the last Jenkins build for the given PR and compares the hash from the Jenkins
console output with the sha from the PR object to determine if a CI build for the right commit was scheduled.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Returns build number or -1 if no build found
:rtype: int
"""
pr_number = str(pr.number)
project_name_full = self._ci_job_name + '/PR-' + pr_number
try:
# Retrieve console output from last Jenkins build for job corresponding to this PR
last_build_number = self._jenkins.get_job_info(project_name_full)['lastBuild']['number']
console_output = self._jenkins.get_build_console_output(project_name_full, last_build_number)
# Check if CI build was scheduled - commit hash on GH must match hash in last Jenkins build console output
# Retrieve hash from Jenkins output
match_string = '(?:Obtained .ci/[a-zA-Z/]+Jenkinsfile from ([a-z0-9]{40}))'
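# e.g. matches a console line like (path and sha are illustrative):
#   Obtained .ci/openvino/Jenkinsfile from 0123456789abcdef0123456789abcdef01234567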
retrieved_sha = re.search(match_string, console_output).group(1)
if retrieved_sha == pr.get_commits().reversed[0].sha:
return last_build_number
else:
return -1
except (NotFoundException, AttributeError, requests.exceptions.HTTPError):
message = ('PR #{}: Jenkins build corresponding to commit {} not found!'
.format(pr_number, pr.get_commits().reversed[0].sha))
self._queue_message(message, message_severity='error', pr=pr)
return -1
def _build_in_queue(self, pr, build_number):
"""Check if Jenkins build waits in queue.
This method verifies if CI build is waiting in queue based on console output.
:param pr: Single PR being currently checked
:param build_number: Jenkins build number to retrieve console output from
:type pr: github.PullRequest.PullRequest
:type build_number: int
:return: Returns True if CI build is waiting in queue
:rtype: Bool
"""
pr_number = str(pr.number)
project_name_full = self._ci_job_name + '/PR-' + pr_number
# Retrieve console output
try:
console_output = self._jenkins.get_build_console_output(project_name_full, build_number)
except NotFoundException:
return False
# Check if build is waiting in queue (and not already running on an executor)
if 'Waiting for next available executor on' in console_output \
and 'Running on' not in console_output:
log.info('CI for PR %s: WAITING IN QUEUE', pr_number)
return True
else:
return False
def _interpret_status(self, status, pr):
"""
Verify GitHub status passed to the method.
This method verifies last commit status for given PR, calling appropriate methods
to further validate the status.
:param status: GitHub commit status
:param pr: Single PR being currently checked
:type status: github.CommitStatus.CommitStatus
:type pr: github.PullRequest.PullRequest
"""
try:
# Retrieve build number for Jenkins build related to this PR
build_number = self._retrieve_build_number(status.target_url)
# CI build finished - verify if expected output is present
finished_statuses = ['Build finished', 'This commit cannot be built', 'This commit looks good']
pending_statuses = ['This commit is being built', 'Testing in progress',
'This commit is scheduled to be built']
if any(phrase in status.description for phrase in finished_statuses):
self._check_finished(pr, build_number)
# CI build in progress - verify timeouts for build queue and duration
elif any(phrase in status.description for phrase in pending_statuses):
self._check_in_progress(pr, build_number)
else:
message = 'ONNX CI job for PR# {}: unrecognized status: {}'.format(pr.number, status.description)
self._queue_message(message, message_severity='error', pr=pr)
except Exception:
# Log Watchdog internal error in case any status can't be properly verified
message = 'Failed to verify status "{}" for PR# {}'.format(status.description, pr.number)
log.exception(message)
self._queue_message(message, message_severity='internal', pr=pr)
def _retrieve_build_number(self, url):
"""Retrieve Jenkins CI job build number from URL address coming from GitHub commit status.
:param url: URL address from GitHub commit status
:type url: String
:return: Returns build number
:rtype: int
"""
# Retrieve the build number from url string
match_obj = re.search('(?:/PR-[0-9]+/)([0-9]+)', url)
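# e.g. a status URL ending in '/PR-1234/15/display/redirect' (illustrative)
# yields build number 15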
try:
number = int(match_obj.group(1))
return number
except Exception:
log.exception('Failed to retrieve build number from url link: %s', url)
raise
def _queue_message(self, message, message_severity='info', pr=None):
"""Add a message to message queue in communicator object.
The queued message is constructed based on message string passed as
a method argument and message header. Message header is mapped to message severity
also passed as an argument.
:param message: Message content
:param message_severity: Message severity level
:type message: String
:type message_severity: String
"""
log.info(message)
internal = False
if 'internal' in message_severity:
message_header = INTERNAL_ERROR_MESSAGE_HEADER
internal = True
elif 'error' in message_severity:
message_header = ERROR_MESSAGE_HEADER
elif 'warning' in message_severity:
message_header = WARNING_MESSAGE_HEADER
else:
message_header = INFO_MESSAGE_HEADER
# If the message is related to a PR, attach its URL
if pr:
message = message + '\n' + pr.html_url
send = message_header + '\n' + message
if self._ms_teams_enabled:
self._msteams_hook.queue_message(send)
def _check_finished(self, pr, build_number):
"""Verify if finished build output contains expected string for either fail or success.
:param pr: Single PR being currently checked
:param build_number: Jenkins CI job build number
:type pr: github.PullRequest.PullRequest
:type build_number: int
"""
pr_number = str(pr.number)
log.info('CI for PR %s: FINISHED', pr_number)
# Check if FINISH was valid FAIL / SUCCESS
project_name_full = self._ci_job_name + '/PR-' + pr_number
build_output = self._jenkins.get_build_console_output(project_name_full, build_number)
if _CI_BUILD_FAIL_MESSAGE not in build_output \
and _CI_BUILD_SUCCESS_MESSAGE not in build_output:
message = ('ONNX CI job for PR #{}: finished but no tests success or fail '
'confirmation is present in console output!'.format(pr_number))
self._queue_message(message, message_severity='error', pr=pr)
def _send_message(self, quiet=False):
"""Send messages queued in MS Teams objects to designated channel.
Queued messages are being sent as a single communication.
:param quiet: Flag for disabling sending report through communicator
:type quiet: Boolean
"""
if any(messages for messages in self._msteams_hook.messages):
try:
watchdog_build = self._jenkins.get_job_info(self._watchdog_job_name)['lastBuild']
watchdog_build_number = watchdog_build['number']
watchdog_build_link = watchdog_build['url']
except Exception:
watchdog_build_number = 'UNKNOWN'
watchdog_build_link = self._jenkins.jenkins_server
send = self._watchdog_job_name + '- build ' + str(
watchdog_build_number) + ' - ' + watchdog_build_link
if self._ms_teams_enabled:
self._msteams_hook.send_message(send, quiet=quiet)
else:
log.info('Nothing to report.')
def _check_in_progress(self, pr, build_number):
"""Check if CI build succesfully started.
Checks if build started within designated time threshold, and job is
currently running - it didn't cross the time threshold.
:param pr: Single PR being currently checked
:param build_number: Jenkins CI job build number
:type pr: github.PullRequest.PullRequest
:type build_number: int
"""
pr_number = str(pr.number)
log.info('CI for PR %s: TESTING IN PROGRESS', pr_number)
project_name_full = self._ci_job_name + '/PR-' + pr_number
build_info = self._jenkins.get_build_info(project_name_full, build_number)
build_datetime = datetime.datetime.fromtimestamp(build_info['timestamp'] / 1000.0)
build_delta = self._now_time - build_datetime
log.info('Build %s: IN PROGRESS, started: %s minutes ago', str(build_number),
str(build_delta))
# If the build is still waiting in queue
if build_delta > _CI_START_THRESHOLD and self._build_in_queue(pr, build_number):
message = ('ONNX CI job build #{}, for PR #{} waiting in queue after {} '
'minutes'.format(build_number, pr_number, str(build_delta.seconds / 60)))
self._queue_message(message, message_severity='warning', pr=pr)
elif build_delta > _BUILD_DURATION_THRESHOLD:
# CI job took too long, possibly froze - communicate failure
message = ('ONNX CI job build #{}, for PR #{} started, '
'but did not finish in designated time of {} '
'minutes!'.format(build_number, pr_number,
str(_BUILD_DURATION_THRESHOLD.seconds / 60)))
self._queue_message(message, message_severity='error', pr=pr)
def _update_config(self):
"""Update Watchdog config file with PRs checked in current Watchdog run, remove old entries.
:param current_prs: List of PR numbers checked during current Watchdog run
:type current_prs: list of ints
"""
# Cleanup config of old reports
log.info('Writing to config file at: {}'.format(self._config_path))
new_config = {_PR_REPORTS_CONFIG_KEY: self._current_prs}
file = open(self._config_path, 'w+')
json.dump(new_config, file)
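Putting the pieces together, the watchdog is driven as in the following sketch; all endpoints and secrets are placeholders (main.py above does the same, reading them from files):

from watchdog import Watchdog

wd = Watchdog(jenkins_token='<token>',                      # placeholders throughout
              jenkins_server='https://jenkins.example.com/',
              jenkins_user='lab_user',
              github_credentials=['ci-bot', '<gh-token>'],
              git_org='openvinotoolkit',
              git_project='openvino',
              msteams_url='https://example.webhook.office.com/<id>',
              ci_job_name='onnx/OpenVino_CI',
              watchdog_job_name='onnx/ci_watchdog')
wd.run(quiet=True)   # quiet=True: log findings without posting to MS Teams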


@@ -7,7 +7,7 @@ updates:
directory: "/src/bindings/python"
schedule:
interval: weekly
day: monday
day: sunday
time: "13:00"
open-pull-requests-limit: 0
reviewers:
@@ -15,4 +15,3 @@ updates:
- akuporos
labels:
- "category: dependencies"


@@ -5,11 +5,10 @@
"IGNORE_LOGINS": [
"openvino-ci",
"openvino-pushbot",
"lab-nerval",
"lab-nerval-onnx-ci",
"onnx-watchdog-agent",
"workbench-ci-bot",
"openvino-pot-ci"
"openvino-pot-ci",
"sysicvvpux",
"ote-ci-bot"
],
"MAX_MEMBERS_TO_REMOVE": 15,
"EMAILS_FILE_PATH": "dev_emails-test.txt",
@@ -28,7 +27,7 @@
"openvino-ie-gna-maintainers": "category: GNA",
"openvino-ie-gpu-maintainers": "category: GPU",
"openvino-ie-lpt-maintainers": "category: LP transformations",
"openvino-ie-multi-maintainers": "category: MULTI",
"openvino-ie-auto-multi-maintainers": "category: MULTI",
"openvino-ie-python-api-maintainers": "category: python api",
"openvino-ie-template-maintainers": "category: TEMPLATE",
"openvino-ie-tests-maintainers": "category: IE Tests",


@@ -157,7 +157,7 @@ class GithubOrgApi:
self.github_users_by_email[email] = org_member
if not is_valid_name(org_member.name):
self.members_to_fix_name.add(org_member)
elif not is_user_ignored(org_member):
else:
self.members_to_remove.add(org_member)
print("\nOrg members - no Intel emails:")


@@ -4,7 +4,7 @@ on: [push, pull_request]
jobs:
Build_Doc:
if: github.repository == 'openvinotoolkit/openvino'
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Clone OpenVINO
uses: actions/checkout@v2
@@ -17,11 +17,11 @@ jobs:
set -e
# install doc dependencies
sudo apt update
sudo apt --assume-yes install libusb-1.0-0-dev graphviz texlive
sudo apt --assume-yes install libusb-1.0-0-dev graphviz texlive liblua5.2-0
cd docs
python -m pip install -r requirements.txt --user
python3 -m pip install -r requirements.txt --user
cd openvino_sphinx_theme
python setup.py install --user
python3 setup.py install --user
cd ../..
# install doxyrest
wget https://github.com/vovkos/doxyrest/releases/download/doxyrest-2.1.3/doxyrest-2.1.3-linux-amd64.tar.xz
@@ -43,7 +43,7 @@ jobs:
run: |
mkdir build
cd build
cmake -DENABLE_DOCS=ON -DENABLE_PYTHON=ON -DNGRAPH_PYTHON_BUILD_ENABLE=ON -DCMAKE_BUILD_TYPE=Release ..
cmake -DENABLE_DOCS=ON -DENABLE_PYTHON=ON -DCMAKE_BUILD_TYPE=Release ..
- name: Build doc
run: |


@@ -3,7 +3,7 @@ on: [pull_request]
jobs:
Checks:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Clone OpenVINO
uses: actions/checkout@v2


@@ -48,7 +48,7 @@ jobs:
path: build/code_style_diff.diff
ShellCheck:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
@@ -73,7 +73,7 @@ jobs:
working-directory: build
NamingConventionCheck:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
@@ -82,8 +82,8 @@ jobs:
- name: Install Clang dependency
run: |
sudo apt update
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11
sudo apt --assume-yes install libclang-12-dev
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13
sudo apt --assume-yes install libclang-14-dev
- name: Install Python-based dependencies
run: python3 -m pip install -r cmake/developer_package/ncc_naming_style/requirements_dev.txt


@@ -3,7 +3,7 @@ on: [push, pull_request]
jobs:
Check_Files_Size:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2


@@ -9,7 +9,7 @@ on:
jobs:
Pylint-UT:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:


@@ -1,4 +1,4 @@
name: IE Python Checks
name: Python API Checks
on:
workflow_dispatch:
@@ -6,13 +6,15 @@ on:
paths:
- 'src/bindings/python/**'
- 'samples/python/**'
- '.github/workflows/py_checks.yml'
pull_request:
paths:
- 'src/bindings/python/**'
- 'samples/python/**'
- '.github/workflows/py_checks.yml'
jobs:
linters:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
steps:
- name: Code checkout
uses: actions/checkout@v2
@@ -23,8 +25,9 @@ jobs:
with:
python-version: '3.6'
- name: Install dependencies
run: python -m pip install -r src/bindings/python/src/compatibility/openvino/requirements_dev.txt
- name: Run Flake on samples
run: python -m pip install -r src/bindings/python/requirements_test.txt
# samples code-style
- name: Run flake8 on samples
run: python -m flake8 ./ --config=setup.cfg
working-directory: samples/python
- name: Create code style diff for samples
@@ -38,21 +41,53 @@ jobs:
with:
name: samples_diff
path: samples_diff.diff
- name: Run Flake on src
# IE Python API Flake code-style
- name: Run flake8 on IE Python API
run: python -m flake8 ./ --config=setup.cfg
working-directory: src/bindings/python/src/compatibility/openvino
- name: Create code style diff for Python src
- name: Create code style diff for IE Python API
if: failure()
run: |
python -m black -l 160 -S ./
git diff > src_diff.diff
git diff > ie_python_diff.diff
working-directory: src/bindings/python/src/compatibility/openvino
- uses: actions/upload-artifact@v2
if: failure()
with:
name: src_diff
path: src_diff.diff
- name: Run Flake on wheel
name: ie_python_diff
path: ie_python_diff.diff
# nGraph Python API Flake code-style
- name: Run flake8 on nGraph Python API
run: python -m flake8 ./src/compatibility/ngraph --config=setup.cfg
working-directory: src/bindings/python
- name: Create code style diff for nGraph Python API
if: failure()
run: |
python -m black -l 160 -S ./
git diff > pyngraph_diff.diff
working-directory: src/bindings/python/src/compatibility/ngraph
- uses: actions/upload-artifact@v2
if: failure()
with:
name: pyngraph_diff
path: pyngraph_diff.diff
# Python API 2.0 Flake code-style
- name: Run flake8 on Python API 2.0
run: python -m flake8 ./src/openvino --config=setup.cfg
working-directory: src/bindings/python
- name: Create code style diff for Python API 2.0
if: failure()
run: |
python -m black -l 160 -S ./
git diff > pyopenvino_diff.diff
working-directory: src/bindings/python/src/openvino
- uses: actions/upload-artifact@v2
if: failure()
with:
name: pyopenvino_diff
path: pyopenvino_diff.diff
# wheel Flake code-style
- name: Run flake8 on wheel
run: python -m flake8 ./ --config=../setup.cfg
working-directory: src/bindings/python/wheel
- name: Create code style diff for wheel
@@ -66,12 +101,26 @@ jobs:
with:
name: wheel_diff
path: wheel_diff.diff
- name: Run MyPy
# Python API 2.0 tests Flake code-style
- name: Run flake8 on python tests
# ignore lack of docs in tests
run: python -m flake8 tests/ --config=setup.cfg
working-directory: src/bindings/python
# IE Python API mypy check
- name: Run mypy on IE Python API
run: python -m mypy ./ --config-file ./setup.cfg
working-directory: src/bindings/python/src/compatibility/openvino
# nGraph Python API mypy check
- name: Run mypy on nGraph Python API
run: python -m mypy ./src/compatibility/ngraph --config-file ./setup.cfg
working-directory: src/bindings/python
# Python API 2.0 mypy check
- name: Run mypy on Python API 2.0
run: python -m mypy ./src/openvino --config-file ./setup.cfg
working-directory: src/bindings/python
- name: Run Bandit
run: python -m bandit -r ./ -f screen
working-directory: src/bindings/python/src/compatibility/openvino

.gitignore

@@ -12,12 +12,10 @@ cmake-build*
# developer tools
*.idea
.vscode
cmake-build-*
.DS_Store
**/tags
compile_commands.json
bin/
build/
.local_vimrc
.gdb_history
.vimspector.json
@@ -37,14 +35,13 @@ docs/IE_PLUGIN_DG/html/
*.pydevproject
*.settings
*/gen/
__pycache__
*.swp
/config.xml
# Python-specific
*.env3
*.?env*
*.pyc
__pycache__
# Tests-specific
*.coverage
*htmlcov

.gitmodules

@@ -1,5 +1,5 @@
[submodule "src/plugins/intel_cpu/thirdparty/mkl-dnn"]
path = src/plugins/intel_cpu/thirdparty/mkl-dnn
[submodule "src/plugins/intel_cpu/thirdparty/onednn"]
path = src/plugins/intel_cpu/thirdparty/onednn
url = https://github.com/openvinotoolkit/oneDNN.git
ignore = dirty
[submodule "thirdparty/xbyak"]


@@ -17,6 +17,10 @@ if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type" FORCE)
endif()
if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
add_compile_options(-fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv)
endif()
find_package(IEDevScripts REQUIRED
PATHS "${OpenVINO_SOURCE_DIR}/cmake/developer_package"
NO_CMAKE_FIND_ROOT_PATH
@@ -51,6 +55,7 @@ file(REMOVE "${CMAKE_BINARY_DIR}/InferenceEngineTargets.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
foreach(component IN LISTS openvino_export_components)
file(REMOVE "${CMAKE_BINARY_DIR}/${component}_dev_targets.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/ov_${component}_dev_targets.cmake")
unset(${component} CACHE)
endforeach()
unset(openvino_export_components CACHE)


@@ -30,7 +30,7 @@ Jenkinsfile @openvinotoolkit/openvino-admins
# IE Core:
/inference-engine/ @openvinotoolkit/openvino-ie-maintainers
/src/bindings/python/ @openvinotoolkit/openvino-ie-python-api-maintainers
/src/common/transformations/ @GlebKazantaev @ilyachur
/src/common/transformations/ @openvinotoolkit/openvino-ie-transformations-maintainers
/src/common/legacy/ @openvinotoolkit/openvino-ngraph-maintainers
/src/common/ @openvinotoolkit/openvino-ie-maintainers
/src/core/ @openvinotoolkit/openvino-ngraph-maintainers
@@ -39,8 +39,10 @@ Jenkinsfile @openvinotoolkit/openvino-admins
# IE CPU:
/src/plugins/intel_cpu/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
/src/common/low_precision_transformations/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
/src/plugins/intel_cpu/thirdparty/mkl-dnn/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
/src/plugins/intel_cpu/thirdparty/onednn/ @openvinotoolkit/openvino-ie-cpu-maintainers @openvinotoolkit/openvino-ie-cpu-developers
#IE LPT
/src/common/low_precision_transformations/ @openvinotoolkit/openvino-ie-lpt-maintainers
# IE GPU:
/src/inference/include/ie/gpu/ @openvinotoolkit/openvino-ie-gpu-maintainers @openvinotoolkit/openvino-ie-gpu-developers
@@ -77,8 +79,8 @@ Jenkinsfile @openvinotoolkit/openvino-admins
/src/frontends/paddle/ @openvinotoolkit/openvino-ie-paddle-maintainers
# IE Tests:
/src/tests/ @openvinotoolkit/openvino-ie-tests-maintainers
/src/tests_deprecated/ @openvinotoolkit/openvino-ie-tests-maintainers
/src/tests/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ie-test-developers
/src/tests_deprecated/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ie-test-developers
/src/tests/functional/inference_engine/ngraph_reader/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ngraph-maintainers
/src/tests/functional/inference_engine/transformations/ @openvinotoolkit/openvino-ie-tests-maintainers @openvinotoolkit/openvino-ngraph-maintainers
@@ -88,6 +90,6 @@ Jenkinsfile @openvinotoolkit/openvino-admins
*.md @openvinotoolkit/openvino-docs-maintainers
# Control 3d party dependencies
**/*requirements*.* @openvino-configuration-mgmt
**/setup.py @openvino-configuration-mgmt
/scripts/install_dependencies/ @openvino-configuration-mgmt
**/*requirements*.* @openvinotoolkit/openvino-configuration-mgmt
**/setup.py @openvinotoolkit/openvino-configuration-mgmt
/scripts/install_dependencies/ @openvinotoolkit/openvino-configuration-mgmt


@@ -1,68 +1,55 @@
# How to contribute to the OpenVINO repository
We suppose that you are an enthusiastic coder who wants to contribute some code. For that purpose, the OpenVINO project has a repository on GitHub, to simplify everybody's life! All bug fixes, new functionality, new tutorials, etc. should be submitted via GitHub's pull request mechanism.
We welcome community contributions to OpenVINO™. Please read the following guide to learn how to find ideas for contribution, practices for good pull requests, checking your changes with our tests and more.
If you are not familiar with the mechanism - do not worry, it's very simple. Keep reading.
## Before you start contributing you should
- Make sure you agree to contribute your code under [OpenVINO (Apache 2.0)](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE) license.
- If you are submitting a new module, you should go into [openvino_contrib](https://github.com/openvinotoolkit/openvino_contrib) repository by default.
- If you are going to fix a bug, check that it still exists. This can be done by building the latest [releases/2020/3](https://github.com/openvinotoolkit/openvino/tree/releases/2020/3) branch (LTS release) or the latest master branch, and making sure that the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases like 2020.2 for example (more details about [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches))
- Make sure that nobody has beaten you to fixing or reporting the issue by searching the [Github OpenVINO issues](https://github.com/openvinotoolkit/openvino/issues) page and checking that there isn't someone already working on it. In the latter case you might provide support or suggestions in the issue or in the linked pull request.
- If you have a question about the software, then this is **NOT** the right place. You should open up a question at the [OpenVINO forum](https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit). In order to post a decent question from the start, feel free to read the official forum guidelines.
- Make sure you agree to contribute your code under [OpenVINO (Apache 2.0)](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE) license.
- Figure out what you're going to contribute. If you don't know what you are going to work on, navigate to the [Github "Issues" tab](https://github.com/openvinotoolkit/openvino/issues). Make sure that there isn't someone working on it. In the latter case you might provide support or suggestion in the issue or in the linked pull request.
- If you are going to fix a bug, check that it still exists in the latest release. This can be done by building the latest master branch and making sure that the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases, like 2020.2 for example (more details about [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
Before you open up anything on the OpenVINO GitHub page, be sure that you are at the right place with your problem.
## "Fork & Pull Request model" for code contribution
### [](https://github.com/openvinotoolkit/openvino/wiki/Contribute#the-instruction-in-brief)The instruction in brief
### [](https://github.com/openvinotoolkit/openvino/blob/master/CONTRIBUTING.md#the-instruction-in-brief)The instruction in brief
- Register at GitHub. Create your fork of OpenVINO repository [https://github.com/openvinotoolkit/openvino](https://github.com/openvinotoolkit/openvino) (see [https://help.github.com/articles/fork-a-repo](https://help.github.com/articles/fork-a-repo) for details).
- Install Git.
- Set your user name and email address in a Git configuration according to GitHub account (see [https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup) for details).
- Choose a task for yourself. It could be a bugfix or some new code.
- Choose a base branch for your work. More details about branches and policies are here: [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)
- Clone your fork to your computer.
- Create a new branch (with a meaningful name) from the base branch you chose.
- Modify / add the code following our [Coding Style Guide](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLines) and [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation).
- Modify / add the code following our [Coding Style Guide](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLines).
- If you want to add a new sample, please look at this [Guide for contributing to C++/C/Python IE samples](https://github.com/openvinotoolkit/openvino/wiki/SampleContribute)
- If you want to contribute to the documentation and want to add a new guide, follow that instruction [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation)
- Run testsuite locally:
- execute each test binary from the artifacts directory, e.g. `<source dir>/bin/intel64/Release/ieFuncTests`
- If you contribute to the documentation and want to add a new guide:
- Create a new markdown file in an appropriate folder.
- **REQUIRED:** The document title must contain a document label in a form: `{#openvino_docs_<name>}`. For example: `Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ {#openvino_docs_MO_DG_IR_and_opsets}`.
- Add your file to the documentation structure. Open the documentation structure file [`docs/doxygen/ie_docs.xml`](https://github.com/openvinotoolkit/openvino/blob/master/docs/doxygen/ie_docs.xml) and add your file path to the appropriate section.
- When you are done, make sure that your branch is up to date with the latest state of the branch you want to contribute to (e.g. `git fetch upstream && git merge upstream/master`), push your branch to your GitHub fork; then create a pull request from your branch to the base branch (see [https://help.github.com/articles/using-pull-requests](https://help.github.com/articles/using-pull-requests) for details).
## Making a good pull request
Following these guidelines will increase the likelihood of your pull request being accepted:
- Before pushing your PR to the repository, make sure that it builds perfectly fine on your local system.
- Add enough information, like a meaningful title, the reason why you made the commit and a link to the issue page if you opened one for this PR.
- Scope your PR to one issue. Before submitting, make sure the diff contains no unrelated changes. If you want to cover more than one issue, submit your changes for each as separate pull requests.
- If you have added new functionality, you should update/create the relevant documentation, as well as add tests for it to the testsuite.
- Try not to include "oops" commits - ones that just fix an error in the previous commit. If you have those, then before submitting [squash](https://github.com/openvinotoolkit/openvino/wiki/Contribute#https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History#Squashing-Commits) those fixes directly into the commits where they belong.
- Make sure to choose the right base branch and to follow the [Coding Style Guide](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLines) for your code or [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation) you are changing documentation files.
- Make sure to add tests for new functionality, or a test that reproduces the fixed bug, with related test data. Please do not add extra images or videos if some of the existing media files are suitable.
- One PR, one issue.
- Make sure the PR builds cleanly on your local system.
- Choose the right base branch (see [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
- Follow the [Coding Style Guide](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLines) for your code.
- Update documentation using [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation) if needed.
- Cover your changes with tests.
- Add a license at the top of new files: [C++ example](https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/classification_sample_async/main.cpp#L1-L2), [Python example](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/hello_classification.py#L3-L4).
- Add enough information: a meaningful title, the reason why you made the commit, and a link to the issue page if one exists.
- Remove changes unrelated to the PR.
- If it is still WIP and you want to check CI test results early then use _Draft_ PR.
- Submit your PR and become an OpenVINO™ contributor!
## Testing and merging pull requests
- Your pull request will be automatically tested by OpenVINO's precommit (testing statuses are automatically reported as "green" or "red" circles in precommit steps on the PR's page). If any builders have failed, you should fix the issue. To rerun the automatic builds, just push changes to your branch on GitHub. There is no need to close the pull request and open a new one!
- Once all the builders are "green", one of the OpenVINO developers will review your code. The reviewer could ask you to modify your pull request. Please provide a timely response to reviewers (within weeks, not months), otherwise your submission could be postponed or even rejected.
Your pull request will be automatically tested by OpenVINO's precommit (testing statuses are automatically reported as "green" or "red" circles in precommit steps on the PR's page). If any builders have failed, you need to fix the issue. To rerun the automatic builds, just push changes to your branch on GitHub. There is no need to close the pull request and open a new one!
## PR review good practices
- Originator is responsible for driving the review of changes and should ping reviewers periodically.
- Originator should close comments from the Reviewer when they are resolved. The Reviewer may re-open a comment if they do not agree with the resolution.
- Originator should request re-review from the Reviewer when all comments are resolved by pushing the button in the “Reviewers” section.
- If it is still WIP and you want to check CI test results early then use _Draft_ PR.
- Do **NOT** rewrite history (push -f) once you converted draft PR into regular one, add new commits instead. Looking at diffs makes review easier.
- Write meaningful descriptions for commits resulting from review. _"Addressing review comments"_ is **NOT** a good description! A quick look at good descriptions can tell you much about what is going on in the PR without the need to go through all of the resolved comments.
## Merging PR
As soon as the reviewer is fine with the pull request and Precommit likes your code and shows "green" status, the "Approved" review status is put, which signals OpenVINO maintainers that they can merge your pull request.
© Copyright 2018-2022, OpenVINO team
As soon as the reviewer is fine with the pull request and precommit shows "green" status, the "Approved" review status is put, which signals OpenVINO maintainers that they can merge your pull request.

README.md

@@ -1,43 +1,203 @@
# OpenVINO™ Toolkit
<div align="center">
<img src="docs/img/openvino-logo-purple-black.png" width="400px">
[![Stable release](https://img.shields.io/badge/version-2022.1-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.1)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
[![PyPI Status](https://badge.fury.io/py/openvino.svg)](https://badge.fury.io/py/openvino)
[![PyPI Downloads](https://pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
</div>
This toolkit allows developers to deploy pre-trained deep learning models
through high-level OpenVINO™ Runtime C++ and Python APIs integrated with application logic.
## Contents:
This open source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
- [What is OpenVINO?](#what-is-openvino-toolkit)
- [Components](#components)
- [Supported Hardware matrix](#supported-hardware-matrix)
- [License](#license)
- [Documentation](#documentation)
- [Tutorials](#tutorials)
- [Products which use OpenVINO](#products-which-use-openvino)
- [System requirements](#system-requirements)
- [How to build](#how-to-build)
- [How to contribute](#how-to-contribute)
- [Get a support](#get-a-support)
- [See also](#see-also)
## What is OpenVINO toolkit?
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
- Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks
- Use models trained with popular frameworks like TensorFlow, PyTorch and more
- Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud
This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from the [Open Model Zoo], along with 100+ open
source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, Kaldi.
## Repository components
* [OpenVINO™ Runtime]
* [Model Optimizer]
* [Post-Training Optimization Tool]
* [Samples]
### Components
* [OpenVINO™ Runtime] - is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice.
* [core](https://github.com/openvinotoolkit/openvino/tree/master/src/core) - provides the base API for model representation and modification.
* [inference](https://github.com/openvinotoolkit/openvino/tree/master/src/inference) - provides an API to infer models on device.
* [transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/transformations) - contains the set of common transformations which are used in OpenVINO plugins.
* [low precision transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/low_precision_transformations) - contains the set of transformations which are used in low precision models.
* [bindings](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings) - contains all available OpenVINO bindings which are maintained by the OpenVINO team.
* [c](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/c) - provides C API for OpenVINO™ Runtime
* [python](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/python) - Python API for OpenVINO™ Runtime
* [Plugins](https://github.com/openvinotoolkit/openvino/tree/master/src/plugins) - contains OpenVINO plugins which are maintained in open source by the OpenVINO team. For more information, please take a look at the [list of supported devices](#supported-hardware-matrix).
* [Frontends](https://github.com/openvinotoolkit/openvino/tree/master/src/frontends) - contains available OpenVINO frontends which allow reading models from their native framework formats.
* [Model Optimizer] - is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
* [Post-Training Optimization Tool] - is designed to accelerate the inference of deep learning models by applying special methods without model retraining or fine-tuning, for example, post-training 8-bit quantization.
* [Samples] - applications in C, C++ and Python that show basic OpenVINO use cases.
## Supported Hardware matrix
The OpenVINO™ Runtime can infer models on different hardware devices. This section provides the list of supported devices.
<table>
<thead>
<tr>
<th>Device</th>
<th>Plugin</th>
<th>Library</th>
<th>Short Description</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan=2>CPU</td>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
</tr>
<tr>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices</td>
</tr>
<tr>
<td>GPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
<tr>
<td>GNA</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
</tr>
<tr>
<td>VPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_VPU.html#doxid-openvino-docs-i-e-d-g-supported-plugins-v-p-u">Myriad plugin</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_myriad">openvino_intel_myriad_plugin</a></i></b></td>
<td>Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X</td>
</tr>
</tbody>
</table>
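A quick, hedged way to check which of the devices above are actually visible on a given machine (same API assumptions as the sketch earlier):

```python
# Sketch: enumerate devices seen by the Runtime; names and properties
# depend on the plugins and drivers installed on this machine.
from openvino.runtime import Core

core = Core()
print(core.available_devices)                 # e.g. ['CPU', 'GPU', 'MYRIAD']
for device in core.available_devices:
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))
```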
The OpenVINO™ Toolkit also contains several plugins that simplify loading a model on multiple hardware devices (see the sketch after the table):
<table>
<thead>
<tr>
<th>Plugin</th>
<th>Library</th>
<th>Short Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables automatic selection of an Intel device for inference</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
</tbody>
</table>
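A sketch of how these helper plugins are selected in practice: they are addressed through the device string passed to `compile_model`, and the CPU/GPU combinations below are illustrative only.

```python
# Sketch: helper plugins are virtual devices selected by name; which
# combinations make sense depends on the hardware actually present.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                  # hypothetical IR file

auto = core.compile_model(model, "AUTO")              # automatic device choice
multi = core.compile_model(model, "MULTI:CPU,GPU")    # same model on both, in parallel
hetero = core.compile_model(model, "HETERO:GPU,CPU")  # split one graph across devices
batch = core.compile_model(model, "BATCH:GPU")        # on-the-fly automatic batching
```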
## License
OpenVINO™ Toolkit is licensed under [Apache License Version 2.0](LICENSE).
By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
## Resources
* Docs: https://docs.openvino.ai/
* Wiki: https://github.com/openvinotoolkit/openvino/wiki
* Issue tracking: https://github.com/openvinotoolkit/openvino/issues
* Storage: https://storage.openvinotoolkit.org/
* Additional OpenVINO™ toolkit modules: https://github.com/openvinotoolkit/openvino_contrib
* [Intel® Distribution of OpenVINO™ toolkit Product Page](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
* [Intel® Distribution of OpenVINO™ toolkit Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
## Documentation
### User documentation
The latest documentation for OpenVINO™ Toolkit is available [here](https://docs.openvino.ai/). It contains detailed information about all OpenVINO components and provides everything you may need when creating an application based on a binary OpenVINO distribution or on your own OpenVINO build without source code modification.
### Developer documentation
[Developer documentation](#todo-add) describes the architectural decisions applied inside OpenVINO components and provides the information needed to contribute to OpenVINO.
## Tutorials
The list of OpenVINO tutorials:
- [Jupyter notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
## Products which use OpenVINO
- [OpenCV](https://opencv.org/)
- [ONNX Runtime](https://onnxruntime.ai/)
- [OpenVINO™ Integration with TensorFlow](https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html)
- [TNN](https://github.com/Tencent/TNN/tree/master)
## System requirements
System requirements depend on the platform and are described on dedicated pages:
- [Linux](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_raspbian.html)
## How to build
See the [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) for more information about the OpenVINO build process.
## How to contribute
See [CONTRIBUTING](./CONTRIBUTING.md) for details. Thank you!
## Support
Please report questions, issues, and suggestions using:
* The [`openvino`](https://stackoverflow.com/questions/tagged/openvino) tag on StackOverflow\*
* [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
* [Forum](https://software.intel.com/en-us/forums/computer-vision)
## See also
* [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki)
* [OpenVINO Storage](https://storage.openvinotoolkit.org/)
* Additional OpenVINO™ toolkit modules:
* [openvino_contrib](https://github.com/openvinotoolkit/openvino_contrib)
* [Intel® Distribution of OpenVINO™ toolkit Product Page](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
* [Intel® Distribution of OpenVINO™ toolkit Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
* [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - a convenient environment to train deep learning models and convert them using OpenVINO for optimized inference.
* [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - a web-based graphical interface for OpenVINO designed to make production of pretrained deep learning models significantly easier.
* [Computer Vision Annotation Tool (CVAT)](https://github.com/openvinotoolkit/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
* [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
---
\* Other names and brands may be claimed as the property of others.
@@ -46,5 +206,3 @@ Please report questions, issues and suggestions using:
[Model Optimizer]:https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples
[tag on StackOverflow]:https://stackoverflow.com/search?q=%23openvino

View File

@@ -7,6 +7,7 @@ set(CMAKE_SYSTEM_PROCESSOR armv7l)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
set(PKG_CONFIG_EXECUTABLE arm-linux-gnueabihf-pkg-config CACHE PATH "Path to ARM pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)

View File

@@ -7,6 +7,7 @@ set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
set(PKG_CONFIG_EXECUTABLE aarch64-linux-gnu-pkg-config CACHE PATH "Path to ARM64 pkg-config")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)

View File

@@ -84,6 +84,11 @@ ie_coverage_extract(INPUT "openvino" OUTPUT "core"
ie_coverage_genhtml(INFO_FILE "core"
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
ie_coverage_extract(INPUT "openvino" OUTPUT "openvino_all"
PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/*" "${OV_COVERAGE_BASE_DIRECTORY}/docs/template_plugin/*")
ie_coverage_genhtml(INFO_FILE "openvino_all"
PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
if(ENABLE_OV_ONNX_FRONTEND)
ie_coverage_extract(INPUT "openvino" OUTPUT "onnx"
PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/frontends/onnx/*"

View File

@@ -317,8 +317,8 @@ if(ENABLE_INTEL_GNA)
GNA_LIB_DIR
libGNA_INCLUDE_DIRS
libGNA_LIBRARIES_BASE_PATH)
set(GNA_VERSION "03.00.00.1455.0")
set(GNA_HASH "99891696269d8fa10116c96e6b7bda4362736881f0df8df8b56c751ee18e5820")
set(GNA_VERSION "03.00.00.1455.2")
set(GNA_HASH "e52785d3f730fefb4e794bb7ab40c8676537ef2f7c69c5b4bb89a5d3cc0bbe60")
set(FILES_TO_EXTRACT_LIST gna_${GNA_VERSION}/include)
if(WIN32)

View File

@@ -14,8 +14,8 @@ set(CMAKE_MODULE_PATH "${IEDevScripts_DIR}")
function(set_ci_build_number)
set(repo_root "${CMAKE_SOURCE_DIR}")
include(version)
foreach(var CI_BUILD_NUMBER IE_VERSION IE_VERSION_BUILD
IE_VERSION_MAJOR IE_VERSION_MINOR IE_VERSION_PATCH)
foreach(var CI_BUILD_NUMBER OpenVINO_VERSION OpenVINO_VERSION_BUILD
OpenVINO_VERSION_MAJOR OpenVINO_VERSION_MINOR OpenVINO_VERSION_PATCH)
if(NOT DEFINED ${var})
message(FATAL_ERROR "${var} version component is not defined")
endif()
@@ -186,6 +186,8 @@ endif()
# Use solution folders
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
# cmake_dependent_option() supports full Condition Syntax
set(CMAKE_POLICY_DEFAULT_CMP0127 NEW)
# Enable CMAKE_<LANG>_COMPILER_ID AppleClang
set(CMAKE_POLICY_DEFAULT_CMP0025 NEW)

View File

@@ -76,8 +76,8 @@ function(addIeTarget)
# remove unnecessary directories
foreach(excludedDir ${ARG_EXCLUDED_SOURCE_PATHS})
list(FILTER includes EXCLUDE REGEX "${excludedDir}*")
list(FILTER sources EXCLUDE REGEX "${excludedDir}*")
list(FILTER includes EXCLUDE REGEX "${excludedDir}.*")
list(FILTER sources EXCLUDE REGEX "${excludedDir}.*")
endforeach()
source_group("include" FILES ${includes})

View File

@@ -28,7 +28,6 @@ if(ENABLE_CLANG_FORMAT AND NOT TARGET clang_format_check_all)
add_custom_target(clang_format_fix_all)
set_target_properties(clang_format_check_all clang_format_fix_all
PROPERTIES FOLDER clang_format)
set(CLANG_FORMAT_ALL_OUTPUT_FILES "" CACHE INTERNAL "All clang-format output files")
endif()
function(add_clang_format_target TARGET_NAME)
@@ -88,14 +87,10 @@ function(add_clang_format_target TARGET_NAME)
"[clang-format] ${source_file}"
VERBATIM)
list(APPEND all_input_sources "${source_file}")
list(APPEND all_output_files "${output_file}")
endforeach()
set(CLANG_FORMAT_ALL_OUTPUT_FILES
${CLANG_FORMAT_ALL_OUTPUT_FILES} ${all_output_files}
CACHE INTERNAL
"All clang-format output files")
add_custom_target(${TARGET_NAME}
DEPENDS ${all_output_files}
COMMENT "[clang-format] ${TARGET_NAME}")
@@ -104,11 +99,11 @@ function(add_clang_format_target TARGET_NAME)
COMMAND
"${CMAKE_COMMAND}"
-D "CLANG_FORMAT=${CLANG_FORMAT}"
-D "INPUT_FILES=${CLANG_FORMAT_FOR_SOURCES}"
-D "INPUT_FILES=${all_input_sources}"
-D "EXCLUDE_PATTERNS=${CLANG_FORMAT_EXCLUDE_PATTERNS}"
-P "${IEDevScripts_DIR}/clang_format/clang_format_fix.cmake"
DEPENDS
"${CLANG_FORMAT_FOR_SOURCES}"
"${all_input_sources}"
"${IEDevScripts_DIR}/clang_format/clang_format_fix.cmake"
COMMENT
"[clang-format] ${TARGET_NAME}_fix"

View File

@@ -82,10 +82,11 @@ unset(protobuf_installed CACHE)
#
# ov_add_frontend(NAME <IR|ONNX|...>
# FILEDESCRIPTION <description>
# [LINKABLE_FRONTEND]
# [SKIP_INSTALL]
# [PROTOBUF_LITE]
# FILEDESCRIPTION <description> # used on Windows to describe DLL file
# [LINKABLE_FRONTEND] # whether we can use FE API directly or via FEM only
# [SKIP_INSTALL] # private frontend, not for end users
# [PROTOBUF_LITE] # requires only libprotobuf-lite
# [SKIP_NCC_STYLE] # use custom NCC rules
# [LINK_LIBRARIES <lib1 lib2 ...>])
#
macro(ov_add_frontend)
@@ -106,6 +107,17 @@ macro(ov_add_frontend)
set(FRONTEND_NAMES "${FRONTEND_NAMES}" CACHE INTERNAL "" FORCE)
file(GLOB_RECURSE LIBRARY_SRC ${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp)
if (WIN32)
# Remove linux specific files
file(GLOB_RECURSE LIN_FILES ${CMAKE_CURRENT_SOURCE_DIR}/src/os/lin/*.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/os/lin/*.hpp)
list(REMOVE_ITEM LIBRARY_SRC "${LIN_FILES}")
else()
# Remove windows specific files
file(GLOB_RECURSE WIN_FILES ${CMAKE_CURRENT_SOURCE_DIR}/src/os/win/*.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/os/win/*.hpp)
list(REMOVE_ITEM LIBRARY_SRC "${WIN_FILES}")
endif()
file(GLOB_RECURSE LIBRARY_HEADERS ${CMAKE_CURRENT_SOURCE_DIR}/src/*.hpp)
file(GLOB_RECURSE LIBRARY_PUBLIC_HEADERS ${CMAKE_CURRENT_SOURCE_DIR}/include/*.hpp)
@@ -231,7 +243,7 @@ macro(ov_add_frontend)
endif()
if(OV_FRONTEND_LINKABLE_FRONTEND)
# install -dev part
# install library development files
install(DIRECTORY ${${TARGET_NAME}_INCLUDE_DIR}/openvino
DESTINATION ${FRONTEND_INSTALL_INCLUDE}/
COMPONENT core_dev

View File

@@ -9,26 +9,46 @@ endif()
set(ncc_style_dir "${IEDevScripts_DIR}/ncc_naming_style")
set(ncc_style_bin_dir "${CMAKE_CURRENT_BINARY_DIR}/ncc_naming_style")
# try to find_package(Clang QUIET)
# ClangConfig.cmake contains bug that if libclang-XX-dev is not
# installed, then find_package fails with errors even in QUIET mode
configure_file("${ncc_style_dir}/try_find_clang.cmake"
"${ncc_style_bin_dir}/source/CMakeLists.txt" COPYONLY)
execute_process(
COMMAND
"${CMAKE_COMMAND}" -S "${ncc_style_bin_dir}/source"
-B "${ncc_style_bin_dir}/build"
RESULT_VARIABLE clang_find_result
OUTPUT_VARIABLE output_var
ERROR_VARIABLE error_var)
# find python3
if(NOT clang_find_result EQUAL "0")
message(WARNING "Please, install clang-[N] libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "find_package(Clang) output: ${output_var}")
message(WARNING "find_package(Clang) error: ${error_var}")
find_package(PythonInterp 3 QUIET)
if(NOT PYTHONINTERP_FOUND)
message(WARNING "Python3 interpreter was not found (required for ncc naming style check)")
set(ENABLE_NCC_STYLE OFF)
endif()
if(PYTHON_VERSION_MINOR EQUAL 6)
set(clang_version 10)
elseif(PYTHON_VERSION_MINOR EQUAL 8)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 9)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 10)
set(clang_version 14)
endif()
if(ENABLE_NCC_STYLE)
# try to find_package(Clang QUIET)
# ClangConfig.cmake contains bug that if libclang-XX-dev is not
# installed, then find_package fails with errors even in QUIET mode
configure_file("${ncc_style_dir}/try_find_clang.cmake"
"${ncc_style_bin_dir}/source/CMakeLists.txt" COPYONLY)
execute_process(
COMMAND "${CMAKE_COMMAND}" -S "${ncc_style_bin_dir}/source"
-B "${ncc_style_bin_dir}/build"
RESULT_VARIABLE clang_find_result
OUTPUT_VARIABLE output_var
ERROR_VARIABLE error_var)
if(NOT clang_find_result EQUAL "0")
message(WARNING "Please, install `apt-get install clang-${clang_version} libclang-${clang_version}-dev` package (required for ncc naming style check)")
message(TRACE "find_package(Clang) output: ${output_var}")
message(TRACE "find_package(Clang) error: ${error_var}")
set(ENABLE_NCC_STYLE OFF)
endif()
endif()
# Since we were able to find_package(Clang) in a separate process
# let's try to find in current process
if(ENABLE_NCC_STYLE)
@@ -37,19 +57,11 @@ if(ENABLE_NCC_STYLE)
get_target_property(libclang_location libclang LOCATION)
message(STATUS "Found libclang: ${libclang_location}")
else()
message(WARNING "libclang is not found (required for ncc naming style check)")
message(WARNING "libclang-${clang_version} is not found (required for ncc naming style check)")
set(ENABLE_NCC_STYLE OFF)
endif()
endif()
# find python3
find_package(PythonInterp 3 QUIET)
if(NOT PYTHONINTERP_FOUND)
message(WARNING "Python3 interpreter was not found (required for ncc naming style check)")
set(ENABLE_NCC_STYLE OFF)
endif()
# check python requirements_dev.txt
set(ncc_script_py "${ncc_style_dir}/ncc/ncc.py")
@@ -106,7 +118,6 @@ function(ov_ncc_naming_style)
"${NCC_STYLE_SOURCE_DIRECTORY}/*.cpp")
list(APPEND NCC_STYLE_ADDITIONAL_INCLUDE_DIRECTORIES "${NCC_STYLE_SOURCE_DIRECTORY}")
# without it sources with same name from different directories will map to same .ncc_style target
file(RELATIVE_PATH source_dir_rel ${CMAKE_SOURCE_DIR} ${NCC_STYLE_SOURCE_DIRECTORY})

View File

@@ -1,5 +1,5 @@
# custom OpenVINO values
CppMethod: '^(operator\W+|[a-z_\d]+|signaling_NaN|quiet_NaN|OPENVINO_OP)$'
CppMethod: '^(operator\W+|[a-z_\d]+|signaling_NaN|quiet_NaN)$'
ClassName: '^([A-Z][\w]+|b?float16|numeric_limits|ngraph_error|stopwatch|unsupported_op)$'
StructName: '^([A-Z][\w]+|element_type_traits|hash|oi_pair)$'
FunctionName: '^(operator\W+|[a-z_\d]+)|PrintTo$'

View File

@@ -1,2 +1,5 @@
clang==11.0
clang==10.0.1; python_version == '3.6'
clang==12.0.1; python_version == '3.8'
clang==12.0.1; python_version == '3.9'
clang==14.0; python_version == '3.10'
pyyaml

View File

@@ -11,6 +11,11 @@ macro (ie_option variable description value)
list(APPEND IE_OPTIONS ${variable})
endmacro()
# Usage: ov_option(<option_variable> "description" <initial value or boolean expression> [IF <condition>])
macro (ov_option variable description value)
ie_option(${variable} "${description}" ${value})
endmacro()
macro (ie_dependent_option variable description def_value condition fallback_value)
cmake_dependent_option(${variable} "${description}" ${def_value} "${condition}" ${fallback_value})
list(APPEND IE_OPTIONS ${variable})

View File

@@ -69,8 +69,8 @@ macro(ie_cpack)
endif()
foreach(ver IN LISTS MAJOR MINOR PATCH)
if(DEFINED IE_VERSION_${ver})
set(CPACK_PACKAGE_VERSION_${ver} ${IE_VERSION_${ver}})
if(DEFINED OpenVINO_VERSION_${ver})
set(CPACK_PACKAGE_VERSION_${ver} ${OpenVINO_VERSION_${ver}})
endif()
endforeach()

View File

@@ -13,8 +13,8 @@ function(ie_plugin_get_file_name target_name library_name)
set("${library_name}" "${LIB_PREFIX}${target_name}${LIB_SUFFIX}" PARENT_SCOPE)
endfunction()
if(NOT TARGET ie_plugins)
add_custom_target(ie_plugins)
if(NOT TARGET ov_plugins)
add_custom_target(ov_plugins)
endif()
#
@@ -27,11 +27,12 @@ endif()
# [OBJECT_LIBRARIES <object_libs>]
# [VERSION_DEFINES_FOR <source>]
# [SKIP_INSTALL]
# [SKIP_REGISTRATION] Skip creation of <device>.xml
# [ADD_CLANG_FORMAT]
# )
#
function(ie_add_plugin)
set(options SKIP_INSTALL ADD_CLANG_FORMAT AS_EXTENSION)
set(options SKIP_INSTALL ADD_CLANG_FORMAT AS_EXTENSION SKIP_REGISTRATION)
set(oneValueArgs NAME DEVICE_NAME VERSION_DEFINES_FOR PSEUDO_PLUGIN_FOR)
set(multiValueArgs DEFAULT_CONFIG SOURCES OBJECT_LIBRARIES CPPLINT_FILTERS)
cmake_parse_arguments(IE_PLUGIN "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
@@ -101,7 +102,7 @@ function(ie_add_plugin)
add_cpplint_target(${IE_PLUGIN_NAME}_cpplint FOR_TARGETS ${IE_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
endif()
add_dependencies(ie_plugins ${IE_PLUGIN_NAME})
add_dependencies(ov_plugins ${IE_PLUGIN_NAME})
if(TARGET openvino_gapi_preproc)
if(BUILD_SHARED_LIBS)
add_dependencies(${IE_PLUGIN_NAME} openvino_gapi_preproc)
@@ -146,25 +147,27 @@ function(ie_add_plugin)
endif()
endif()
# check that plugin with such name is not registered
# Enable for static build to generate correct plugins.hpp
if(NOT IE_PLUGIN_SKIP_REGISTRATION OR NOT BUILD_SHARED_LIBS)
# check that plugin with such name is not registered
foreach(plugin_entry IN LISTS PLUGIN_FILES)
string(REPLACE ":" ";" plugin_entry "${plugin_entry}")
list(GET plugin_entry -1 library_name)
list(GET plugin_entry 0 plugin_name)
if(plugin_name STREQUAL "${IE_PLUGIN_DEVICE_NAME}" AND
NOT library_name STREQUAL ${IE_PLUGIN_NAME})
message(FATAL_ERROR "${IE_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
endif()
endforeach()
foreach(plugin_entry IN LISTS PLUGIN_FILES)
string(REPLACE ":" ";" plugin_entry "${plugin_entry}")
list(GET plugin_entry -1 library_name)
list(GET plugin_entry 0 plugin_name)
if(plugin_name STREQUAL "${IE_PLUGIN_DEVICE_NAME}" AND
NOT library_name STREQUAL ${IE_PLUGIN_NAME})
message(FATAL_ERROR "${IE_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
endif()
endforeach()
# append plugin to the list to register
# append plugin to the list to register
list(APPEND PLUGIN_FILES "${IE_PLUGIN_DEVICE_NAME}:${IE_PLUGIN_NAME}")
set(PLUGIN_FILES "${PLUGIN_FILES}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_CONFIG "${IE_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${IE_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${IE_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
list(APPEND PLUGIN_FILES "${IE_PLUGIN_DEVICE_NAME}:${IE_PLUGIN_NAME}")
set(PLUGIN_FILES "${PLUGIN_FILES}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_CONFIG "${IE_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${IE_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${IE_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
endif()
endfunction()
function(ov_add_plugin)
@@ -172,13 +175,12 @@ function(ov_add_plugin)
endfunction()
#
# ie_register_plugins_dynamic(MAIN_TARGET <main target name>
# POSSIBLE_PLUGINS <list of plugins which can be build by this repo>)
# ie_register_plugins_dynamic(MAIN_TARGET <main target name>)
#
macro(ie_register_plugins_dynamic)
set(options)
set(oneValueArgs MAIN_TARGET)
set(multiValueArgs POSSIBLE_PLUGINS)
set(multiValueArgs)
cmake_parse_arguments(IE_REGISTER "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT IE_REGISTER_MAIN_TARGET)
@@ -261,6 +263,15 @@ macro(ie_register_plugins)
endif()
endmacro()
#
# ov_register_plugins()
#
macro(ov_register_plugins)
if(BUILD_SHARED_LIBS)
ie_register_plugins_dynamic(${ARGN})
endif()
endmacro()
#
# ie_target_link_plugins(<TARGET_NAME>)
#

View File

@@ -6,6 +6,17 @@ include(CMakeParseArguments)
find_host_program(shellcheck_PROGRAM NAMES shellcheck DOC "Path to shellcheck tool")
if(shellcheck_PROGRAM)
execute_process(COMMAND "${shellcheck_PROGRAM}" --version
RESULT_VARIABLE shellcheck_EXIT_CODE
OUTPUT_VARIABLE shellcheck_VERSION_STRING)
if(shellcheck_EXIT_CODE EQUAL 0)
if(shellcheck_VERSION_STRING MATCHES "version: ([0-9]+)\.([0-9]+).([0-9]+)")
set(shellcheck_VERSION "${CMAKE_MATCH_1}.${CMAKE_MATCH_2}.${CMAKE_MATCH_3}" CACHE STRING "shellcheck version")
endif()
endif()
endif()
function(ie_shellcheck_process)
if(NOT shellcheck_PROGRAM)
message(WARNING "shellcheck tool is not found")
@@ -33,7 +44,7 @@ function(ie_shellcheck_process)
set(output_file "${output_file}.txt")
get_filename_component(script_name "${script}" NAME)
add_custom_command(OUTPUT ${output_file}
add_custom_command(OUTPUT ${output_file}
COMMAND ${CMAKE_COMMAND}
-D IE_SHELLCHECK_PROGRAM=${shellcheck_PROGRAM}
-D IE_SHELL_SCRIPT=${script}

View File

@@ -19,27 +19,35 @@ function (commitHash VAR)
message(FATAL_ERROR "repo_root is not defined")
endif()
execute_process(
COMMAND git rev-parse HEAD
COMMAND git rev-parse --short=11 HEAD
WORKING_DIRECTORY ${repo_root}
OUTPUT_VARIABLE GIT_COMMIT_HASH
OUTPUT_STRIP_TRAILING_WHITESPACE)
set (${VAR} ${GIT_COMMIT_HASH} PARENT_SCOPE)
endfunction()
macro(ie_parse_ci_build_number)
set(IE_VERSION_BUILD 000)
macro(ov_parse_ci_build_number)
set(OpenVINO_VERSION_BUILD 000)
if(CI_BUILD_NUMBER MATCHES "^([0-9]+)\.([0-9]+)\.([0-9]+)\-([0-9]+)\-.*")
set(IE_VERSION_MAJOR ${CMAKE_MATCH_1})
set(IE_VERSION_MINOR ${CMAKE_MATCH_2})
set(IE_VERSION_PATCH ${CMAKE_MATCH_3})
set(IE_VERSION_BUILD ${CMAKE_MATCH_4})
set(OpenVINO_VERSION_MAJOR ${CMAKE_MATCH_1})
set(OpenVINO_VERSION_MINOR ${CMAKE_MATCH_2})
set(OpenVINO_VERSION_PATCH ${CMAKE_MATCH_3})
set(OpenVINO_VERSION_BUILD ${CMAKE_MATCH_4})
set(the_whole_version_is_defined_by_ci ON)
elseif(CI_BUILD_NUMBER MATCHES "^[0-9]+$")
set(OpenVINO_VERSION_BUILD ${CI_BUILD_NUMBER})
# only build number is defined by CI
set(the_whole_version_is_defined_by_ci OFF)
elseif(CI_BUILD_NUMBER)
message(FATAL_ERROR "Failed to parse CI_BUILD_NUMBER which is ${CI_BUILD_NUMBER}")
endif()
if(NOT DEFINED repo_root)
message(FATAL_ERROR "repo_root is not defined")
endif()
macro(ie_get_hpp_version)
macro(ov_get_hpp_version)
if(NOT DEFINED OpenVINO_SOURCE_DIR)
return()
endif()
@@ -59,11 +67,12 @@ macro(ie_parse_ci_build_number)
foreach(suffix MAJOR MINOR PATCH)
set(ie_version_name "IE_VERSION_${suffix}")
set(ov_version_name "OPENVINO_VERSION_${suffix}")
set(ov_version_name "OpenVINO_VERSION_${suffix}")
set(ov_version_name_hpp "OPENVINO_VERSION_${suffix}")
string(REGEX REPLACE ".+${ie_version_name}[ ]+([0-9]+).*" "\\1"
${ie_version_name}_HPP "${IE_VERSION_PARTS}")
string(REGEX REPLACE ".+${ov_version_name}[ ]+([0-9]+).*" "\\1"
string(REGEX REPLACE ".+${ov_version_name_hpp}[ ]+([0-9]+).*" "\\1"
${ov_version_name}_HPP "${OV_VERSION_PARTS}")
if(NOT ${ie_version_name}_HPP EQUAL ${ov_version_name}_HPP)
@@ -72,42 +81,54 @@ macro(ie_parse_ci_build_number)
endif()
endforeach()
set(ie_hpp_version_is_found ON)
set(ov_hpp_version_is_found ON)
endmacro()
# detect OpenVINO version via ie_version.hpp
ie_get_hpp_version()
# detect OpenVINO version via openvino/core/version.hpp and ie_version.hpp
ov_get_hpp_version()
if(ie_hpp_version_is_found)
foreach(var IE_VERSION_MAJOR IE_VERSION_MINOR IE_VERSION_PATCH)
if(ov_hpp_version_is_found)
foreach(var OpenVINO_VERSION_MAJOR OpenVINO_VERSION_MINOR OpenVINO_VERSION_PATCH)
if(DEFINED ${var} AND NOT ${var} EQUAL ${var}_HPP)
message(FATAL_ERROR "${var} parsed from CI_BUILD_NUMBER (${${var}}) \
and from ie_version.hpp (${${var}_HPP}) are different")
and from openvino/core/version.hpp (${${var}_HPP}) are different")
else()
# CI_BUILD_NUMBER is not defined well, take info from ie_verison.hpp as a baseline
# CI_BUILD_NUMBER is not defined well, take info from openvino/core/version.hpp as a baseline
set(${var} ${${var}_HPP})
endif()
endforeach()
endif()
set(IE_VERSION "${IE_VERSION_MAJOR}.${IE_VERSION_MINOR}.${IE_VERSION_PATCH}")
message(STATUS "OpenVINO version is ${IE_VERSION}")
set(OpenVINO_VERSION "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}")
message(STATUS "OpenVINO version is ${OpenVINO_VERSION} (Build ${OpenVINO_VERSION_BUILD})")
if(NOT the_whole_version_is_defined_by_ci)
# create CI_BUILD_NUMBER
branchName(GIT_BRANCH)
commitHash(GIT_COMMIT_HASH)
if(NOT GIT_BRANCH STREQUAL "master")
set(GIT_BRANCH_POSTFIX "-${GIT_BRANCH}")
endif()
set(CI_BUILD_NUMBER "${OpenVINO_VERSION}-${OpenVINO_VERSION_BUILD}-${GIT_COMMIT_HASH}${GIT_BRANCH_POSTFIX}")
unset(GIT_BRANCH_POSTFIX)
unset(GIT_BRANCH)
unset(GIT_COMMIT_HASH)
else()
unset(the_whole_version_is_defined_by_ci)
endif()
endmacro()
# provides OpenVINO version
# 1. If CI_BUILD_NUMBER is defined, parses this information
# 2. Otherwise, parses openvino/core/version.hpp
if (DEFINED ENV{CI_BUILD_NUMBER})
set(CI_BUILD_NUMBER $ENV{CI_BUILD_NUMBER})
else()
branchName(GIT_BRANCH)
commitHash(GIT_COMMIT_HASH)
set(custom_build "custom_${GIT_BRANCH}_${GIT_COMMIT_HASH}")
set(CI_BUILD_NUMBER "${custom_build}")
endif()
# provides Inference Engine version
# 1. If CI_BUILD_NUMBER is defined, parses this information
# 2. Otherwise, parses ie_version.hpp
ie_parse_ci_build_number()
ov_parse_ci_build_number()
macro (addVersionDefines FILE)
set(__version_file ${FILE})

View File

@@ -2,9 +2,9 @@
# SPDX-License-Identifier: Apache-2.0
#
set(IE_VS_VER_FILEVERSION_QUAD "${IE_VERSION_MAJOR},${IE_VERSION_MINOR},${IE_VERSION_PATCH},0")
set(IE_VS_VER_PRODUCTVERSION_QUAD "${IE_VERSION_MAJOR},${IE_VERSION_MINOR},${IE_VERSION_PATCH},0")
set(IE_VS_VER_FILEVERSION_STR "${IE_VERSION_MAJOR}.${IE_VERSION_MINOR}.${IE_VERSION_PATCH}.0")
set(IE_VS_VER_FILEVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_PRODUCTVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_FILEVERSION_STR "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}.${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_COMPANY_NAME_STR "Intel Corporation")
set(IE_VS_VER_PRODUCTVERSION_STR "${CI_BUILD_NUMBER}")

View File

@@ -6,15 +6,19 @@ function(ie_generate_dev_package_config)
# dummy check that OpenCV is here
find_package(OpenCV QUIET)
set(all_dev_targets gflags ov_runtime_libraries)
foreach(component IN LISTS openvino_export_components)
# export all targets with prefix and use them during extra modules build
export(TARGETS ${${component}} NAMESPACE IE::
APPEND FILE "${CMAKE_BINARY_DIR}/${component}_dev_targets.cmake")
APPEND FILE "${CMAKE_BINARY_DIR}/${component}_dev_targets.cmake")
list(APPEND all_dev_targets ${${component}})
endforeach()
add_custom_target(ie_dev_targets DEPENDS ${all_dev_targets})
# if we've found system gflags
if(gflags_DIR)
set(gflags_BINARY_DIR "${gflags_DIR}")
endif()
configure_package_config_file("${OpenVINO_SOURCE_DIR}/cmake/templates/InferenceEngineDeveloperPackageConfig.cmake.in"
"${CMAKE_BINARY_DIR}/InferenceEngineDeveloperPackageConfig.cmake"
INSTALL_DESTINATION share # not used
@@ -30,18 +34,22 @@ function(ov_generate_dev_package_config)
# dummy check that OpenCV is here
find_package(OpenCV QUIET)
set(all_dev_targets gflags ov_runtime_libraries)
foreach(component IN LISTS openvino_export_components)
string(FIND "${component}" "_legacy" index)
if (index EQUAL -1)
if(index EQUAL -1)
# export all targets with prefix and use them during extra modules build
export(TARGETS ${${component}} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/ov_${component}_dev_targets.cmake")
APPEND FILE "${CMAKE_BINARY_DIR}/ov_${component}_dev_targets.cmake")
list(APPEND all_dev_targets ${${component}})
endif()
endforeach()
add_custom_target(ov_dev_targets DEPENDS ${all_dev_targets})
# if we've found system gflags
if(gflags_DIR)
set(gflags_BINARY_DIR "${gflags_DIR}")
endif()
configure_package_config_file("${OpenVINO_SOURCE_DIR}/cmake/templates/OpenVINODeveloperPackageConfig.cmake.in"
"${CMAKE_BINARY_DIR}/OpenVINODeveloperPackageConfig.cmake"
INSTALL_DESTINATION share # not used
@@ -59,14 +67,14 @@ endfunction()
function(register_extra_modules)
# post export
openvino_developer_export_targets(COMPONENT core TARGETS inference_engine)
openvino_developer_export_targets(COMPONENT core TARGETS ngraph)
openvino_developer_export_targets(COMPONENT core_legacy TARGETS inference_engine)
openvino_developer_export_targets(COMPONENT core_legacy TARGETS ngraph)
set(InferenceEngineDeveloperPackage_DIR "${CMAKE_CURRENT_BINARY_DIR}/runtime")
set(OpenVINODeveloperPackage_DIR "${CMAKE_BINARY_DIR}/runtime")
function(generate_fake_dev_package NS)
if (NS STREQUAL "openvino")
if(NS STREQUAL "openvino")
set(devconfig_file "${OpenVINODeveloperPackage_DIR}/OpenVINODeveloperPackageConfig.cmake")
else()
set(devconfig_file "${InferenceEngineDeveloperPackage_DIR}/InferenceEngineDeveloperPackageConfig.cmake")
@@ -81,10 +89,6 @@ function(register_extra_modules)
file(APPEND "${devconfig_file}" "add_library(${NS}::${target} ALIAS ${target})\n")
endif()
endforeach()
if ("${NS}" STREQUAL "openvino")
file(APPEND "${devconfig_file}" "add_library(${NS}::runtime ALIAS openvino)\n")
file(APPEND "${devconfig_file}" "add_library(${NS}::runtime::dev ALIAS openvino_dev)\n")
endif()
endfunction()
generate_fake_dev_package("openvino")
@@ -137,7 +141,7 @@ ie_generate_dev_package_config()
ov_generate_dev_package_config()
# extra modules must be registered after inference_engine library
# and all other IE common libraries (ov_runtime_libraries) are creared
# and all other OpenVINO Core libraries are creared
# because 'register_extra_modules' creates fake InferenceEngineDeveloperPackageConfig.cmake
# with all imported developer targets
register_extra_modules()

View File

@@ -59,7 +59,7 @@ cmake_dependent_option (ENABLE_WHEEL "Build wheel packages for PyPi" OFF
# Inference Engine specific options
#
# "MKL-DNN library based on OMP or TBB or Sequential implementation: TBB|OMP|SEQ"
# "OneDNN library based on OMP or TBB or Sequential implementation: TBB|OMP|SEQ"
if(X86 OR ARM OR (MSVC AND (ARM OR AARCH64)) )
set(THREADING_DEFAULT "SEQ")
else()

View File

@@ -2,9 +2,9 @@
# SPDX-License-Identifier: Apache-2.0
#
set(PACKAGE_VERSION_MAJOR @IE_VERSION_MAJOR@)
set(PACKAGE_VERSION_MINOR @IE_VERSION_MINOR@)
set(PACKAGE_VERSION_PATCH @IE_VERSION_PATCH@)
set(PACKAGE_VERSION_MAJOR @OpenVINO_VERSION_MAJOR@)
set(PACKAGE_VERSION_MINOR @OpenVINO_VERSION_MINOR@)
set(PACKAGE_VERSION_PATCH @OpenVINO_VERSION_PATCH@)
set(PACKAGE_VERSION "${PACKAGE_VERSION_MAJOR}.${PACKAGE_VERSION_MINOR}.${PACKAGE_VERSION_PATCH}")
set(PACKAGE_VERSION_EXACT False)

View File

@@ -12,19 +12,20 @@ set_and_check(OpenVINO_MAIN_SOURCE_DIR "@OpenVINO_SOURCE_DIR@") # KMB
# Variables to export in plugin's projects
set(ie_options "@IE_OPTIONS@;CMAKE_BUILD_TYPE;CMAKE_SKIP_RPATH")
list(APPEND ie_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER)
set(ie_options "@IE_OPTIONS@")
list(APPEND ie_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER
CMAKE_BUILD_TYPE CMAKE_SKIP_RPATH CMAKE_INSTALL_PREFIX)
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
message(STATUS "The following CMake options are exported from Inference Engine Developer package")
message("")
message(" ")
foreach(option IN LISTS ie_options)
if(NOT DEFINED "${option}")
load_cache("${cache_path}" READ_WITH_PREFIX "" ${option})
endif()
message(" ${option}: ${${option}}")
endforeach()
message("")
message(" ")
# for samples in 3rd party projects
set_and_check(gflags_DIR "@gflags_BINARY_DIR@")
@@ -48,11 +49,6 @@ find_dependency(ngraph
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
find_dependency(OpenVINODeveloperPackage
PATHS "${CMAKE_CURRENT_LIST_DIR}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
if(TARGET openvino::runtime AND NOT TARGET IE::runtime)
add_library(IE::runtime INTERFACE IMPORTED)
set_target_properties(IE::runtime PROPERTIES
@@ -70,6 +66,18 @@ foreach(component @openvino_export_components@)
include("${CMAKE_CURRENT_LIST_DIR}/${component}_dev_targets.cmake")
endforeach()
if(TARGET IE::ov_core_dev AND NOT TARGET openvino::core::dev)
add_library(openvino::core::dev INTERFACE IMPORTED)
set_target_properties(openvino::core::dev PROPERTIES
INTERFACE_LINK_LIBRARIES IE::ov_core_dev)
endif()
if(TARGET IE::runtime::dev AND NOT TARGET openvino::runtime::dev)
add_library(openvino::runtime::dev INTERFACE IMPORTED)
set_target_properties(openvino::runtime::dev PROPERTIES
INTERFACE_LINK_LIBRARIES IE::runtime::dev)
endif()
if(ENABLE_SYSTEM_PUGIXML)
find_dependency(PugiXML)
set_property(TARGET pugixml PROPERTY IMPORTED_GLOBAL TRUE)
@@ -86,13 +94,11 @@ endif()
# Extra Compile Flags
#
if(NOT MSVC)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wno-error=unused-variable)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wno-error=unused-but-set-variable)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-suggest-override")
endif()
ie_add_compiler_flags(-Wno-error=unused-but-set-variable)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-suggest-override")
endif()
endif()

View File

@@ -2,9 +2,9 @@
# SPDX-License-Identifier: Apache-2.0
#
set(PACKAGE_VERSION_MAJOR @IE_VERSION_MAJOR@)
set(PACKAGE_VERSION_MINOR @IE_VERSION_MINOR@)
set(PACKAGE_VERSION_PATCH @IE_VERSION_PATCH@)
set(PACKAGE_VERSION_MAJOR @OpenVINO_VERSION_MAJOR@)
set(PACKAGE_VERSION_MINOR @OpenVINO_VERSION_MINOR@)
set(PACKAGE_VERSION_PATCH @OpenVINO_VERSION_PATCH@)
set(PACKAGE_VERSION "${PACKAGE_VERSION_MAJOR}.${PACKAGE_VERSION_MINOR}.${PACKAGE_VERSION_PATCH}")
set(PACKAGE_VERSION_EXACT False)

View File

@@ -150,10 +150,15 @@ if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND
set(enable_system_tbb "@ENABLE_SYSTEM_TBB@")
if(NOT enable_system_tbb)
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
# see https://stackoverflow.com/questions/28070810/cmake-generate-error-on-windows-as-it-uses-as-escape-seq
if(DEFINED ENV{TBBROOT})
# see https://stackoverflow.com/questions/28070810/cmake-generate-error-on-windows-as-it-uses-as-escape-seq
file(TO_CMAKE_PATH $ENV{TBBROOT} ENV_TBBROOT)
endif()
if(DEFINED ENV{TBB_DIR})
file(TO_CMAKE_PATH $ENV{TBB_DIR} ENV_TBB_DIR)
endif()
set(find_package_tbb_extra_args
CONFIG
PATHS
@@ -161,7 +166,7 @@ if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND
"${ENV_TBBROOT}/lib64/cmake/TBB"
"${ENV_TBBROOT}/lib/cmake/TBB"
"${ENV_TBBROOT}/lib/cmake/tbb"
# "$ENV{TBB_DIR}"
"${ENV_TBB_DIR}"
# for custom TBB exposed via cmake -DTBBROOT=<custom TBB root>
"${TBBROOT}/cmake"
# _tbb_dir points to TBB_DIR (custom | temp | system) used to build OpenVINO
@@ -206,7 +211,8 @@ if(NOT TARGET openvino)
set(_ov_as_external_package ON)
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
# TODO: WA for cmake version < 3.16
# WA for cmake version < 3.16 which does not export
# IMPORTED_LINK_DEPENDENT_LIBRARIES_** properties if no PUBLIC dependencies for the library
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND TBB_FOUND)
foreach (type RELEASE DEBUG RELWITHDEBINFO MINSIZEREL)
set_property(TARGET openvino::runtime APPEND PROPERTY IMPORTED_LINK_DEPENDENT_LIBRARIES_${type} "TBB::tbb;TBB::tbbmalloc")

View File

@@ -10,19 +10,20 @@ set_and_check(OpenVINO_SOURCE_DIR "@OpenVINO_SOURCE_DIR@")
# Variables to export in plugin's projects
set(ie_options "@IE_OPTIONS@;CMAKE_BUILD_TYPE;CMAKE_SKIP_RPATH")
list(APPEND ie_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER)
set(ov_options "@IE_OPTIONS@")
list(APPEND ov_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER
CMAKE_BUILD_TYPE CMAKE_SKIP_RPATH CMAKE_INSTALL_PREFIX)
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
message(STATUS "The following CMake options are exported from OpenVINO Developer package")
message("")
foreach(option IN LISTS ie_options)
message(" ")
foreach(option IN LISTS ov_options)
if(NOT DEFINED "${option}")
load_cache("${cache_path}" READ_WITH_PREFIX "" ${option})
endif()
message(" ${option}: ${${option}}")
endforeach()
message("")
message(" ")
# for samples in 3rd party projects
set_and_check(gflags_DIR "@gflags_BINARY_DIR@")
@@ -51,10 +52,10 @@ endforeach()
if(ENABLE_SYSTEM_PUGIXML)
find_dependency(PugiXML)
set_property(TARGET pugixml PROPERTY IMPORTED_GLOBAL TRUE)
add_library(IE::pugixml ALIAS pugixml)
add_library(openvino::pugixml ALIAS pugixml)
endif()
# inherit OpenCV from main IE project if enabled
# inherit OpenCV from main OpenVINO project if enabled
if ("@OpenCV_FOUND@")
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)
find_dependency(OpenCV)
@@ -64,13 +65,11 @@ endif()
# Extra Compile Flags
#
if(NOT MSVC)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wno-error=unused-variable)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wno-error=unused-but-set-variable)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-suggest-override")
endif()
ie_add_compiler_flags(-Wno-error=unused-but-set-variable)
if(SUGGEST_OVERRIDE_SUPPORTED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-suggest-override")
endif()
endif()

View File

@@ -108,7 +108,7 @@ if(ENABLE_TESTS)
message(STATUS "pip version is ${pip3_version}")
set(args --quiet)
if(pip3_version VERSION_GREATER 20.2.2)
if(pip3_version VERSION_GREATER 20.2.2 AND pip3_version VERSION_LESS 20.3.0)
list(APPEND args --use-feature=2020-resolver)
endif()

View File

@@ -0,0 +1,8 @@
# Introduction to OpenVINO™ Deployment {#openvino_docs_deployment_guide_introduction}
Once you have a model that meets both OpenVINO™ requirements and your own, you can choose among several ways of deploying it with your application:
* [Run inference and develop your app with OpenVINO™ Runtime](../OV_Runtime_UG/openvino_intro.md).
* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
* [Deploy your model online with the OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).

View File

@@ -0,0 +1,111 @@
# OpenVINO™ Deep Learning Workbench Overview {#workbench_docs_Workbench_DG_Introduction}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
workbench_docs_Workbench_DG_Install
workbench_docs_Workbench_DG_Work_with_Models_and_Sample_Datasets
Tutorials <workbench_docs_Workbench_DG_Tutorials>
User Guide <workbench_docs_Workbench_DG_User_Guide>
workbench_docs_Workbench_DG_Troubleshooting
@endsphinxdirective
Deep Learning Workbench (DL Workbench) is an official OpenVINO™ graphical interface designed to make the production of pretrained deep learning Computer Vision and Natural Language Processing models significantly easier.
Minimize the inference-to-deployment workflow timing for neural models right in your browser: import a model, analyze its performance and accuracy, visualize the outputs, optimize and make the final model deployment-ready in a matter of minutes. DL Workbench takes you through the full OpenVINO™ workflow, providing the opportunity to learn about various toolkit components.
![](../img/openvino_dl_wb.png)
@sphinxdirective
.. link-button:: workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud
:type: ref
:text: Run DL Workbench in Intel® DevCloud
:classes: btn-primary btn-block
@endsphinxdirective
DL Workbench enables you to get a detailed performance assessment, explore inference configurations, and obtain an optimized model ready to be deployed on various Intel® configurations, such as client and server CPU, Intel® Processor Graphics (GPU), Intel® Movidius™ Neural Compute Stick 2 (NCS 2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
DL Workbench also provides the [JupyterLab environment](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Jupyter_Notebooks.html#doxid-workbench-docs-workbench-d-g-jupyter-notebooks) that helps you get started quickly with the OpenVINO™ API and command-line interface (CLI). Follow the full OpenVINO workflow created for your model and learn about the different toolkit components.
## Video
@sphinxdirective
.. list-table::
* - .. raw:: html
<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="560"
src="https://www.youtube.com/embed/on8xSSTKCt8">
</iframe>
* - **DL Workbench Introduction**. Duration: 1:31
@endsphinxdirective
## User Goals
DL Workbench helps achieve your goals depending on the stage of your deep learning journey.
If you are a beginner in the deep learning field, the DL Workbench provides you with
learning opportunities:
* Learn what neural networks are, how they work, and how to examine their architectures.
* Learn the basics of neural network analysis and optimization before production.
* Get familiar with the OpenVINO™ ecosystem and its main components without installing it on your system.
If you have enough experience with neural networks, DL Workbench provides you with a
convenient web interface to optimize your model and prepare it for production:
* Measure and interpret model performance.
* Tune the model for enhanced performance.
* Analyze the quality of your model and visualize output.
## General Workflow
The diagram below illustrates the typical DL Workbench workflow. Click to see the full-size image:
![](../img/openvino_dl_wb_diagram_overview.svg)
Get a quick overview of the workflow in the DL Workbench User Interface:
![](../img/openvino_dl_wb_workflow.gif)
## OpenVINO™ Toolkit Components
The intuitive web-based interface of the DL Workbench enables you to easily use various
OpenVINO™ toolkit components:
| Component | Description |
|-----------|-------------|
| [Open Model Zoo](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html) | Get access to the collection of high-quality pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [Intel-trained](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html) models trained to resolve a variety of different tasks. |
| [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | Optimize and transform models trained in supported frameworks to the IR format. <br>Supported frameworks include TensorFlow\*, Caffe\*, Kaldi\*, MXNet\*, and the ONNX\* format. |
| [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html) | Estimate deep learning model inference performance on supported devices. |
| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) | Evaluate the accuracy of a model by collecting one or several metric values. |
| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html) | Optimize pretrained models by lowering the precision of a model from floating-point (FP32 or FP16) to integer (INT8) precision, without the need to retrain or fine-tune models. |
@sphinxdirective
.. link-button:: workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud
:type: ref
:text: Run DL Workbench in Intel® DevCloud
:classes: btn-outline-primary
@endsphinxdirective
## Contact Us
* [DL Workbench GitHub Repository](https://github.com/openvinotoolkit/workbench)
* [DL Workbench on Intel Community Forum](https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit)
* [DL Workbench Gitter Chat](https://gitter.im/dl-workbench/general?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&content=body)

View File

@@ -0,0 +1,12 @@
# Introduction to Model Processing {#openvino_docs_model_processing_introduction}
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results (a small sketch follows the list):
* [Browse a database of models for use in your projects](../model_zoo.md).
* [Convert different model formats to the OpenVINO IR format](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
* [Automate model-related tasks with Model Downloader and additional OMZ Tools](https://docs.openvino.ai/latest/omz_tools_downloader.html).
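As a small illustration of the options above, some formats can also be read directly, without an explicit conversion step (a sketch assuming the `openvino.runtime` Python API and a hypothetical `model.onnx` file):

```python
# Sketch: the ONNX frontend lets read_model consume the file directly;
# conversion to IR via Model Optimizer remains the offline alternative.
from openvino.runtime import Core

model = Core().read_model("model.onnx")   # hypothetical ONNX file
print(len(model.inputs), len(model.outputs))
```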

View File

@@ -0,0 +1,76 @@
# OpenVINO™ Ecosystem Overview {#openvino_ecosystem}
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
### OpenVINO™ Model Server (OVMS)
OpenVINO Model Server is a scalable, high-performance solution for serving deep learning models optimized for Intel® architectures. The server uses Inference Engine libraries as a backend and exposes gRPC and HTTP/REST interfaces for inference that are fully compatible with TensorFlow Serving.
More resources:
* [OpenVINO documentation](https://docs.openvino.ai/latest/openvino_docs_ovms.html)
* [Docker Hub](https://hub.docker.com/r/openvino/model_server)
* [GitHub](https://github.com/openvinotoolkit/model_server)
* [Red Hat Ecosystem Catalog](https://catalog.redhat.com/software/container-stacks/detail/60649e41ccfb383fe395a167)
### Neural Network Compression Framework (NNCF)
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
More resources:
* [Documentation](@ref docs_nncf_introduction)
* [GitHub](https://github.com/openvinotoolkit/nncf)
* [PyPI](https://pypi.org/project/nncf/)
### OpenVINO™ Security Add-on
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
More resources:
* [documentation](https://docs.openvino.ai/latest/ovsa_get_started.html)
* [GitHub](https://github.com/openvinotoolkit/security_addon)
### OpenVINO™ integration with TensorFlow (OVTF)
A solution empowering TensorFlow developers with OpenVINO's optimization capabilities. With just two lines of code in your application, you can offload inference to OpenVINO, while keeping the TensorFlow API.
More resources:
* [documentation](https://github.com/openvinotoolkit/openvino_tensorflow)
* [PyPI](https://pypi.org/project/openvino-tensorflow/)
* [GitHub](https://github.com/openvinotoolkit/openvino_tensorflow)
### DL Streamer
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
More resources:
* [documentation on GitHub](https://openvinotoolkit.github.io/dlstreamer_gst/)
* [installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
### DL Workbench
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow; import, analyze, optimize, and build your pre-trained models. You can do all that by visiting [Intel® DevCloud for the Edge](https://software.intel.com/content/www/us/en/develop/tools/devcloud.html) and launching DL Workbench online.
More resources:
* [documentation](dl_workbench_overview.md)
* [Docker Hub](https://hub.docker.com/r/openvino/workbench)
* [PyPI](https://pypi.org/project/openvino-workbench/)
### OpenVINO™ Training Extensions (OTE)
A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
More resources:
* [GitHub](https://github.com/openvinotoolkit/training_extensions)
### Computer Vision Annotation Tool (CVAT)
An online, interactive video and image annotation tool for computer vision purposes.
More resources:
* [documentation on GitHub](https://openvinotoolkit.github.io/cvat/docs/)
* [web application](https://cvat.org/)
* [Docker Hub](https://hub.docker.com/r/openvino/cvat_server)
* [GitHub](https://github.com/openvinotoolkit/cvat)
### Dataset Management Framework (Datumaro)
A framework and CLI tool to build, transform, and analyze datasets.
More resources:
* [documentation on GitHub](https://openvinotoolkit.github.io/datumaro/docs/)
* [PyPI](https://pypi.org/project/datumaro/)
* [GitHub](https://github.com/openvinotoolkit/datumaro)

View File

@@ -0,0 +1,44 @@
# OpenVINO™ integration with TensorFlow {#ovtf_integration}
**OpenVINO™ integration with TensorFlow** is a solution for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. By adding just two lines of code you can now take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a range of Intel® computation devices.
This is all you need:
```python
import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')
```
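For a slightly fuller picture, a hedged sketch of the offload in context; `list_backends` and the Keras model are illustrative and assume the `tensorflow` and `openvino-tensorflow` packages are installed:

```python
# Sketch: choose an OpenVINO backend, then run a TensorFlow model as usual;
# OVTF transparently clusters and offloads the supported parts of the graph.
import tensorflow as tf
import openvino_tensorflow as ovtf

print(ovtf.list_backends())        # backends available on this machine
ovtf.set_backend("CPU")            # pick one of the listed backends

model = tf.keras.applications.MobileNetV2(weights=None)  # any TF model
dummy = tf.zeros((1, 224, 224, 3))
print(model(dummy).shape)          # inference runs through OVTF where possible
```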
**OpenVINO™ integration with TensorFlow** accelerates inference across many AI models on a variety of Intel® technologies, such as:
- Intel® CPUs
- Intel® integrated GPUs
- Intel® Movidius™ Vision Processing Units - referred to as VPU
- Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL
> **NOTE**: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ solutions.
To find out more about the product itself, as well as learn how to use it in your project, check its dedicated [GitHub repository](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs).
To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples folder](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples) in our GitHub repository.
Sample tutorials are also hosted on [Intel® DevCloud](https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html). The demo applications are implemented using Jupyter Notebooks. You can execute them interactively on Intel® DevCloud nodes and compare the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.
## License
**OpenVINO™ integration with TensorFlow** is licensed under [Apache License Version 2.0](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/LICENSE).
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.
## Support
Submit your questions, feature requests and bug reports via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).
## How to Contribute
We welcome community contributions to **OpenVINO™ integration with TensorFlow**. If you have an idea for improvement:
* Share your proposal via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).
* Submit a [pull request](https://github.com/openvinotoolkit/openvino_tensorflow/pulls).
We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for it. Once we have verified your pull request and it meets the requirements above, we will merge it into the repository.
---
\* Other names and brands may be claimed as the property of others.

View File

@@ -9,13 +9,13 @@
openvino_docs_Extensibility_UG_add_openvino_ops
openvino_docs_Extensibility_UG_Frontend_Extensions
openvino_docs_Extensibility_UG_GPU
openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel
openvino_docs_Extensibility_UG_VPU_Kernel
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
@endsphinxdirective
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi. The list of supported operations is different for
TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for
each of the supported frameworks. To see the operations supported by your framework, refer to
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
@@ -52,7 +52,7 @@ Depending on model format used for import, mapping of custom operation is implem
2. If model is represented in TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with `read_model` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
The existence of two parallel approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and Apache MXNet). Model Optimizer can use both types of frontends, in contrast to the direct import of a model with the `read_model` method, which can use the new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
If you are implementing extensions for ONNX or PaddlePaddle new frontends and plan to use Model Optimizer `--extension` option for model conversion, then the extensions should be

View File

@@ -1,4 +1,4 @@
# How to Implement Custom Layers for VPU (Intel® Neural Compute Stick 2) {#openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel}
# How to Implement Custom Layers for VPU (Intel® Neural Compute Stick 2) {#openvino_docs_Extensibility_UG_VPU_Kernel}
To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for one of the VPUs, the Intel® Neural Compute Stick 2 device, which uses the MYRIAD device plugin.

View File

@@ -37,8 +37,8 @@ The implementation `CompileNetwork` is fully device-specific.
The function accepts a const shared pointer to `ngraph::Function` object and performs the following steps:
1. Applies ngraph passes using `TransformNetwork` function, which defines plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find how to use and configure Low Precisions Transformations in [Low Precision Transformations](@ref openvino_docs_OV_UG_lpt) guide.
2. Maps the transformed graph to a backend specific graph representation (for example, to MKLDNN graph for Intel CPU).
1. Applies nGraph passes using `TransformNetwork` function, which defines plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find how to use and configure Low Precisions Transformations in [Low Precision Transformations](@ref openvino_docs_OV_UG_lpt) guide.
2. Maps the transformed graph to a backend specific graph representation (for example, to CPU plugin internal graph representation).
3. Allocates and fills memory for graph weights, backend specific memory handles and so on.
@snippet src/template_executable_network.cpp executable_network:map_graph

View File

@@ -2,7 +2,7 @@
Inference Engine Plugin usually represents a wrapper around a backend. Backends can be:
- OpenCL-like backend (e.g. clDNN library) for GPU devices.
- MKLDNN backend for Intel CPU devices.
- oneDNN backend for Intel CPU devices.
- NVIDIA cuDNN for NVIDIA GPUs.
The responsibility of Inference Engine Plugin:

View File

@@ -9,10 +9,10 @@
<tab type="user" title="Attributes" url="@ref openvino_docs_OV_UG_lpt_attributes">
<tab type="user" title="AvgPoolPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved"/>
<tab type="user" title="IntervalsAlignment" url="@ref openvino_docs_OV_UG_lpt_IntervalsAlignment"/>
<tab type="user" title="PerTensorQuantization" url="@ref openvino_docs_OV_UG_lpt_PerTensorQuantization"/>
<tab type="user" title="PrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_PrecisionPreserved"/>
<tab type="user" title="Precisions" url="@ref openvino_docs_OV_UG_lpt_Precisions"/>
<tab type="user" title="QuantizationAlignment" url="@ref openvino_docs_OV_UG_lpt_QuantizationAlignment"/>
<tab type="user" title="QuantizationGranularity" url="@ref openvino_docs_OV_UG_lpt_QuantizationGranularity"/>
</tab>
<tab type="user" title="Step 1. Prerequisites transformations" url="@ref openvino_docs_OV_UG_lpt_step1_prerequisites">
<tab type="user" title="LinOpSequenceFusion" url="@ref openvino_docs_OV_UG_lpt_LinOpSequenceFusion"/>

View File

@@ -1,11 +0,0 @@
# PerTensorQuantization attribute {#openvino_docs_OV_UG_lpt_PerTensorQuantization}
The `ngraph::PerTensorQuantizationAttribute` class represents the `PerTensorQuantization` attribute.
The attribute defines whether the operation input port requires per-tensor quantization.
| Property name | Values |
|---------------|----------------------------------------------|
| Required | Yes |
| Defined | Operation, input ports |
| Properties | |

View File

@@ -0,0 +1,11 @@
# QuantizationGranularity attribute {#openvino_docs_OV_UG_lpt_QuantizationGranularity}
The `ngraph::QuantizationGranularityAttribute` class represents the `QuantizationGranularity` attribute.
The attribute defines the quantization granularity of operation inputs.
| Property name | Values |
|---------------|----------------------------------------------|
| Required | No |
| Defined | Input ports |
| Properties | Quantization granularity |

View File

@@ -8,29 +8,30 @@
:hidden:
AvgPoolPrecisionPreserved <openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved>
IntervalsAlignment <openvino_docs_OV_UG_lpt_IntervalsAlignment>
PerTensorQuantization <openvino_docs_OV_UG_lpt_PerTensorQuantization>
IntervalsAlignment <openvino_docs_OV_UG_lpt_IntervalsAlignment>
PrecisionPreserved <openvino_docs_OV_UG_lpt_PrecisionPreserved>
Precisions <openvino_docs_OV_UG_lpt_Precisions>
QuantizationAlignment <openvino_docs_OV_UG_lpt_QuantizationAlignment>
QuantizationGranularity <openvino_docs_OV_UG_lpt_QuantizationGranularity>
@endsphinxdirective
## Introduction
| Name | Target | Required | Mutable |
|-------------------------------------------------------------------------------------|------------------------|----------|---------|
| [AvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved) | Precision | No | Yes |
| [IntervalsAlignment](@ref openvino_docs_OV_UG_lpt_IntervalsAlignment) | Quantization interval | Yes | Yes |
| [PerTensorQuantization](@ref openvino_docs_OV_UG_lpt_PerTensorQuantization) | Precision | Yes | No |
| [PrecisionPreserved](@ref openvino_docs_OV_UG_lpt_PrecisionPreserved) | Precision | Yes | Yes |
| [Precisions](@ref openvino_docs_OV_UG_lpt_Precisions) | Precision | Yes | Yes |
| [QuantizationAlignment](@ref openvino_docs_OV_UG_lpt_QuantizationAlignment) | Quantization alignment | Yes | Yes |
| Name | Target | Required | Mutable |
|-------------------------------------------------------------------------------------|--------------------------|----------|---------|
| [AvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved) | Precision | No | Yes |
| [IntervalsAlignment](@ref openvino_docs_OV_UG_lpt_IntervalsAlignment) | Quantization interval | Yes | Yes |
| [PrecisionPreserved](@ref openvino_docs_OV_UG_lpt_PrecisionPreserved) | Precision | Yes | Yes |
| [Precisions](@ref openvino_docs_OV_UG_lpt_Precisions) | Precision | Yes | Yes |
| [QuantizationAlignment](@ref openvino_docs_OV_UG_lpt_QuantizationAlignment) | Quantization granularity | Yes | Yes |
| [QuantizationGranularity](@ref openvino_docs_OV_UG_lpt_QuantizationGranularity) | Quantization granularity | Yes | No |
> `Target` attribute group defines attribute usage during model transformation for the best performance:
> - `Precision` - the attribute defines the most optimal output port precision.
> - `Quantization interval` - the attribute defines quantization interval.
> - `Quantization alignment` - the attribute defines quantization alignment: per-channel or per-tensor quantization.
> - `Quantization alignment` - the attribute defines quantization granularity in runtime: per-channel or per-tensor quantization.
> - `Quantization granularity` - the attribute is set by plugin to define quantization granularity: per-channel or per-tensor quantization.
>
> `Required` attribute group defines if attribute usage is required to get an optimal model during transformation:
> - `Yes` - the attribute is used by all OpenVINO plugins for low-precision optimization.

View File

@@ -1,4 +1,4 @@
# Convert model with Model Optimizer {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
# Converting Models with Model Optimizer {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
@sphinxdirective
@@ -9,6 +9,7 @@
:hidden:
openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques
openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model
openvino_docs_MO_DG_Additional_Optimization_Use_Cases
openvino_docs_MO_DG_FP16_Compression
@@ -24,54 +25,49 @@
@endsphinxdirective
## Introduction
Model Optimizer is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
Using Model Optimizer tool assumes you already have a deep learning model trained using one of the supported frameworks: TensorFlow, PyTorch, PaddlePaddle, MXNet, Caffe, Kaldi, or represented in ONNX* format. Model Optimizer produces an Intermediate Representation (IR) of the model, which can be inferred with [OpenVINO™ Runtime](../OV_Runtime_UG/openvino_intro.md).
To use it, you need a pre-trained deep learning model in one of the supported formats: TensorFlow, PyTorch, PaddlePaddle, MXNet, Caffe, Kaldi, or ONNX. Model Optimizer converts the model to the OpenVINO Intermediate Representation format (IR), which you can infer later with [OpenVINO™ Runtime](../OV_Runtime_UG/openvino_intro.md).
> **NOTE**: Model Optimizer does not infer models. Model Optimizer is an offline tool that converts a model into IR and optimizes before the inference takes place.
Note that Model Optimizer does not infer models.
The scheme below illustrates the typical workflow for deploying a trained deep learning model:
The figure below illustrates the typical workflow for deploying a trained deep learning model:
![](img/BASIC_FLOW_MO_simplified.svg)
The IR is a pair of files describing the model:
where IR is a pair of files describing the model:
* <code>.xml</code> - Describes the network topology
* <code>.xml</code> - Describes the network topology.
* <code>.bin</code> - Contains the weights and biases binary data.
> **NOTE**: The generated IR can be additionally optimized for inference by [Post-training optimization](../../tools/pot/docs/Introduction.md)
The generated IR can be additionally optimized for inference by [Post-training optimization](../../tools/pot/docs/Introduction.md)
> that applies post-training quantization methods.
> **TIP**: You also can work with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Introduction.html) (DL Workbench).
> [DL Workbench](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Introduction.html) is a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare performance of deep learning models.
> **TIP**: You can also work with Model Optimizer in OpenVINO™ [Deep Learning Workbench (DL Workbench)](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Introduction.html), which is a web-based tool with GUI for optimizing, fine-tuning, analyzing, visualizing, and comparing performance of deep learning models.
## Run Model Optimizer
## How to Run Model Optimizer
To convert the model to IR, run Model Optimizer:
To convert a model to IR, you can run Model Optimizer by using the following command:
```sh
mo --input_model INPUT_MODEL
```
If out-of-the-box conversion (only the `--input_model` parameter is specified) is not succeed,
try to use parameters for overriding input shapes and cutting the model, mentioned below.
If the out-of-the-box conversion (only the `--input_model` parameter is specified) is not successful, use the parameters mentioned below to override input shapes and cut the model (short examples follow this list):
To override original input shapes for model conversion, Model Optimizer provides two parameters: `--input` and `--input_shape`.
For more information about these parameters, refer to [Setting Input Shapes](prepare_model/convert_model/Converting_Model.md).
- Model Optimizer provides two parameters to override original input shapes for model conversion: `--input` and `--input_shape`.
For more information about these parameters, refer to the [Setting Input Shapes](prepare_model/convert_model/Converting_Model.md) guide.
To cut off unwanted parts of a model, such as unsupported operations and training sub-graphs,
the `--input` and `--output` parameters can be used, defining new inputs and outputs of the converted model.
For a more detailed description, refer to [Cutting Off Parts of a Model](prepare_model/convert_model/Cutting_Model.md).
- To cut off unwanted parts of a model (such as unsupported operations and training sub-graphs),
use the `--input` and `--output` parameters to define new inputs and outputs of the converted model.
For a more detailed description, refer to the [Cutting Off Parts of a Model](prepare_model/convert_model/Cutting_Model.md) guide.
Also, you can insert additional input pre-processing sub-graphs into the converted model using
You can also insert additional input pre-processing sub-graphs into the converted model by using
the `--mean_values`, `--scale_values`, `--layout`, and other parameters described
in [Embedding Preprocessing Computation](prepare_model/Additional_Optimizations.md).
in the [Embedding Preprocessing Computation](prepare_model/Additional_Optimizations.md) article.
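For illustration, the two most common adjustments look as follows; the model file and node names (`model.onnx`, `data`, `prob`) are placeholders, not taken from any real model:
```sh
# Override the input shape of a hypothetical ONNX model
# (batch of 1, three channels, 224x224 spatial size).
mo --input_model model.onnx --input_shape [1,3,224,224]

# Cut the model: make the hypothetical node "prob" the new output,
# dropping everything after it (for example, a training-only sub-graph).
mo --input_model model.onnx --input data --output prob
```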
Model Optimizer's compression parameter `--data_type` allows to generate IR of the `FP16` data type. For more details,
please refer to [Compression of a Model to FP16](prepare_model/FP16_Compression.md).
The `--data_type` compression parameter in Model Optimizer allows generating IR of the `FP16` data type. For more details, refer to the [Compression of a Model to FP16](prepare_model/FP16_Compression.md) guide.
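A minimal sketch of the compression flag in use, with a placeholder model name:
```sh
# Convert a model and store all floating-point weights as FP16 in the resulting IR.
mo --input_model model.onnx --data_type FP16
```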
To get the full list of conversion parameters available in Model Optimizer, run the following command:
@@ -81,54 +77,51 @@ mo --help
## Examples of CLI Commands
Below is a list of separate examples for different frameworks and Model Optimizer parameters.
Below is a list of separate examples for different frameworks and Model Optimizer parameters:
1. Launch Model Optimizer for a TensorFlow MobileNet model in the binary protobuf format.
1. Launch Model Optimizer for a TensorFlow MobileNet model in the binary protobuf format:
```sh
mo --input_model MobileNet.pb
```
Launch Model Optimizer for a TensorFlow BERT model in the SavedModel format, with three inputs. Explicitly specify input shapes
where the batch size and the sequence length equal 2 and 30 respectively.
Launch Model Optimizer for a TensorFlow BERT model in the SavedModel format with three inputs. Specify input shapes explicitly
where the batch size and the sequence length equal 2 and 30 respectively:
```sh
mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30]
```
For more information on TensorFlow model conversion,
refer to [Converting a TensorFlow Model](prepare_model/convert_model/Convert_Model_From_TensorFlow.md).
For more information, refer to the [Converting a TensorFlow Model](prepare_model/convert_model/Convert_Model_From_TensorFlow.md) guide.
2. Launch Model Optimizer for an ONNX OCR model and explicitly specify new output.
2. Launch Model Optimizer for an ONNX OCR model and specify new output explicitly:
```sh
mo --input_model ocr.onnx --output probabilities
```
For more information on ONNX model conversion,
please refer to [Converting an ONNX Model](prepare_model/convert_model/Convert_Model_From_ONNX.md).
Note that PyTorch models must be exported to the ONNX format before its conversion into IR.
More details can be found in [Converting a PyTorch Model](prepare_model/convert_model/Convert_Model_From_PyTorch.md).
For more information, refer to the [Converting an ONNX Model](prepare_model/convert_model/Convert_Model_From_ONNX.md) guide.
3. Launch Model Optimizer for a PaddlePaddle UNet model and apply mean-scale normalization to the input.
> **NOTE**: PyTorch models must be exported to the ONNX format before conversion into IR. More information can be found in [Converting a PyTorch Model](prepare_model/convert_model/Convert_Model_From_PyTorch.md).
3. Launch Model Optimizer for a PaddlePaddle UNet model and apply mean-scale normalization to the input:
```sh
mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
```
For more information on PaddlePaddle model conversion, please refer to
[Converting a PaddlePaddle Model](prepare_model/convert_model/Convert_Model_From_Paddle.md).
For more information, refer to the [Converting a PaddlePaddle Model](prepare_model/convert_model/Convert_Model_From_Paddle.md) guide.
4. Launch Model Optimizer for an MXNet SSD Inception V3 model and specify first-channel layout for the input.
4. Launch Model Optimizer for an Apache MXNet SSD Inception V3 model and specify first-channel layout for the input:
```sh
mo --input_model ssd_inception_v3-0000.params --layout NCHW
```
For more information on MXNet models conversion, please refer to [Converting an MXNet Model](prepare_model/convert_model/Convert_Model_From_MxNet.md).
For more information, refer to the [Converting an Apache MXNet Model](prepare_model/convert_model/Convert_Model_From_MxNet.md) guide.
5. Launch Model Optimizer for a Caffe AlexNet model with input channels in the RGB format, which needs to be reversed.
5. Launch Model Optimizer for a Caffe AlexNet model with input channels in the RGB format which needs to be reversed:
```sh
mo --input_model alexnet.caffemodel --reverse_input_channels
```
For more information on Caffe model conversion, please refer to [Converting a Caffe Model](prepare_model/convert_model/Convert_Model_From_Caffe.md).
For more information, refer to the [Converting a Caffe Model](prepare_model/convert_model/Convert_Model_From_Caffe.md) guide.
6. Launch Model Optimizer for a Kaldi LibriSpeech nnet2 model.
6. Launch Model Optimizer for a Kaldi LibriSpeech nnet2 model:
```sh
mo --input_model librispeech_nnet2.mdl --input_shape [1,140]
```
For more information on Kaldi model conversion,
refer to [Converting a Kaldi Model](prepare_model/convert_model/Convert_Model_From_Kaldi.md).
For more information, refer to the [Converting a Kaldi Model](prepare_model/convert_model/Convert_Model_From_Kaldi.md) guide.
To get conversion recipes for specific TensorFlow, ONNX, PyTorch, MXNet, and Kaldi models,
refer to [Model Conversion Tutorials](prepare_model/convert_model/Convert_Model_Tutorials.md).
- To get conversion recipes for specific TensorFlow, ONNX, PyTorch, Apache MXNet, and Kaldi models,
refer to the [Model Conversion Tutorials](prepare_model/convert_model/Convert_Model_Tutorials.md).
- For more information about IR, see [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](IR_and_opsets.md).

View File

@@ -1,46 +1,44 @@
# Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ {#openvino_docs_MO_DG_IR_and_opsets}
This document provides essential information on the format used for representation of deep learning models in OpenVINO toolkit and supported operation sets.
This article provides essential information on the format used for representation of deep learning models in OpenVINO toolkit and supported operation sets.
## Overview of Artificial Neural Networks Representation
This paragraph provides an overview of how a deep learning network is represented in various deep learning frameworks.
A deep learning network is usually represented as a directed graph describing the flow of data from the network input data to the inference results.
Input data can be represented as a photograph, video, audio information or some preprocessed data that represent object from the target area of interest in a convenient way.
Input data can be in the form of images, video, audio, or preprocessed information representing objects from the target area of interest.
Here is an illustration of a small graph representing a model that consists of a single Convolutional layer and activation function:
![](img/small_IR_graph_demonstration.png)
Vertices in the graph represent layers or operation instances, like convolution, pooling or element-wise operations with tensors.
Layer and operation terms are used interchangeably along the OpenVINO documentation and define how input data is processed to produce output data for a node in a graph.
Vertices in the graph represent layers or operation instances such as convolution, pooling, and element-wise operations with tensors.
The terms "layer" and "operation" are used interchangeably within the OpenVINO documentation and define how input data is processed to produce output data for a node in a graph.
An operation node in a graph may consume data at one or multiple input ports.
For example, element-wise addition operation has two input ports which accepts tensors that are added together.
Some operations don't have any input ports, for example Const operation which knowns the data to be produced without any input.
An edge between operations represent data flow or data dependency implied from one operation node to another operation node.
For example, an element-wise addition operation has two input ports which accept tensors that are to be summed.
Some operations do not have any input ports, for example the `Const` operation, which knows the data to be produced without any input.
An edge between operations represents data flow or data dependency implied from one operation node to another.
Each operation produces data on one or multiple output ports. For example, convolution produces output tensor with activations at a single output port. Split operation usually has multiple output ports each producing part of an input tensor.
Each operation produces data on one or multiple output ports. For example, convolution produces output tensor with activations at a single output port. Split operation usually has multiple output ports, each producing part of an input tensor.
Depending on a deep learning framework, the graph can also contain extra nodes that explicitly represent tensors between operations.
In such representations, operation nodes are not connected directly to each other, rather using data nodes as intermediate stops for data flow.
In such representations, operation nodes are not connected to each other directly. Rather, they use data nodes as intermediate stops for data flow.
If data nodes are not used, the produced data is associated with an output port of a corresponding operation node that produces the data.
A set of various operations used in a network is usually fixed for each deep learning framework.
It determines the expressiveness and level of representation available in that framework.
It may happen that a network that can be represented in one framework is hard or impossible to be represented in another one or should use significantly different graph because operation sets used in those two frameworks do not match.
Sometimes, a network that can be represented in one framework is hard or impossible to represent in another, or requires a significantly different graph, because operation sets used in those two frameworks do not match.
## Intermediate Representation Used in OpenVINO
OpenVINO toolkit introduces its own format of graph representation and its own operation set.
A graph is represented with two files: an XML file and a binary file.
This representation is commonly referred to as the *Intermediate Representation* or *IR*.
The XML file describes a network topology using a `<layer>` tag for an operation node and an `<edge>` tag for a data-flow connection.
Each operation has a fixed number of attributes that define operation flavor used for a node.
For example, `Convolution` operation has such attributes as `dilation`, `stride`, `pads_begin` and `pads_end`.
For example, the `Convolution` operation has such attributes as `dilation`, `stride`, `pads_begin`, and `pads_end`.
The XML file doesn't have big constant values, like convolution weights.
The XML file does not have big constant values like convolution weights.
Instead, it refers to a part of the accompanying binary file that stores such values in a binary format.
Here is an example of a small IR XML file that corresponds to a graph from the previous section:
@@ -151,48 +149,48 @@ Here is an example of a small IR XML file that corresponds to a graph from the p
</net>
```
The IR doesn't use explicit data nodes described in the previous section.
The IR does not use explicit data nodes described in the previous section.
In contrast, properties of data such as tensor dimensions and their data types are described as properties of input and output ports of operations.
## Operation Set
## Operation Sets
Operations in the OpenVINO Operation Set are selected based on capabilities of supported deep learning frameworks and hardware capabilities of the target inference device.
Operations in OpenVINO Operation Sets are selected based on capabilities of supported deep learning frameworks and hardware capabilities of the target inference device.
It consists of several groups of operations:
* Conventional deep learning layers like Convolution, MaxPool, MatMul (also known as FullyConnected).
* Conventional deep learning layers such as `Convolution`, `MaxPool`, and `MatMul` (also known as `FullyConnected`).
* Various activation functions, e.g. ReLU, Tanh, PReLU.
* Various activation functions such as `ReLU`, `Tanh`, and `PReLU`.
* Generic element-wise arithmetic tensor operations like Add, Subtract, Multiply.
* Generic element-wise arithmetic tensor operations such as `Add`, `Subtract`, and `Multiply`.
* Comparison operations that compare two numeric tensors and produce boolean tensors, for example Less, Equeal, Greater.
* Comparison operations that compare two numeric tensors and produce boolean tensors, for example, `Less`, `Equal`, `Greater`.
* Logical operations that are dealing with boolean tensors, like And, Xor, Not.
* Logical operations that are dealing with boolean tensors, for example, `And`, `Xor`, `Not`.
* Data movement operations which are dealing with parts of tensors: Concat, Split, StridedSlice, Select.
* Data movement operations which are dealing with parts of tensors, for example, `Concat`, `Split`, `StridedSlice`, `Select`.
* Specialized operations that implement complex algorithms dedicated for models of specific type: DetectionOutput, RegionYolo, PriorBox.
* Specialized operations that implement complex algorithms dedicated for models of specific type, for example, `DetectionOutput`, `RegionYolo`, `PriorBox`.
Refer to the complete description of the supported operation sets in the [Available Operation Sets](../ops/opset.md) document.
For more information, refer to the complete description of the supported operation sets in the [Available Operation Sets](../ops/opset.md) article.
## IR Versions vs Operation Set Versions
The expressiveness of operations in OpenVINO is highly dependent on the supported frameworks and target hardware capabilities.
As the frameworks and hardware capabilities grow over time, the operation set is constantly evolving to support new models.
To maintain backward compatibility and growing demands, both IR format and operation set have versioning.
Version of IR specifies the rules which are used to read the XML and binary files that represent a model. It defines an XML schema and compatible operation set that can be used to describe operations.
Historically, there are two major IR version epochs.
Historically, there are two major IR version epochs:
1. The older one includes IR versions from version 1 to version 7 without versioning of the operation set. During that epoch, the operation set has been growing evolutionally accumulating more layer types and extending existing layer semantics. Changing of the operation set for those versions meant increasing of IR version.
2. OpenVINO 2020.1 is the starting point of the next epoch. With IR version 10 introduced in OpenVINO 2020.1, the versioning of the operation set is tracked separately from the IR versioning. Also, the operation set was significantly reworked as a result of nGraph integration into OpenVINO.
The first supported operation set in the new epoch is `opset1`.
The number after `opset` is going to be increased each time when new operations are added or old operations deleted at the release cadence.
The number after `opset` is going to be increased each time new operations are added or old operations deleted at the release cadence.
The operations from the new epoch cover more TensorFlow* and ONNX* operators in a form that is closer to the original operation semantics from the frameworks in comparison to the operation set used in former versions of IR (7 and lower).
The operations from the new epoch cover more TensorFlow and ONNX operators in a form that is closer to the original operation semantics from the frameworks in comparison to the operation set used in former versions of IR (7 and lower).
The name of the opset is specified for each operation in IR.
The IR version is specified once per whole IR.
@@ -215,31 +213,30 @@ Here is an example from the IR snippet:
...
```
The attributes `type="Parameter"` and `version="opset1"` in the example above mean "use that version of operation `Parameter` that is included into the operation set `opset1`".
The `type="Parameter"` and `version="opset1"` attributes in the example above mean "use that version of the `Parameter` operation that is included in the `opset1` operation set."
When a new operation set is introduced, the significant part of the operations remains unchanged and it is just aliased from the previous operation set within a new one.
The goal of operation set versions evolution is adding new operations, and probably changing of small fraction of existing operations (fixing bugs and extending semantics).
However such changes affect only new versions of operations from a new operation set, while old operations are used by specifying an appropriate `version`.
When the old `version` is specified, the behavior is kept unchanged from that specified version to provide the backward compatibility with older IRs.
When a new operation set is introduced, most of the operations remain unchanged and are just aliased from the previous operation set within a new one.
The goal of operation set version evolution is to add new operations, and probably change small fractions of existing operations (fixing bugs and extending semantics).
However, such changes affect only new versions of operations from a new operation set, while old operations are used by specifying an appropriate `version`.
When an old `version` is specified, the behavior will be kept unchanged from that specified version to provide backward compatibility with older IRs.
A single `xml` file with IR may contain operations from different opsets.
An operation that is included into several opsets may be referred to with `version` which points to any opset that includes that operation.
For example, the same `Convolution` can be used with `version="opset1"` and `version="opset2"` because both opsets have the same operations `Convolution`.
An operation that is included in several opsets may be referred to with `version` which points to any opset that includes that operation.
For example, the same `Convolution` can be used with `version="opset1"` and `version="opset2"` because both opsets have the same `Convolution` operations.
## How to Read the Specification
## How to Read Opset Specification
The [Available Operation Sets](../ops/opset.md) article describes the opsets and the operations they include.
Each opset specification has a list of links to descriptions of the operations included in that specific opset.
Two or more opsets may refer to the same operation.
That means an operation is kept unchanged from one operation set to another.
Each operation description has a field `Versioned name`.
For example, `ReLU` entry point in [`opset1`](../ops/opset1.md) refers to [`ReLU-1`](../ops/activation/ReLU_1.md) as the versioned name.
And `ReLU` in `opset2` refers to the same `ReLU-1` and both `ReLU` operations are the same operation and it has a single [description](../ops/activation/ReLU_1.md).
So `opset1` and `opset2` share the same operation `ReLU`.
The description of each operation has a `Versioned name` field.
For example, the `ReLU` entry point in [`opset1`](../ops/opset1.md) refers to [`ReLU-1`](../ops/activation/ReLU_1.md) as the versioned name.
Meanwhile, `ReLU` in `opset2` refers to the same `ReLU-1`; both `ReLU` entries denote the same operation with a single [description](../ops/activation/ReLU_1.md), which means that `opset1` and `opset2` share the same `ReLU` operation.
To differentiate versions of the same operation type, like `ReLU`, the suffix `-N` is used in a versioned name of the operation.
`N` usually refers to the first `opsetN` where this version of the operation is introduced.
It is not guaranteed that new operations will be named according to that rule, the naming convention might be changed, but not for old operations which are frozen completely.
To differentiate versions of the same operation type such as `ReLU`, the `-N` suffix is used in a versioned name of the operation.
The `N` suffix usually refers to the first occurrence of `opsetN` where this version of the operation is introduced.
There is no guarantee that new operations will be named according to that rule. The naming convention might be changed, but not for old operations which are frozen completely.

View File

@@ -2,100 +2,101 @@
Input data for inference can be different from the training dataset and requires additional preprocessing before inference.
To accelerate the whole pipeline including preprocessing and inference, Model Optimizer provides special parameters such as `--mean_values`,
`--scale_values`, `--reverse_input_channels`, and `--layout`. Based on these parameters, Model Optimizer generates IR with additionally
inserted sub-graph that performs the defined preprocessing. This preprocessing block can perform mean-scale normalization of input data,
reverting data along channel dimension, and changing the data layout. For more details about these parameters, refer to the paragraphs below.
The same functionality is also available in runtime, please refer to [Overview of Preprocessing API](../../OV_Runtime_UG/preprocessing_overview.md)
for more information.
## When to Specify Layout
`--scale_values`, `--reverse_input_channels`, and `--layout`. Based on these parameters, Model Optimizer generates OpenVINO IR with additionally
inserted sub-graphs to perform the defined preprocessing. This preprocessing block can perform mean-scale normalization of input data,
reverting data along channel dimension, and changing the data layout.
See the following sections for details on the parameters, or the [Overview of Preprocessing API](../../OV_Runtime_UG/preprocessing_overview.md) for the same functionality in OpenVINO Runtime.
You may need to set input layouts, as it is required by some preprocessing, for example, setting a batch,
applying mean or scales, and reversing input channels (BGR<->RGB).
## Specifying Layout
You may need to set input layouts, as it is required by some preprocessing, for example, setting a batch, applying mean or scales, and reversing input channels (BGR<->RGB).
Layout defines the meaning of dimensions in shape and can be specified for both inputs and outputs. Some preprocessing requires to set input layouts, for example, setting a batch, applying mean or scales, and reversing input channels (BGR<->RGB).
Layout defines the meaning of dimensions in shape and can be specified for both inputs and outputs.
For the layout syntax, check the [Layout API overview](../../OV_Runtime_UG/layout_overview.md).
To specify the layout, you can use `--layout` option followed by the layout value.
To specify the layout, you can use the `--layout` option followed by the layout value.
For example, for Tensorflow\* `nasnet_large` model that was exported to ONNX format and thus has input with `NHWC` layout:
For example, the following command specifies the `NHWC` layout for a Tensorflow `nasnet_large` model that was exported to the ONNX format:
```
mo --input_model tf_nasnet_large.onnx --layout nhwc
```
Additionally, if a model has more than one input or needs both input and output layouts specified,
you need to provide the name of each input or output to which you apply the layout.
Additionally, if a model has more than one input or needs both input and output layouts specified, you need to provide the name of each input or output to apply the layout.
For example, for ONNX\* `Yolo v3 Tiny` model that has first input `input_1` in `NCHW` layout and second input `image_shape`
with 2 dimensions: batch and size of the image which can be expressed as `N?` layout:
For example, the following command specifies the layout for an ONNX `Yolo v3 Tiny` model with its first input `input_1` in `NCHW` layout and second input `image_shape` having two dimensions: batch and size of the image expressed as the `N?` layout:
```
mo --input_model yolov3-tiny.onnx --layout input_1(nchw),image_shape(n?)
```
## How to Change Layout of a Model Inputs and Outputs
## Changing Model Layout
Changing the model layout may be necessary if it differs from the one presented by input data.
To change the layout, you can use either `--layout` or `--source_layout` with `--target_layout`.
Use either `--layout` or `--source_layout` with `--target_layout` to change the layout.
For example, for the same `nasnet_large` that were mentioned previously we may want to provide data in `NCHW` layout:
For example, for the same `nasnet_large` model mentioned previously, you can use the following commands to provide data in the `NCHW` layout:
```
mo --input_model tf_nasnet_large.onnx --source_layout nhwc --target_layout nchw
mo --input_model tf_nasnet_large.onnx --layout "nhwc->nchw"
```
Again, if a model has more than one input or needs both input and output layouts specified, you need to provide the name of each input or output to which you apply the layout.
Again, if a model has more than one input or needs both input and output layouts specified, you need to provide the name of each input or output to apply the layout.
For example, to provide data in the `NHWC` layout for the `Yolo v3 Tiny` model mentioned earlier:
For example, to provide data in the `NHWC` layout for the `Yolo v3 Tiny` model mentioned earlier, use the following commands:
```
mo --input_model yolov3-tiny.onnx --source_layout "input_1(nchw),image_shape(n?)" --target_layout "input_1(nhwc)"
mo --input_model yolov3-tiny.onnx --layout "input_1(nchw->nhwc),image_shape(n?)"
```
## When to Specify Mean and Scale Values
Usually neural network models are trained with the normalized input data. This means that the input data values are converted to be in a specific range,
for example, `[0, 1]` or `[-1, 1]`. Sometimes the mean values (mean images) are subtracted from the input data values as part of the pre-processing.
There are two cases of how the input data pre-processing is implemented.
* The input pre-processing operations are a part of a model. In this case, the application does not pre-process the input data as a separate step: everything is embedded into the model itself.
* The input pre-processing operations are not a part of a model and the pre-processing is performed within the application which feeds the model with input data.
## Specifying Mean and Scale Values
Neural network models are usually trained with the normalized input data. This means that the input data values are converted to be in a specific range,
for example, `[0, 1]` or `[-1, 1]`. Sometimes, the mean values (mean images) are subtracted from the input data values as part of the preprocessing.
In the first case, the Model Optimizer generates the IR with required pre-processing operations and no `mean` and `scale` parameters are required.
There are two cases of how the input data preprocessing is implemented.
* The input preprocessing operations are a part of a model.
In this case, the application does not perform a separate preprocessing step: everything is embedded into the model itself. Model Optimizer will generate the OpenVINO IR format with required preprocessing operations, and no `mean` and `scale` parameters are required.
* The input preprocessing operations are not a part of a model and the preprocessing is performed within the application which feeds the model with input data.
In this case, information about mean/scale values should be provided to Model Optimizer to embed it to the generated OpenVINO IR format.
In the second case, information about mean/scale values should be provided to the Model Optimizer to embed it to the generated IR.
Model Optimizer provides command-line parameters to specify the values: `--mean_values`, `--scale_values`, `--scale`.
Using these parameters, Model Optimizer embeds the corresponding preprocessing block for mean-value normalization of the input data
and optimizes this block so that the preprocessing takes negligible time for inference.
For example, run the Model Optimizer for the PaddlePaddle* UNet model and apply mean-scale normalization to the input data.
For example, the following command runs Model Optimizer for the PaddlePaddle UNet model and applies mean-scale normalization to the input data:
```sh
mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
```
## When to Reverse Input Channels <a name="when_to_reverse_input_channels"></a>
Sometimes input images for your application can be of the RGB (BGR) format and the model is trained on images of the BGR (RGB) format,
the opposite color channel order. In this case, it is important to preprocess the input images by reverting the color channels before inference.
To embed this preprocessing step into IR, Model Optimizer provides the `--reverse_input_channels` command-line parameter to shuffle the color channels.
## Reversing Input Channels <a name="when_to_reverse_input_channels"></a>
Sometimes, input images for your application can be of the RGB (or BGR) format and the model is trained on images of the BGR (or RGB) format,
which is in the opposite order of color channels. In this case, it is important to preprocess the input images by reverting the color channels before inference.
The `--reverse_input_channels` parameter applies to an input of the model in two cases.
To embed this preprocessing step into OpenVINO IR, Model Optimizer provides the `--reverse_input_channels` command-line parameter to shuffle the color channels.
The `--reverse_input_channels` parameter can be used to preprocess the model input in the following cases:
* Only one dimension in the input shape has a size equal to 3.
* One dimension has an undefined size and is marked as `C` channel using `layout` parameters.
Using the `--reverse_input_channels` parameter, Model Optimizer embeds the corresponding preprocessing block for reverting
the input data along channel dimension and optimizes this block so that the preprocessing takes negligible time for inference.
the input data along channel dimension and optimizes this block so that the preprocessing takes only negligible time for inference.
For example, launch the Model Optimizer for the TensorFlow* AlexNet model and embed `reverse_input_channel` preprocessing block into IR.
For example, the following command launches Model Optimizer for the TensorFlow AlexNet model and embeds the `reverse_input_channel` preprocessing block into OpenVINO IR:
```sh
mo --input_model alexnet.pb --reverse_input_channels
```
> **NOTE**: If both mean and scale values are specified, the mean is subtracted first and then the scale is applied regardless of the order of options
in the command line. Input values are *divided* by the scale value(s). If also `--reverse_input_channels` option is used, the `reverse_input_channels`
in the command line. Input values are *divided* by the scale value(s). If the `--reverse_input_channels` option is also used, `reverse_input_channels`
will be applied first, then `mean` and after that `scale`. The data flow in the model looks as follows:
`Parameter -> ReverseInputChannels -> Mean apply-> Scale apply -> the original body of the model`.
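As a sketch of how these options combine (the model path and values are illustrative only), the following command produces an IR whose embedded preprocessing follows exactly the order described in the note above:
```sh
# Applied in this order regardless of option order on the command line:
# Parameter -> ReverseInputChannels -> subtract mean -> divide by scale -> model body
mo --input_model model.pb \
   --reverse_input_channels \
   --mean_values [123,117,104] \
   --scale 255
```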
## See Also
## Additional Resources
* [Overview of Preprocessing API](../../OV_Runtime_UG/preprocessing_overview.md)

View File

@@ -1,4 +1,4 @@
# Compression of a Model to FP16 {#openvino_docs_MO_DG_FP16_Compression}
# Compressing a Model to FP16 {#openvino_docs_MO_DG_FP16_Compression}
Model Optimizer can convert all floating-point weights to `FP16` data type. The resulting IR is called
compressed `FP16` model.
@@ -10,11 +10,12 @@ To compress the model, use the `--data_type` option:
```
> **NOTE**: Using `--data_type FP32` will give no result and will not force `FP32`
> precision in the model. If the model was `FP16` it will have `FP16` precision in IR as well.
> precision in the model. If the model was `FP16`, it will have `FP16` precision in IR as well.
The resulting model will occupy about twice as less space in the file system, but it may have some accuracy drop,
although for the majority of models accuracy degradation is negligible. For details on how plugins handle
compressed `FP16` models refer to [Working with devices](../../OV_Runtime_UG/supported_plugins/Device_Plugins.md) page.
The resulting model will occupy about twice as less space in the file system, but it may have some accuracy drop.
The resulting model will occupy about half of the previous space in the file system, but lose some of its accuracy.
For most models, the accuracy drop is negligible.
For details on how plugins handle compressed `FP16` models, see [Working with devices](../../OV_Runtime_UG/supported_plugins/Device_Plugins.md).
> **NOTE**: `FP16` compression is sometimes used as initial step for `INT8` quantization, please refer to
> [Post-training optimization](../../../tools/pot/docs/Introduction.md) for more information about that.
> **NOTE**: `FP16` compression is sometimes used as the initial step for `INT8` quantization.
> Refer to the [Post-training optimization](../../../tools/pot/docs/Introduction.md) guide for more information about that.

View File

@@ -1,28 +1,28 @@
# Getting Performance Numbers {#openvino_docs_MO_DG_Getting_Performance_Numbers}
This guide explains what to pay attention to and how to use benchmark_app to get performance numbers. It also explains how the performance numbers are reflected through internal inference performance counters and execution graphs. The last section includes information on using ITT and Intel® VTune™ Profiler to get performance insights.
## Tip 1. Measure the Proper Set of Operations
## Tip 1: Select Proper Set of Operations to Measure
When evaluating performance of your model with the OpenVINO Runtime, you must measure the proper set of operations. To do so, consider the following tips:
When evaluating the performance of a model with OpenVINO Runtime, it is required to measure a proper set of operations. Remember the following tips:
- Avoid including one-time costs such as model loading.
- Avoid including one-time costs like model loading.
- Track operations that occur outside OpenVINO Runtime (such as video decoding) separately.
- Track separately the operations that happen outside the OpenVINO Runtime, like video decoding.
> **NOTE**: Some image pre-processing can be baked into OpenVINO IR and accelerated accordingly. For more information, refer to [Embedding the Pre-processing](Additional_Optimizations.md) and [General Runtime Optimizations](../../optimization_guide/dldt_deployment_optimization_common).
> **NOTE**: Some image pre-processing can be baked into the IR and accelerated accordingly. For more information, refer to [Embedding the Preprocessing](Additional_Optimizations.md). Also consider [Runtime Optimizations of the Preprocessing](../../optimization_guide/dldt_deployment_optimization_common).
## Tip 2: Try to Get Credible Data
## Tip 2. Getting Credible Performance Numbers
You need to build your performance conclusions on reproducible data. Do the performance measurements with a large number of invocations of the same routine. Since the first iteration is almost always significantly slower than the subsequent ones, you can use an aggregated value for the execution time for final projections:
Performance conclusions should be built upon reproducible data. As for the performance measurements, they should be done with a large number of invocations of the same routine. Since the first iteration is almost always significantly slower than the subsequent ones, an aggregated value can be used for the execution time for final projections:
- If the warm-up run does not help or execution time still varies, you can try running a large number of iterations and then average the results.
- For time values that range too much, consider geomean.
- Beware of the throttling and other power oddities. A device can exist in one of several different power states. When optimizing your model, for better performance data reproducibility consider fixing the device frequency. However the end to end (application) benchmarking should be also performed under real operational conditions.
- If the time values range too much, consider geomean.
- Be aware of the throttling and other power oddities. A device can exist in one of several different power states. When optimizing your model, consider fixing the device frequency for better performance data reproducibility. However, the end-to-end (application) benchmarking should also be performed under real operational conditions.
## Tip 3. Measure Reference Performance Numbers with OpenVINO's benchmark_app
## Using benchmark_app to Measure Reference Performance Numbers
To get performance numbers, use the dedicated [Benchmark App](../../../samples/cpp/benchmark_app/README.md) sample which is the best way to produce the performance reference.
It has a lot of device-specific knobs, but the primary usage is as simple as:
To get performance numbers, use the dedicated [OpenVINO Benchmark app](../../../samples/cpp/benchmark_app/README.md) sample, which is the recommended way to produce a performance reference.
It includes a lot of device-specific knobs, but the primary usage is as simple as the following command, which measures the performance of the model on GPU:
```bash
$ ./benchmark_app -d GPU -m <model> -i <input>
```
@@ -33,28 +33,28 @@ $ ./benchmark_app -d CPU -m <model> -i <input>
```
to execute on the CPU instead.
Each of the [OpenVINO supported devices](../../OV_Runtime_UG/supported_plugins/Supported_Devices.md) offers performance settings that have command-line equivalents in the [Benchmark App](../../../samples/cpp/benchmark_app/README.md).
While these settings provide really low-level control and allow to leverage the optimal model performance on the _specific_ device, we suggest always starting the performance evaluation with the [OpenVINO High-Level Performance Hints](../../OV_Runtime_UG/performance_hints.md) first:
Each of the [OpenVINO supported devices](../../OV_Runtime_UG/supported_plugins/Supported_Devices.md) offers performance settings that contain command-line equivalents in the [Benchmark app](../../../samples/cpp/benchmark_app/README.md).
While these settings provide really low-level control and allow leveraging the optimal model performance on the _specific_ device, it is recommended to always start the performance evaluation with the [OpenVINO High-Level Performance Hints](../../OV_Runtime_UG/performance_hints.md) first:
- benchmark_app **-hint tput** -d 'device' -m 'path to your model'
- benchmark_app **-hint latency** -d 'device' -m 'path to your model'
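For instance, with a hypothetical IR file `model.xml`, the two hint modes look as follows:
```sh
# Throughput-oriented run on CPU.
benchmark_app -hint tput -d CPU -m model.xml

# Latency-oriented run on GPU.
benchmark_app -hint latency -d GPU -m model.xml
```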
## Comparing Performance with Native/Framework Code
## Notes for Comparing Performance with Native/Framework Code
When comparing the OpenVINO Runtime performance with the framework or another reference code, make sure that both versions are as similar as possible:
- Wrap exactly the inference execution (refer to the [Benchmark App](../../../samples/cpp/benchmark_app/README.md) for examples).
- Wrap the exact inference execution (refer to the [Benchmark app](../../../samples/cpp/benchmark_app/README.md) for examples).
- Do not include model loading time.
- Ensure the inputs are identical for the OpenVINO Runtime and the framework. For example, beware of random values that can be used to populate the inputs.
- Consider [Image Pre-processing and Conversion](../../OV_Runtime_UG/preprocessing_overview.md), while any user-side pre-processing should be tracked separately.
- When applicable, leverage the [Dynamic Shapes support](../../OV_Runtime_UG/ov_dynamic_shapes.md)
- Ensure that the inputs are identical for OpenVINO Runtime and the framework. For example, watch out for random values that can be used to populate the inputs.
- In situations when any user-side pre-processing should be tracked separately, consider [image pre-processing and conversion](../../OV_Runtime_UG/preprocessing_overview.md).
- When applicable, leverage the [Dynamic Shapes support](../../OV_Runtime_UG/ov_dynamic_shapes.md).
- If possible, demand the same accuracy. For example, TensorFlow allows `FP16` execution, so when comparing to that, make sure to test the OpenVINO Runtime with the `FP16` as well.
## Internal Inference Performance Counters and Execution Graphs <a name="performance-counters"></a>
Further, finer-grained insights into inference performance breakdown can be achieved with device-specific performance counters and/or execution graphs.
Both [C++](../../../samples/cpp/benchmark_app/README.md) and [Python](../../../tools/benchmark_tool/README.md) versions of the `benchmark_app` supports a `-pc` command-line parameter that outputs internal execution breakdown.
## Data from Internal Inference Performance Counters and Execution Graphs <a name="performance-counters"></a>
More detailed insights into inference performance breakdown can be achieved with device-specific performance counters and/or execution graphs.
Both [C++](../../../samples/cpp/benchmark_app/README.md) and [Python](../../../tools/benchmark_tool/README.md) versions of the `benchmark_app` support a `-pc` command-line parameter that outputs internal execution breakdown.
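For example, a run of the following kind prints the per-layer breakdown after the benchmark finishes; the model path is a placeholder:
```sh
# Benchmark on CPU and report per-layer performance counters.
benchmark_app -m model.xml -d CPU -pc
```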
For example, the table shown below is a part of the performance counters for quantized [TensorFlow implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on [CPU Plugin](../../OV_Runtime_UG/supported_plugins/CPU.md).
Keep in mind that since the device is CPU, the `realTime` wall-clock time and the `cpu` time of the layers are the same. Information about layer precision is also stored in the performance counters.
| layerName | execStatus | layerType | execType | realTime (ms) | cpuTime (ms) |
| --------------------------------------------------------- | ---------- | ------------ | -------------------- | ------------- | ------------ |
| resnet\_model/add\_5/fq\_input\_1 | NOT\_RUN | FakeQuantize | undef | 0 | 0 |
The `execStatus` column of the table includes the following possible values:
- `EXECUTED` - the layer was executed by a standalone primitive.
- `NOT_RUN` - the layer was not executed by a standalone primitive or was fused with another operation and executed in another layer primitive.
The `execType` column of the table includes inference primitives with specific suffixes. The layers could have the following marks:
* The `I8` suffix is for layers that had 8-bit data type input and were computed in 8-bit precision.
* The `FP32` suffix is for layers computed in 32-bit precision.
All `Convolution` layers are executed in `int8` precision. The rest of the layers are fused into Convolutions using post-operation optimization, as described in [CPU Device](../../OV_Runtime_UG/supported_plugins/CPU.md).
This output contains layer names (as seen in OpenVINO IR), layer types, and execution statistics.
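The same counters can also be read programmatically. Below is a minimal Python sketch, assuming a local `model.xml` and an illustrative input shape; the `PERF_COUNT` property plays the role of the `-pc` switch:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
# PERF_COUNT enables per-layer counters, the programmatic equivalent of benchmark_app -pc
compiled = core.compile_model(core.read_model("model.xml"), "CPU", {"PERF_COUNT": "YES"})
request = compiled.create_infer_request()
request.infer({0: np.random.rand(1, 3, 224, 224).astype(np.float32)})

for info in request.profiling_info:
    print(info.node_name, info.status, info.exec_type, info.real_time, info.cpu_time)
```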
Both `benchmark_app` versions also support the `exec_graph_path` command-line option. It instructs OpenVINO to output the same per-layer execution statistics, but in the form of a plugin-specific [Netron-viewable](https://netron.app/) graph written to the specified file.
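A minimal Python sketch of dumping this runtime graph (the file names are placeholders) could look as follows:

```python
from openvino.runtime import Core, serialize

core = Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")
# The runtime model carries the per-layer execution information as node attributes
serialize(compiled.get_runtime_model(), "exec_graph.xml", "exec_graph.bin")
```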
Note that on some devices, collecting execution graphs/counters may introduce significant overhead.
Especially when performance-debugging the [latency](../../optimization_guide/dldt_deployment_optimization_latency.md), note that the counters do not reflect the time spent in the `plugin/device/driver/etc` queues. If the sum of the counters is too different from the latency of an inference request, consider testing with fewer inference requests. For example, running a single [OpenVINO stream](../../optimization_guide/dldt_deployment_optimization_tput.md) with multiple requests would produce nearly identical counters as running a single inference request, while the actual latency can be quite different.
Lastly, the performance statistics with both performance counters and execution graphs are averaged, so such data for the [inputs of dynamic shapes](../../OV_Runtime_UG/ov_dynamic_shapes.md) should be measured carefully, preferably by isolating the specific shape and executing multiple times in a loop, to gather reliable data.
## Using ITT to Get Performance Insights
In general, OpenVINO and its individual plugins are heavily instrumented with Intel® Instrumentation and Tracing Technology (ITT). Therefore, you can also compile OpenVINO from the source code with ITT enabled and use tools like [Intel® VTune™ Profiler](https://software.intel.com/en-us/vtune) to get detailed inference performance breakdown and additional insights in the application-level performance on the timeline view.


# Model Optimization Techniques {#openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques}
Optimization offers methods to accelerate inference with convolutional neural networks (CNNs) that do not require model retraining.
* * *
## Linear Operations Fusing
Many convolutional neural networks include `BatchNormalization` and `ScaleShift` layers (for example, ResNet, Inception) that can be represented as a sequence of linear operations: additions and multiplications. For example, a ScaleShift layer can be represented as a Mul → Add sequence. These layers can be fused into previous `Convolution` or `FullyConnected` layers, except when a Convolution comes after an Add operation (due to Convolution paddings).
### Usage
In the Model Optimizer, this optimization is turned on by default. To disable it, pass the `--disable_fusing` parameter to the Model Optimizer.
### Optimization Description
This optimization method consists of three stages:
1. `BatchNormalization` and `ScaleShift` decomposition: in this stage, the `BatchNormalization` layer is decomposed into a `Mul → Add → Mul → Add` sequence, and the `ScaleShift` layer is decomposed into a `Mul → Add` sequence.
2. Linear operations merge: in this stage, the `Mul` and `Add` operations are merged into a single `Mul → Add` instance.
For example, if there is a `BatchNormalization → ScaleShift` sequence in the topology, it is replaced with `Mul → Add` in the first stage. In the next stage, the latter is replaced with a `ScaleShift` layer if there is no available `Convolution` or `FullyConnected` layer to fuse into.
3. Linear operations fusion: in this stage, the tool fuses `Mul` and `Add` operations into `Convolution` or `FullyConnected` layers. Note that it searches for `Convolution` and `FullyConnected` layers both backward and forward in the graph (except for the `Add` operation, which cannot be fused into a `Convolution` layer in the forward direction). A sketch of the underlying arithmetic follows the list.
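To make the arithmetic of stage 3 concrete, here is a minimal NumPy sketch of folding a per-channel `Mul → Add` pair (a decomposed `BatchNormalization`) into a preceding convolution; the function name and shapes are illustrative, not the Model Optimizer API:

```python
import numpy as np

def fold_mul_add_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNormalization (decomposed into Mul -> Add) into a preceding convolution.

    W: (C_out, C_in, kH, kW) convolution weights, b: (C_out,) biases;
    gamma, beta, mean, var: (C_out,) BatchNormalization parameters.
    """
    scale = gamma / np.sqrt(var + eps)          # the Mul part, one factor per output channel
    W_folded = W * scale[:, None, None, None]   # scaling each filter scales the conv output
    b_folded = (b - mean) * scale + beta        # the Add part absorbed into the bias
    return W_folded, b_folded
```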
### Usage Examples
The picture below shows a part of the Caffe ResNet269 topology in which `BatchNorm` and `ScaleShift` layers are fused into `Convolution` layers.
![Caffe ResNet269 block before and after optimization generated with Netscope*](../img/optimizations/resnet_269.png)
* * *
## ResNet Optimization (Stride Optimization)
ResNet optimization is a specific optimization that applies to Caffe ResNet topologies, such as ResNet50, ResNet101, and ResNet152, as well as to ResNet-based topologies. This optimization is turned on by default and can be disabled with the `--disable_resnet_optimization` key.
### Optimization Description
In the picture below, you can see the original and optimized parts of a Caffe ResNet50 model. The main idea of this optimization is to move a stride that is greater than 1 from Convolution layers with kernel size = 1 to upper Convolution layers. In addition, the Model Optimizer adds a Pooling layer to align the input shape for an Eltwise layer, if it was changed during the optimization.
![ResNet50 blocks (original and optimized) from Netscope](../img/optimizations/resnet_optimization.png)
In this example, the stride from the `res3a_branch1` and `res3a_branch2a` Convolution layers moves to the `res2c_branch2b` Convolution layer. In addition, to align the input shape for `res2c` Eltwise, the optimization inserts the Pooling layer with kernel size = 1 and stride = 2.
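The reason the stride can be moved is that, for a kernel of size 1, striding inside the convolution equals convolving with stride 1 and subsampling afterwards, so the subsampling can be hoisted to an earlier layer. A minimal NumPy sketch of this equivalence (shapes are illustrative):

```python
import numpy as np

def conv1x1(x, w, stride=1):
    """x: (C_in, H, W), w: (C_out, C_in); a 1x1 convolution is a per-pixel matrix multiply."""
    y = np.einsum('oc,chw->ohw', w, x)
    return y[:, ::stride, ::stride]

x = np.random.rand(8, 16, 16)
w = np.random.rand(4, 8)
a = conv1x1(x, w, stride=2)               # stride applied inside the 1x1 convolution
b = conv1x1(x, w, stride=1)[:, ::2, ::2]  # stride applied after it - identical result
assert np.allclose(a, b)
```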
* * *
## Grouped Convolution Fusing
Grouped convolution fusing is a specific optimization that applies to TensorFlow topologies. The main idea of this optimization is to combine the results of the convolutions computed for the `Split` outputs and then recombine them using a `Concat` operation in the same order as they came out of `Split` (see the sketch after the figure below).
![Split→Convolutions→Concat block from TensorBoard*](../img/optimizations/groups.png)
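Functionally, the `Split → Convolutions → Concat` pattern computes a single grouped convolution. A minimal NumPy sketch with illustrative 1x1 kernels (names and shapes are assumptions):

```python
import numpy as np

def split_conv_concat(x, group_weights):
    """x: (C_in, H, W); group_weights: list of (C_out_g, C_in_g) 1x1 kernels, one per group."""
    groups = np.split(x, len(group_weights), axis=0)    # the Split node
    outs = [np.einsum('oc,chw->ohw', w, g)              # one convolution per group
            for w, g in zip(group_weights, groups)]
    return np.concatenate(outs, axis=0)                 # Concat in the original Split order

x = np.random.rand(8, 16, 16)
weights = [np.random.rand(4, 4) for _ in range(2)]      # two groups of 4 input channels each
y = split_conv_concat(x, weights)                       # equivalent to a single grouped 1x1 conv
```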
* * *
## Disabling Fusing
Model Optimizer allows you to disable optimizations for specified nodes via `--finegrain_fusing <node_name1>,<node_name2>,...` (regex is also supported). Using this key, you mark nodes that will not be touched by any optimizations.
### Usage Examples
In the picture below, you can see two visualized Intermediate Representations (IR) of the TensorFlow InceptionV4 topology.
The first one is the original IR produced by the Model Optimizer.
The second one is produced by the Model Optimizer with the `--finegrain_fusing InceptionV4/InceptionV4/Conv2d_1a_3x3/Conv2D` key, where you can see that `Convolution` was not fused with the `Mul1_3752` and `Mul1_4061/Fused_Mul_5096/FusedScaleShift_5987` operations.
![TF InceptionV4 block without/with key --finegrain_fusing (from IR visualizer)](../img/optimizations/inception_v4.png)

#### 1. What does the message "[ ERROR ]: Current caffe.proto does not contain field" mean? <a name="question-1"></a>
Internally, Model Optimizer uses a protobuf library to parse and load Caffe models. This library requires a file grammar and a generated parser. For a Caffe fallback, Model Optimizer uses a Caffe-generated parser for a Caffe-specific `.proto` file (which is usually located in the `src/caffe/proto` directory). Make sure that you install exactly the same version of Caffe (with Python interface) as that was used to create the model.
If you just want to experiment with Model Optimizer and test a Python extension for working with your custom
layers without building Caffe, add the layer description to the `caffe.proto` file and generate a parser for it.
For example, to add the description of the `CustomReshape` layer, which is an artificial layer not present in any `caffe.proto` files:
1. Add the following lines to the `caffe.proto` file:
```shell
package mo_caffe; // To avoid conflict with the system Caffe, it is highly recommended to specify a different package name.
...
message LayerParameter {
// Other layers parameters description.
...
optional CustomReshapeParameter custom_reshape_param = 546; // 546 - ID is any number not present in caffe.proto.
}
// The lines from here to the end of the file are describing contents of this parameter.
message CustomReshapeParameter {
optional BlobShape shape = 1; // Just use the same parameter type as some other Caffe layers.
}
```
2. Generate a new parser:
```shell
cd <SITE_PACKAGES_WITH_INSTALLED_OPENVINO>/openvino/tools/mo/front/caffe/proto
python3 generate_caffe_pb2.py --input_proto <PATH_TO_CUSTOM_CAFFE>/src/caffe/proto/caffe.proto
```
where `PATH_TO_CUSTOM_CAFFE` is the path to the root directory of custom Caffe.
3. Now, Model Optimizer is able to load the model into memory and start working with your extensions if there are any.
However, since your model has custom layers, you must register them as custom. To learn more about it, refer to [Custom Layers in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md).
#### 2. How do I create a bare caffemodel, if I have only prototxt? <a name="question-2"></a>
You need the Caffe Python interface. In this case, do the following:
```shell
python3
import caffe
net = caffe.Net('<PATH_TO_PROTOTXT>/my_net.prototxt', caffe.TEST)
net.save('<PATH_TO_PROTOTXT>/my_net.caffemodel')
```
#### 3. What does the message "[ ERROR ]: Unable to create ports for node with id" mean? <a name="question-3"></a>
Most likely, Model Optimizer does not know how to infer output shapes of some layers in the given topology.
To lessen the scope, compile the list of layers that are custom for Model Optimizer: present in the topology,
absent in the [list of supported layers](Supported_Frameworks_Layers.md) for the target framework. Then, refer to available options in the corresponding section in the [Custom Layers in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md) page.
#### 4. What does the message "Input image of shape is larger than mean image from file" mean? <a name="question-4"></a>
Your model input shapes must be smaller than or equal to the shapes of the mean image file you provide. The idea behind the mean file is to subtract its values from the input image in an element-wise manner. When the mean file is smaller than the input image, there are not enough values to perform element-wise subtraction. Also, make sure you use the mean file that was used during the network training phase. Note that the mean file is dataset-dependent.
#### 5. What does the message "Mean file is empty" mean? <a name="question-5"></a>
Most likely, the mean file specified with the `--mean_file` flag is empty while Model Optimizer is launched. Make sure that this is exactly the required mean file and try to regenerate it from the given dataset if possible.
#### 6. What does the message "Probably mean file has incorrect format" mean? <a name="question-6"></a>
The mean file that you provide for Model Optimizer must be in the `.binaryproto` format. You can check the content using the recommendations from BVLC Caffe ([#290](https://github.com/BVLC/caffe/issues/290)).
#### 7. What does the message "Invalid proto file: there is neither 'layer' nor 'layers' top-level messages" mean? <a name="question-7"></a>
The structure of any Caffe topology is described in the `caffe.proto` file of any Caffe version. For example, the following `.proto` file in Model Optimizer is used by default: `mo/front/caffe/proto/my_caffe.proto`, with the structure:
```
message NetParameter {
// ... some other parameters
  repeated LayerParameter layer = 100;
  // DEPRECATED: use 'layer' instead.
  repeated V1LayerParameter layers = 2;
}
```
This means that any topology should contain layers as top-level structures in `prototxt`.
#### 8. What does the message "Old-style inputs (via 'input_dims') are not supported. Please specify inputs via 'input_shape'" mean? <a name="question-8"></a>
The structure of any Caffe topology is described in the `caffe.proto` file for any Caffe version. For example, the following `.proto` file in Model Optimizer is used by default: `mo/front/caffe/proto/my_caffe.proto`, with the structure:
```sh
message NetParameter {
  // DEPRECATED. See InputParameter. The input blobs to the network.
  repeated string input = 3;
  // DEPRECATED. See InputParameter. The shape of the input blobs.
  repeated BlobShape input_shape = 8;
  // 4D input dimensions - deprecated. Use "input_shape" instead.
  repeated int32 input_dim = 4;
// ... other parameters
}
```
Therefore, the input layer of the provided model must be specified in one of the following styles:
*
```sh
input: "data"
input_dim: 1
input_dim: 3
input_dim: 500
input_dim: 500
```
However, if your model contains more than one input, Model Optimizer is able to convert the model with inputs specified in one of the first three forms in the above list. The 4th form is not supported for multi-input topologies.
#### 9. What does the message "Mean file for topologies with multiple inputs is not supported" mean? <a name="question-9"></a>
Model Optimizer does not support mean file processing for topologies with more than one input. In this case, you need to perform preprocessing of the inputs for a generated Intermediate Representation in OpenVINO Runtime to perform subtraction for every input of your multi-input model. See the [Overview of Preprocessing](../../OV_Runtime_UG/preprocessing_overview.md) for details.
#### 10. What does the message "Cannot load or process mean file: value error" mean? <a name="question-10"></a>
There are multiple reasons why Model Optimizer does not accept the mean file. See FAQs [#4](#question-4), [#5](#question-5), and [#6](#question-6).
#### 11. What does the message "Invalid prototxt file: value error" mean? <a name="question-11"></a>
There are multiple reasons why Model Optimizer does not accept a Caffe topology. See FAQs [#7](#question-7) and [#20](#question-20).
#### 12. What does the message "Error happened while constructing caffe.Net in the Caffe fallback function" mean? <a name="question-12"></a>
Model Optimizer tried to infer a specified layer via the Caffe framework. However, it cannot construct a net using the Caffe Python interface. Make sure that your `caffemodel` and `prototxt` files are correct. To ensure that the problem is not in the `prototxt` file, see FAQ [#2](#question-2).
#### 13. What does the message "Cannot infer shapes due to exception in Caffe" mean? <a name="question-13"></a>
Model Optimizer tried to infer a custom layer via the Caffe framework, but the model could not be inferred using Caffe. This might happen if you try to convert the model with some noise weights and biases, which conflict with layers that have dynamic shapes. You should write your own extension for every custom layer your topology might have. For more details, refer to the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) page.
#### 14. What does the message "Cannot infer shape for node {} because there is no Caffe available. Please register python infer function for op or use Caffe for shape inference" mean? <a name="question-14"></a>
Your model contains a custom layer and you have correctly registered it with the `CustomLayersMapping.xml` file. These steps are required to offload shape inference of the custom layer with the help of the system Caffe. However, Model Optimizer could not import a Caffe package. Make sure that you have built Caffe with a `pycaffe` target and added it to the `PYTHONPATH` environment variable. At the same time, it is highly recommended to avoid dependency on Caffe and write your own Model Optimizer extension for your custom layer. For more information, refer to FAQ [#44](#question-44).
#### 15. What does the message "Framework name can not be deduced from the given options. Use --framework to choose one of Caffe, TensorFlow, MXNet" mean? <a name="question-15"></a>
You have run Model Optimizer without the `--framework caffe|tf|mxnet` flag. Model Optimizer tries to deduce the framework from the extension of the input model file (`.pb` for TensorFlow, `.caffemodel` for Caffe, `.params` for Apache MXNet). Your input model might have a different extension, in which case you need to explicitly set the source framework. For example, use `--framework caffe`.
#### 16. What does the message "Input shape is required to convert MXNet model. Please provide it with --input_shape" mean? <a name="question-16"></a>
Input shape was not provided. It is mandatory for converting an MXNet model to the OpenVINO Intermediate Representation, because MXNet models do not contain information about input shapes. Use the `--input_shape` flag to specify it. For more information about using `--input_shape`, refer to FAQ [#56](#question-56).
#### 17. What does the message "Both --mean_file and mean_values are specified. Specify either mean file or mean values" mean? <a name="question-17"></a>
The `--mean_file` and `--mean_values` options are two ways of specifying preprocessing for the input. However, they cannot be used together, as it would mean double subtraction and lead to ambiguity. Choose one of these options and pass it with the corresponding CLI option.
#### 18. What does the message "Negative value specified for --mean_file_offsets option. Please specify positive integer values in format '(x,y)'" mean? <a name="question-18"></a>
You might have specified negative values with `--mean_file_offsets`. Only positive integer values in format '(x,y)' are supported.
#### 19. What does the message "Both --scale and --scale_values are defined. Specify either scale factor or scale values per input channels" mean? <a name="question-19"></a>
The `--scale` option sets a scaling factor for all channels, while `--scale_values` sets a scaling factor per each channel. Using both of them simultaneously produces ambiguity, so you must use only one of them. For more information, refer to the **Using Framework-Agnostic Conversion Parameters** section: for <a href="ConvertFromCaffe.html#using-framework-agnostic-conv-param">Converting a Caffe Model</a>, <a href="ConvertFromTensorFlow.html#using-framework-agnostic-conv-param">Converting a TensorFlow Model</a>, <a href="ConvertFromMXNet.html#using-framework-agnostic-conv-param">Converting an MXNet Model</a>.
#### 20. What does the message "Cannot find prototxt file: for Caffe please specify --input_proto - a protobuf file that stores topology and --input_model that stores pre-trained weights" mean? <a name="question-20"></a>
Model Optimizer cannot find a `.prototxt` file for a specified model. By default, it must be located in the same directory as the input model with the same name (except extension). If any of these conditions is not satisfied, use `--input_proto` to specify the path to the `.prototxt` file.
#### 22. What does the message "Failed to create directory .. . Permission denied!" mean? <a name="question-22"></a>
#### 21. What does the message "Failed to create directory .. . Permission denied!" mean? <a name="question-21"></a>
Model Optimizer cannot create a directory specified via `--output_dir`. Make sure that you have enough permissions to create the specified directory.
#### 23. What does the message "Discovered data node without inputs and value" mean? <a name="question-23"></a>
#### 22. What does the message "Discovered data node without inputs and value" mean? <a name="question-22"></a>
One of the layers in the specified topology might not have inputs or values. Make sure that the provided `caffemodel` and `protobuf` files are correct.
#### 24. What does the message "Part of the nodes was not translated to IE. Stopped" mean? <a name="question-24"></a>
#### 23. What does the message "Part of the nodes was not translated to IE. Stopped" mean? <a name="question-23"></a>
Some of the operations are not supported by OpenVINO Runtime and cannot be translated to OpenVINO Intermediate Representation. You can extend Model Optimizer by allowing generation of new types of operations and implement these operations in the dedicated OpenVINO plugins. For more information, refer to the [OpenVINO Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide.
#### 25. What does the message "While creating an edge from .. to .. : node name is undefined in the graph. Check correctness of the input model" mean? <a name="question-25"></a>
#### 24. What does the message "While creating an edge from .. to .. : node name is undefined in the graph. Check correctness of the input model" mean? <a name="question-24"></a>
Model Optimizer cannot build a graph based on a specified model. Most likely, it is incorrect.
#### 26. What does the message "Node does not exist in the graph" mean? <a name="question-26"></a>
#### 25. What does the message "Node does not exist in the graph" mean? <a name="question-25"></a>
You might have specified an output node via the `--output` flag that does not exist in a provided model. Make sure that the specified output is correct and this node exists in the current model.
#### 27. What does the message "--input parameter was provided. Other inputs are needed for output computation. Provide more inputs or choose another place to cut the net" mean? <a name="question-27"></a>
#### 26. What does the message "--input parameter was provided. Other inputs are needed for output computation. Provide more inputs or choose another place to cut the net" mean? <a name="question-26"></a>
Most likely, Model Optimizer tried to cut the model by a specified input. However, other inputs are needed.
#### 28. What does the message "Placeholder node does not have an input port, but input port was provided" mean? <a name="question-28"></a>
#### 27. What does the message "Placeholder node does not have an input port, but input port was provided" mean? <a name="question-27"></a>
You might have specified a placeholder node with an input node, while the placeholder node does not have it in the model.
#### 29. What does the message "Port index is out of number of available input ports for node" mean? <a name="question-29"></a>
#### 28. What does the message "Port index is out of number of available input ports for node" mean? <a name="question-28"></a>
This error occurs when an incorrect input port is specified with the `--input` command line argument. When using `--input`, you may optionally specify an input port in the form: `X:node_name`, where `X` is an integer index of the input port starting from 0 and `node_name` is the name of a node in the model. This error occurs when the specified input port `X` is not in the range 0..(n-1), where n is the number of input ports for the node. Specify a correct port index, or do not use it if it is not needed.
#### 30. What does the message "Node has more than 1 input and input shapes were provided. Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer" mean? <a name="question-30"></a>
#### 29. What does the message "Node has more than 1 input and input shapes were provided. Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer" mean? <a name="question-29"></a>
This error occurs when an incorrect combination of the `--input` and `--input_shape` command line options is used. Using both `--input` and `--input_shape` is valid only if `--input` points to the `Placeholder` node, a node with one input port or `--input` has the form `PORT:NODE`, where `PORT` is an integer port index of input for node `NODE`. Otherwise, the combination of `--input` and `--input_shape` is incorrect.
#### 31. What does the message "Input port > 0 in --input is not supported if --input_shape is not provided. Node: NAME_OF_THE_NODE. Omit port index and all input ports will be replaced by placeholders. Or provide --input_shape" mean? <a name="question-31"></a>
#### 30. What does the message "Input port > 0 in --input is not supported if --input_shape is not provided. Node: NAME_OF_THE_NODE. Omit port index and all input ports will be replaced by placeholders. Or provide --input_shape" mean? <a name="question-30"></a>
When using the `PORT:NODE` notation for the `--input` command line argument and `PORT` > 0, you should specify `--input_shape` for this input. This is a limitation of the current Model Optimizer implementation.
**NOTE**: This message is no longer relevant, since the limitation on the input port index for model truncation has been resolved.
#### 32. What does the message "No or multiple placeholders in the model, but only one shape is provided, cannot set it" mean? <a name="question-32"></a>
#### 31. What does the message "No or multiple placeholders in the model, but only one shape is provided, cannot set it" mean? <a name="question-31"></a>
You might have provided only one shape for the placeholder, while there are none or multiple inputs in the model. Make sure that you have provided the correct data for placeholder nodes.
#### 33. What does the message "The amount of input nodes for port is not equal to 1" mean? <a name="question-33"></a>
#### 32. What does the message "The amount of input nodes for port is not equal to 1" mean? <a name="question-32"></a>
This error occurs when the `SubgraphMatch.single_input_node` function is used for an input port that supplies more than one node in a sub-graph. The `single_input_node` function can be used only for ports that have a single consumer inside the matching sub-graph. When multiple nodes are connected to the port, use the `input_nodes` or `node_by_pattern` function instead of `single_input_node`. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) guide.
#### 34. What does the message "Output node for port has already been specified" mean? <a name="question-34"></a>
#### 33. What does the message "Output node for port has already been specified" mean? <a name="question-33"></a>
This error occurs when the `SubgraphMatch._add_output_node` function is called manually from the user's extension code. This is an internal function, and you should not call it directly.
#### 35. What does the message "Unsupported match kind.... Match kinds "points" or "scope" are supported only" mean? <a name="question-35"></a>
#### 34. What does the message "Unsupported match kind.... Match kinds "points" or "scope" are supported only" mean? <a name="question-34"></a>
While using configuration file to implement a TensorFlow front replacement extension, an incorrect match kind was used. Only `points` or `scope` match kinds are supported. For more details, refer to the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) guide.
#### 36. What does the message "Cannot write an event file for the TensorBoard to directory" mean? <a name="question-36"></a>
#### 35. What does the message "Cannot write an event file for the TensorBoard to directory" mean? <a name="question-35"></a>
Model Optimizer tried to write an event file in the specified directory but failed to do that. That could happen when the specified directory does not exist or you do not have permissions to write in it.
#### 37. What does the message "There is no registered 'infer' function for node with op = .. . Please implement this function in the extensions" mean? <a name="question-37"></a>
#### 36. What does the message "There is no registered 'infer' function for node with op = .. . Please implement this function in the extensions" mean? <a name="question-36"></a>
Most likely, you tried to extend Model Optimizer with a new primitive, but you did not specify an infer function. For more information on extensions, see the [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide.
#### 38. What does the message "Stopped shape/value propagation at node" mean? <a name="question-38"></a>
#### 37. What does the message "Stopped shape/value propagation at node" mean? <a name="question-37"></a>
Model Optimizer cannot infer shapes or values for the specified node. It can happen because of the following reasons: a bug exists in the custom shape infer function, the node inputs have incorrect values/shapes, or the input shapes are incorrect.
#### 39. What does the message "The input with shape .. does not have the batch dimension" mean? <a name="question-39"></a>
#### 38. What does the message "The input with shape .. does not have the batch dimension" mean? <a name="question-38"></a>
Batch dimension is the first dimension in the shape and it should be equal to 1 or undefined. In your case, it is equal to neither 1 nor undefined, which is why the `-b` shortcut produces undefined and unspecified behavior. To resolve the issue, specify full shapes for each input with the `--input_shape` option. Run Model Optimizer with the `--help` option to learn more about the notation for input shapes.
#### 40. What does the message "Not all output shapes were inferred or fully defined for node" mean? <a name="question-40"></a>
#### 39. What does the message "Not all output shapes were inferred or fully defined for node" mean? <a name="question-39"></a>
Most likely, the shape is not defined (partially or fully) for the specified node. You can use `--input_shape` with positive integers to override model input shapes.
#### 41. What does the message "Shape for tensor is not defined. Can not proceed" mean? <a name="question-41"></a>
#### 40. What does the message "Shape for tensor is not defined. Can not proceed" mean? <a name="question-40"></a>
This error occurs when the `--input` command-line option is used to cut a model and `--input_shape` is not used to override shapes for a node, so a shape for the node cannot be inferred by Model Optimizer. You need to help Model Optimizer by specifying shapes with `--input_shape` for each node specified with the `--input` command-line option.
#### 42. What does the message "Module TensorFlow was not found. Please install TensorFlow 1.2 or higher" mean? <a name="question-42"></a>
#### 41. What does the message "Module TensorFlow was not found. Please install TensorFlow 1.2 or higher" mean? <a name="question-41"></a>
To convert TensorFlow models with Model Optimizer, TensorFlow 1.2 or newer must be installed. For more information on prerequisites, see the [Configuring Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide.
#### 43. What does the message "Cannot read the model file: it is incorrect TensorFlow model file or missing" mean? <a name="question-43"></a>
#### 42. What does the message "Cannot read the model file: it is incorrect TensorFlow model file or missing" mean? <a name="question-42"></a>
The model file should contain a frozen TensorFlow graph in the text or binary format. Make sure that `--input_model_is_text` is provided for a model in the text format. By default, a model is interpreted as a binary file.
#### 44. What does the message "Cannot pre-process TensorFlow graph after reading from model file. File is corrupt or has unsupported format" mean? <a name="question-44"></a>
#### 43. What does the message "Cannot pre-process TensorFlow graph after reading from model file. File is corrupt or has unsupported format" mean? <a name="question-43"></a>
Most likely, there is a problem with the specified file for the model. The file exists, but it has an invalid format or is corrupted.
#### 45. What does the message "Found custom layer. Model Optimizer does not support this layer. Please, register it in CustomLayersMapping.xml or implement extension" mean? <a name="question-45"></a>
#### 44. What does the message "Found custom layer. Model Optimizer does not support this layer. Please, register it in CustomLayersMapping.xml or implement extension" mean? <a name="question-44"></a>
This means that the layer `{layer_name}` is not supported in Model Optimizer. You can find a list of all unsupported layers in the corresponding section. You should implement the extensions for this layer. See [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md) for more information.
#### 46. What does the message "Custom replacement configuration file does not exist" mean? <a name="question-46"></a>
#### 45. What does the message "Custom replacement configuration file does not exist" mean? <a name="question-45"></a>
A path to the custom replacement configuration file was provided with the `--transformations_config` flag, but the file could not be found. Make sure the specified path is correct and the file exists.
#### 47. What does the message "Extractors collection have case insensitive duplicates" mean? <a name="question-47"></a>
#### 46. What does the message "Extractors collection have case insensitive duplicates" mean? <a name="question-46"></a>
When extending Model Optimizer with new primitives, keep in mind that their names are case-insensitive. Most likely, another operation with the same name is already defined. For more information, see the [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide.
#### 48. What does the message "Input model name is not in an expected format, cannot extract iteration number" mean? <a name="question-48"></a>
#### 47. What does the message "Input model name is not in an expected format, cannot extract iteration number" mean? <a name="question-47"></a>
Model Optimizer cannot load an MXNet model in the specified file format. Make sure you use the `.json` or `.param` format.
#### 49. What does the message "Cannot convert type of placeholder because not all of its outputs are 'Cast' to float operations" mean? <a name="question-49"></a>
#### 48. What does the message "Cannot convert type of placeholder because not all of its outputs are 'Cast' to float operations" mean? <a name="question-48"></a>
There are models where `Placeholder` has the UINT8 type and the first operation after it is 'Cast', which casts the input to FP32. Model Optimizer detected that the `Placeholder` has the UINT8 type, but the next operation is not 'Cast' to float. Model Optimizer does not support such a case. Make sure the model's `Placeholder` has the FP32 data type.
#### 50. What does the message "Data type is unsupported" mean? <a name="question-50"></a>
#### 49. What does the message "Data type is unsupported" mean? <a name="question-49"></a>
Model Optimizer cannot convert the model to the specified data type. Currently, FP16 and FP32 are supported. Make sure you specify the data type with the `--data_type` flag. The available values are: FP16, FP32, half, float.
#### 51. What does the message "No node with name ..." mean? <a name="question-51"></a>
#### 50. What does the message "No node with name ..." mean? <a name="question-50"></a>
Model Optimizer tried to access a node that does not exist. This could happen if you have incorrectly specified the placeholder, input, or output node name.
#### 52. What does the message "Module mxnet was not found. Please install MXNet 1.0.0" mean? <a name="question-52"></a>
#### 51. What does the message "Module MXNet was not found. Please install MXNet 1.0.0" mean? <a name="question-51"></a>
To convert MXNet models with Model Optimizer, Apache MXNet 1.0.0 must be installed. For more information about prerequisites, see the [Configuring Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide.
#### 53. What does the message "The following error happened while loading MXNet model .." mean? <a name="question-53"></a>
#### 52. What does the message "The following error happened while loading MXNet model .." mean? <a name="question-52"></a>
Most likely, there is a problem with loading of the MXNet model. Make sure the specified path is correct, the model exists and is not corrupted, and you have sufficient permissions to work with it.
#### 54. What does the message "The following error happened while processing input shapes: .." mean? <a name="question-54"></a>
#### 53. What does the message "The following error happened while processing input shapes: .." mean? <a name="question-53"></a>
Make sure inputs are defined and have correct shapes. You can use `--input_shape` with positive integers to override model input shapes.
#### 55. What does the message "Attempt to register of custom name for the second time as class. Note that custom names are case-insensitive" mean? <a name="question-55"></a>
#### 54. What does the message "Attempt to register of custom name for the second time as class. Note that custom names are case-insensitive" mean? <a name="question-54"></a>
When extending Model Optimizer with new primitives, keep in mind that their names are case-insensitive. Most likely, another operation with the same name is already defined. For more information, see the [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide.
#### 56. What does the message "Both --input_shape and --batch were provided. Please, provide only one of them" mean? <a name="question-56"></a>
#### 55. What does the message "Both --input_shape and --batch were provided. Please, provide only one of them" mean? <a name="question-55"></a>
Specifying the batch and the input shapes at the same time is not supported. You must specify a desired batch as the first value of the input shape.
#### 57. What does the message "Input shape .. cannot be parsed" mean? <a name="question-57"></a>
#### 56. What does the message "Input shape .. cannot be parsed" mean? <a name="question-56"></a>
The specified input shape cannot be parsed. Define it in one of the following ways:
*
```shell
--input_shape (1,3,227,227)
```
*
```shell
--input_shape [1,3,227,227]
```
Keep in mind that there is no space between and inside the brackets for input shapes.
#### 58. What does the message "Please provide input layer names for input layer shapes" mean? <a name="question-58"></a>
#### 57. What does the message "Please provide input layer names for input layer shapes" mean? <a name="question-57"></a>
When specifying input shapes for several layers, you must provide names for inputs, whose shapes will be overwritten. For usage examples, see the [Converting a Caffe Model](convert_model/Convert_Model_From_Caffe.md). Additional information for `--input_shape` is in FAQ [#56](#question-56).
#### 59. What does the message "Values cannot be parsed" mean? <a name="question-59"></a>
#### 58. What does the message "Values cannot be parsed" mean? <a name="question-58"></a>
Mean values for the given parameter cannot be parsed. It should be a string with a list of mean values. For example, in '(1,2,3)', 1 stands for the RED channel, 2 for the GREEN channel, 3 for the BLUE channel.
#### 60. What does the message ".. channels are expected for given values" mean? <a name="question-60"></a>
#### 59. What does the message ".. channels are expected for given values" mean? <a name="question-59"></a>
The number of channels and the number of given values for mean values do not match. The shape should be defined as '(R,G,B)' or '[R,G,B]'. The shape should not contain undefined dimensions (? or -1). The order of values is as follows: (value for a RED channel, value for a GREEN channel, value for a BLUE channel).
#### 60. What does the message "You should specify input for each mean value" mean? <a name="question-60"></a>
Most likely, you didn't specify inputs using `--mean_values`. Specify inputs with the `--input` flag. For usage examples, refer to FAQ [#62](#question-62).
#### 61. What does the message "You should specify input for each scale value" mean? <a name="question-61"></a>
Most likely, you didn't specify inputs using `--scale_values`. Specify inputs with the `--input` flag. For usage examples, refer to FAQ [#63](#question-63).
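A hedged sketch of pairing `--input` with per-input mean and scale values (all names and numbers are placeholders):
```shell
# List the input names, then give one value group per input, in the same order:
mo --input_model model.caffemodel --input data,aux \
   --mean_values "data(123.68,116.78,103.94),aux(127.5,127.5,127.5)" \
   --scale_values "data(58.8,58.8,58.8),aux(128,128,128)"
```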
#### 62. What does the message "Number of inputs and mean values does not match" mean? <a name="question-62"></a>
The number of specified mean values and the number of inputs must be equal. For a usage example, refer to the [Converting a Caffe Model](convert_model/Convert_Model_From_Caffe.md) guide.
#### 63. What does the message "Number of inputs and scale values does not match" mean? <a name="question-63"></a>
The number of specified scale values and the number of inputs must be equal. For a usage example, refer to the [Converting a Caffe Model](convert_model/Convert_Model_From_Caffe.md) guide.
#### 64. What does the message "No class registered for match kind ... Supported match kinds are .. " mean? <a name="question-64"></a>
A replacement defined in the configuration file for sub-graph replacement, using node names patterns or start/end nodes, has the `match_kind` attribute. The attribute may have only one of the values: `scope` or `points`. If a different value is provided, this error is displayed.
#### 65. What does the message "No instance(s) is(are) defined for the custom replacement" mean? <a name="question-65"></a>
A replacement defined in the configuration file for sub-graph replacement, using node names patterns or start/end nodes, has the `instances` attribute. This attribute is mandatory. This error will occur if the attribute is missing. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) guide.
#### 66. What does the message "The instance must be a single dictionary for the custom replacement with id .." mean? <a name="question-66"></a>
A replacement defined in the configuration file for sub-graph replacement, using start/end nodes, has the `instances` attribute. For this type of replacement, the instance must be defined as a dictionary with two keys, `start_points` and `end_points`. Values for these keys are lists with the start and end node names, respectively. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) guide.
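A minimal sketch of such a configuration entry, written out from the shell (the replacement id and node names are placeholders; the `id`, `match_kind`, and `instances` fields are the mandatory ones named in this FAQ):
```shell
# A "points" replacement needs a single dictionary under "instances":
cat > my_replacement.json <<'EOF'
[
    {
        "id": "MyCustomReplacement",
        "match_kind": "points",
        "instances": {
            "start_points": ["input_node"],
            "end_points": ["output_node"]
        }
    }
]
EOF
mo --input_model model.pb --transformations_config my_replacement.json
```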
#### 67. What does the message "No instances are defined for replacement with id .. " mean? <a name="question-67"></a>
A replacement for the specified id is not defined in the configuration file. For more information, refer to FAQ [#65](#question-65).
#### 68. What does the message "Custom replacements configuration file .. does not exist" mean? <a name="question-68"></a>
The path to a custom replacement configuration file was provided with the `--transformations_config` flag, but it cannot be found. Make sure the specified path is correct and the file exists.
#### 69. What does the message "Failed to parse custom replacements configuration file .." mean? <a name="question-69"></a>
The file for custom replacement configuration provided with the `--transformations_config` flag cannot be parsed. In particular, it should have a valid JSON structure. For more details, refer to the [JSON Schema Reference](https://spacetelescope.github.io/understanding-json-schema/reference/index.html) page.
#### 70. What does the message "One of the custom replacements in the configuration file .. does not contain attribute 'id'" mean? <a name="question-70"></a>
Every custom replacement should declare a set of mandatory attributes and their values. For more details, refer to FAQ [#71](#question-71).
#### 71. What does the message "File .. validation failed" mean? <a name="question-71"></a>
The file for custom replacement configuration provided with the `--transformations_config` flag cannot pass validation. Make sure you have specified `id`, `instances`, and `match_kind` for all the patterns.
#### 72. What does the message "Cannot update the file .. because it is broken" mean? <a name="question-72"></a>
The custom replacement configuration file provided with the `--tensorflow_custom_operations_config_update` parameter cannot be parsed. Make sure that the file is correct and refer to FAQ [#68](#question-68), [#69](#question-69), [#70](#question-70), and [#71](#question-71).
#### 73. What does the message "End node .. is not reachable from start nodes: .." mean? <a name="question-73"></a>
This error occurs when you try to make a sub-graph match. It is detected that between the start and end nodes that were specified as inputs/outputs for the subgraph to find, there are nodes marked as outputs but there is no path from them to the input nodes. Make sure the subgraph you want to match does actually contain all the specified output nodes.
#### 74. What does the message "Sub-graph contains network input node .." mean? <a name="question-74"></a>
The start or end node for the sub-graph replacement using start/end nodes is specified incorrectly. Model Optimizer finds internal nodes of the sub-graph strictly "between" the start and end nodes, and then adds all input nodes to the sub-graph (and the inputs of their inputs, etc.) for these "internal" nodes. This error reports that Model Optimizer reached an input node during this phase. This means that the start/end points are specified incorrectly in the configuration file. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) guide.
#### 75. What does the message "... elements of ... were clipped to infinity while converting a blob for node [...] to ..." mean? <a name="question-75"></a>
This message may appear when the `--data_type=FP16` command-line option is used. This option implies conversion of all the blobs in the node to FP16. If a value in a blob is out of the range of valid FP16 values, the value is converted to positive or negative infinity. Depending on the model, this may lead to incorrect inference results or may not be a problem at all. The number of such elements and the total number of elements in the blob are printed out together with the name of the node where this blob is used.
#### 76. What does the message "... elements of ... were clipped to zero while converting a blob for node [...] to ..." mean? <a name="question-76"></a>
This message may appear when the `--data_type=FP16` command-line option is used. This option implies conversion of all blobs in the node to FP16. If a value in the blob is so close to zero that it cannot be represented as a valid FP16 value, it is converted to a true zero FP16 value. Depending on the model, this may lead to incorrect inference results or may not be a problem at all. The number of such elements and the total number of elements in the blob are printed out together with the name of the node where this blob is used.
#### 77. What does the message "The amount of nodes matched pattern ... is not equal to 1" mean? <a name="question-77"></a>
This error occurs when the `SubgraphMatch.node_by_pattern` function is used with a pattern that does not uniquely identify a single node in a sub-graph. Try to extend the pattern string to make an unambiguous match to a single sub-graph node. For more details, refer to the **Graph Transformation Extensions** section in the [Model Optimizer Extensibility](customize_model_optimizer/Customize_Model_Optimizer.md) guide.
#### 78. What does the message "The topology contains no "input" layers" mean? <a name="question-78"></a>
Your Caffe topology `.prototxt` file is intended for training. Model Optimizer expects a deployment-ready `.prototxt` file. To fix the problem, prepare a deployment-ready `.prototxt` file. Preparation of a deploy-ready topology usually results in removing `data` layer(s), adding `input` layer(s), and removing loss layer(s).
#### 79. What does the message "Warning: please expect that Model Optimizer conversion might be slow" mean? <a name="question-79"></a>
You are using an unsupported Python version. Use only versions 3.4 - 3.6 for the C++ `protobuf` implementation that is supplied with the OpenVINO toolkit. You can still boost the conversion speed by building the `protobuf` library from sources. For complete instructions about building `protobuf` from sources, see the appropriate section in the [Converting a Model to Intermediate Representation](../Deep_Learning_Model_Optimizer_DevGuide.md) guide.
#### 80. What does the message "Arguments --nd_prefix_name, --pretrained_model_name and --input_symbol should be provided. Please provide all or do not use any." mean? <a name="question-80"></a>
This error occurs if you did not provide the `--nd_prefix_name`, `--pretrained_model_name`, and `--input_symbol` parameters.
Model Optimizer requires both `.params` and `.nd` model files to merge into the result file (`.params`).
Topology description (`.json` file) should be prepared (merged) in advance and provided with the `--input_symbol` parameter.
If you add additional layers and weights that are in `.nd` files to your model, Model Optimizer can build a model
from one `.params` file and two additional `.nd` files (`*_args.nd`, `*_auxs.nd`).
To do that, provide both CLI options or do not pass them if you want to convert an MXNet model without additional weights.
For more information, refer to the [Converting an MXNet Model](convert_model/Convert_Model_From_MxNet.md) guide.
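A hedged sketch of a conversion command providing all three parameters (the file names and the prefix are placeholders; the exact values depend on how your checkpoint files are named):
```shell
# Merge the pretrained .params file with additional .nd weight files,
# using a topology description prepared in advance:
mo --input_model pretrained.params \
   --input_symbol model-symbol.json \
   --nd_prefix_name model \
   --pretrained_model_name pretrained
```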
#### 81. What does the message "You should specify input for mean/scale values" mean? <a name="question-81"></a>
When the model has multiple inputs and you want to provide mean/scale values, you need to pass those values for each input. More specifically, the number of passed values should be the same as the number of inputs of the model.
For more information, refer to the [Converting a Model to Intermediate Representation](convert_model/Converting_Model.md) guide.
#### 82. What does the message "Input with name ... not found!" mean? <a name="question-82"></a>
When you passed the mean/scale values and specified names of input layers of the model, you might have used a name that does not correspond to any input layer. Make sure that you list only names of the input layers of your model when passing values with the `--input` option.
For more information, refer to the [Converting a Model to Intermediate Representation](convert_model/Converting_Model.md) guide.
#### 83. What does the message "Specified input json ... does not exist" mean? <a name="question-83"></a>
Most likely, the `.json` file does not exist or has a name that does not match the notation of Apache MXNet. Make sure the file exists and has a correct name.
For more information, refer to the [Converting an MXNet Model](convert_model/Convert_Model_From_MxNet.md) guide.
#### 84. What does the message "Unsupported Input model file type ... Model Optimizer support only .params and .nd files format" mean? <a name="question-84"></a>
Model Optimizer for Apache MXNet supports only the `.params` and `.nd` file formats. Most likely, you specified an unsupported file format in `--input_model`.
For more information, refer to the [Converting an MXNet Model](convert_model/Convert_Model_From_MxNet.md) guide.
#### 85. What does the message "Operation ... not supported. Please register it as custom op" mean? <a name="question-85"></a>
Model Optimizer tried to load a model that contains some unsupported operations.
If you want to convert a model that contains unsupported operations, you need to prepare an extension for all such operations.
For more information, refer to the [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide.
#### 86. What does the message "Can not register Op ... Please, call function 'register_caffe_python_extractor' with parameter 'name'" mean? <a name="question-86"></a>
This error appears if the class of implementation of `Op` for a Python Caffe layer could not be used by Model Optimizer. Python layers should be handled differently compared to ordinary Caffe layers.
In particular, you need to call the function `register_caffe_python_extractor` and pass `name` as the second argument of the function.
The name should be a concatenation of the module name and the layer name, separated by a dot.
For example, your topology contains this layer with type `Python`:
```
layer {
  # ... the remaining fields of the layer definition go here ...
}
```
The first step is to implement an extension for this layer in Model Optimizer as an ancestor of the `Op` class:
```
class ProposalPythonExampleOp(Op):
    op = 'Proposal'
    # ... the rest of the class definition goes here ...

register_caffe_python_extractor(ProposalPythonExampleOp, 'rpn.proposal_layer.ProposalLayer')
Op.excluded_classes.append(ProposalPythonExampleOp)
```
Note that the first call <code>register_caffe_python_extractor(ProposalPythonExampleOp, 'rpn.proposal_layer.ProposalLayer')</code> registers an extension of the layer in Model Optimizer, which will be found by the specific name (it is mandatory to join the module name and the layer name): <code>rpn.proposal_layer.ProposalLayer</code>.
The second call prevents Model Optimizer from using this extension as if it were an extension for a layer with type `Proposal`. Otherwise, this layer can be chosen as an implementation of the extension, which can lead to potential issues.
For more information, refer to the [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide.
#### 87. What does the message "Model Optimizer is unable to calculate output shape of Memory node .." mean? <a name="question-87"></a>
Model Optimizer supports only `Memory` layers, in which `input_memory` goes before `ScaleShift` or the `FullyConnected` layer.
This error message means that in your model the layer after input memory is not of the `ScaleShift` or `FullyConnected` type.
This is a known limitation.
#### 88. What do the messages "File ... does not appear to be a Kaldi file (magic number does not match)", "Kaldi model should start with <Nnet> tag" mean? <a name="question-88"></a>
These error messages mean that Model Optimizer does not support your Kaldi model, because the checksum of the model is not
16896 (the model should start with this number), or the model file does not contain the `<Nnet>` tag as a starting one.
Make sure that you provide a path to a true Kaldi model and try again.
#### 89. What do the messages "Expect counts file to be one-line file." or "Expect counts file to contain list of integers" mean? <a name="question-89"></a>
These messages mean that the counts file you passed does not consist of a single line. The counts file should start with
`[` and end with `]`, and integer values should be separated by spaces between those brackets.
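A hedged illustration of a well-formed counts file (the values are placeholders):
```shell
# A counts file is a single line of space-separated integers inside square brackets:
echo "[ 120 43 7 961 ]" > counts.txt
```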
#### 90. What does the message "Model Optimizer is not able to read Kaldi model .." mean? <a name="question-90"></a>
There are multiple reasons why Model Optimizer does not accept a Kaldi topology, including:
the file is not available or does not exist. Refer to FAQ [#88](#question-88).
#### 91. What does the message "Model Optimizer is not able to read counts file .." mean? <a name="question-91"></a>
There are multiple reasons why Model Optimizer does not accept a counts file, including:
the file is not available or does not exist. Refer to FAQ [#89](#question-89).
#### 92. What does the message "For legacy MXNet models Model Optimizer does not support conversion of old MXNet models (trained with 1.0.0 version of MXNet and lower) with custom layers." mean? <a name="question-92"></a>
This message means that if you have a model with custom layers and its JSON file has been generated with an Apache MXNet version
lower than 1.0.0, Model Optimizer does not support such topologies. If you want to convert it, you have to rebuild
MXNet with unsupported layers or generate a new JSON file with Apache MXNet version 1.0.0 or higher. You also need to implement
an OpenVINO extension for the custom layers you use.
For more information, refer to the [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide.
#### 93. What does the message "Graph contains a cycle. Can not proceed .." mean? <a name="question-93"></a>
Model Optimizer supports only straightforward models without cycles.
For all frameworks:
1. [Replace cycle containing Sub-graph in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md)
2. See the [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md)
or
* Edit the model in its original framework to exclude the cycle.
#### 94. What does the message "Can not transpose attribute '..' with value .. for node '..' .." mean? <a name="question-94"></a>
This message means that the model is not supported. It may be caused by using shapes larger than 4-D.
There are two ways to avoid such a message:
* [Cut off parts of the model](convert_model/Cutting_Model.md).
* Edit the network in its original framework to exclude such layers.
#### 95. What does the message "Expected token `</ParallelComponent>`, has `...`" mean? <a name="question-95"></a>
This error message means that Model Optimizer does not support your Kaldi model, because the Net contains a `ParallelComponent` that does not end with the `</ParallelComponent>` tag.
Make sure that you provide a path to a true Kaldi model and try again.
#### 96. What does the message "Interp layer shape inference function may be wrong, please, try to update layer shape inference function in the file (extensions/ops/interp.op at the line ...)." mean? <a name="question-96"></a>
There are many flavors of the Caffe framework, and most layers in them are implemented identically.
However, there are exceptions. For example, the output value of the Interp layer is calculated differently in Deeplab-Caffe and classic Caffe. Therefore, if your model contains the Interp layer and the conversion of your model has failed, modify the `interp_infer` function in the `extensions/ops/interp.op` file according to the comments in the file.
#### 97. What does the message "Mean/scale values should ..." mean? <a name="question-97"></a>
It means that your mean/scale values have a wrong format. Specify mean/scale values in the form of `layer_name(val1,val2,val3)`.
You need to specify values for each input of the model. For more information, refer to the [Converting a Model to Intermediate Representation](convert_model/Converting_Model.md) guide.
#### 98. What does the message "Operation _contrib_box_nms is not supported ..." mean? <a name="question-98"></a>
It means that you are trying to convert a topology that contains the `_contrib_box_nms` operation, which is not supported directly. However, the sub-graph of operations including `_contrib_box_nms` could be replaced with the DetectionOutput layer if your topology is one of the `gluoncv` topologies. Specify the `--enable_ssd_gluoncv` command-line parameter for Model Optimizer to enable this transformation.
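A hedged sketch of enabling that transformation (the model file name and the input shape are placeholders):
```shell
# Replace the _contrib_box_nms sub-graph with DetectionOutput for a gluoncv SSD topology:
mo --input_model ssd_512_mobilenet1.0.params --input_shape [1,3,512,512] --enable_ssd_gluoncv
```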
#### 99. What does the message "ModelOptimizer is not able to parse *.caffemodel" mean? <a name="question-99"></a>
If a `*.caffemodel` file exists and is correct, the error possibly occurred because of the use of the Python protobuf implementation. In some cases, error messages may appear during model parsing, for example: "`utf-8` codec can't decode byte 0xe0 in position 4: invalid continuation byte in field: mo_caffe.SpatialTransformerParameter.transform_type". You can either use Python 3.6/3.7 or build the `cpp` implementation of `protobuf` yourself for your version of Python. For the complete instructions about building `protobuf` from sources, see the appropriate section in the [Converting Models with Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide.
#### 100. What does the message "SyntaxError: 'yield' inside list comprehension" during MXNet model conversion mean? <a name="question-100"></a>
The issue "SyntaxError: `yield` inside list comprehension" might occur while converting MXNet models (`mobilefacedet-v1-mxnet`, `brain-tumor-segmentation-0001`) on the Windows platform with a Python 3.8 environment. This issue is caused by the API changes for `yield expression` in Python 3.8.
The following workarounds are suggested to resolve this issue:
1. Use Python 3.6/3.7 to convert MXNet models on Windows.
2. Update Apache MXNet by using `pip install mxnet==1.7.0.post2`.
Note that this may conflict with previously installed PyPI dependencies.
#### 101. What does the message "The IR preparation was executed by the legacy MO path. ..." mean? <a name="question-101"></a>
For the models in ONNX format, there are two available paths of IR conversion.
The old one is handled by the old Python implementation, while the new one uses new C++ frontends.
Starting from the 2022.1 version, the default IR conversion path for ONNX models is processed using the new ONNX frontend.
Certain features, such as `--extensions` and `--transformations_config`, are not yet fully supported on the new frontends.
The new frontends support only paths to shared libraries (`.dll` and `.so`) for `--extensions`. They support JSON configurations with defined library fields for `--transformations_config`.
Inputs freezing (enabled by the `--freeze_placeholder_with_value` or `--input` arguments) is not supported by the new frontends.
The IR conversion falls back to the old path if a user does not select any expected path of conversion explicitly (with the `--use_new_frontend` or `--use_legacy_frontend` MO arguments) and an unsupported pre-defined scenario is detected on the new frontend path.
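A hedged illustration of selecting a conversion path explicitly (the model file name is a placeholder):
```shell
# Force the new ONNX frontend:
mo --input_model model.onnx --use_new_frontend
# Or force the legacy Python-based path:
mo --input_model model.onnx --use_legacy_frontend
```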
# Supported Framework Layers {#openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers}
In this article, you can find lists of supported framework layers, divided by frameworks.
## Caffe Supported Layers
| Layer Name in Caffe | Limitations |
| :----------| :----------|
| Crop | |
| Deconvolution | |
| DetectionOutput | |
| Dropout | Not needed for inference. |
| Eltwise | |
| Flatten | |
| GlobalInput | |
| Input | |
| LRN | |
| Normalize | |
| Python | Supported only for the Python Proposal operation. |
| Permute | |
| Pooling | |
| Power | |
| Tile | |
## Apache MXNet Supported Symbols
| Symbol Name in Apache MXNet | Limitations |
| :----------| :----------|
| _Plus | |
| _contrib_arange_like | |
| _contrib_DeformableConvolution | |
| _contrib_DeformablePSROIPooling | |
| _contrib_div_sqrt_dim | |
| _contrib_MultiBoxDetection | `force_suppress` = 1 is not supported, non-default variances are not supported. |
| _contrib_MultiBoxPrior | |
| _contrib_Proposal | |
| _copy | Not needed for inference. |
| _random_uniform | Operation provides sequence from uniform distribution, but exact values won't match. |
| _rnn_param_concat | |
| _arange | |
| _contrib_AdaptiveAvgPooling2D | Converted to the Average Pooling with fixed paddings. |
| _maximum | |
| _minimum | |
| _np_roll | |
| greater_scalar | |
| max | |
| minus_scalar | |
| null | Not needed for inference. |
| LayerNorm | `output_mean_var` = True is not supported. |
| repeat | |
| rnn | |
| rnn_param_concat | |
| tile | |
| transpose | |
| zeros | |
| Activation | Supported `act_type` = `relu`, `sigmoid`, `softrelu` or `tanh`. |
| BatchNorm | |
| Concat | |
| Convolution | |
| Crop | `center_crop` = 1 is not supported. |
| Custom | See [Custom Layers in Model Optimizer](customize_model_optimizer/Customize_Model_Optimizer.md). |
| Deconvolution | |
| DeformableConvolution | |
| DeformablePSROIPooling | |
| Dropout | Not needed for inference. |
| ElementWiseSum | |
| Embedding | |
| Flatten | |
| FullyConnected | |
| InstanceNorm | |
| L2Normalization | Only 4D input is supported. |
| LRN | |
| LeakyReLU | Supported `act_type` = `prelu`, `elu`, `leaky`, `gelu`. |
| ones_like | |
| Pad | |
| Pooling | |
| SoftmaxActivation | |
| SoftmaxOutput | |
| SoftSign | |
| Take | The attribute `mode` is not supported. |
| Tile | |
| UpSampling | |
| Where | |
## TensorFlow Supported Operations
Some TensorFlow operations do not match any OpenVINO operations. Yet, they are still supported by Model Optimizer and can be used on the constant propagation path. These layers are labeled with `Constant propagation` in the table below:
| Operation Name in TensorFlow | Limitations |
| :----------| :----------|
| ArgMax | |
| ArgMin | |
| Asinh | |
| Assert | Not needed for inference. |
| Assign | Not needed for inference. |
| AssignSub | Not needed for inference. |
| Atanh | |
| AvgPool | |
| AvgPoolV2 | Supported only for constant-foldable `kernel_size` and `strides` inputs. |
| AvgPool3D | |
| BatchMatMul | |
| BatchMatMulV2 | |
| BatchToSpaceND | |
| BiasAdd | |
| BlockLSTM | |
| Bucketize | CPU only. |
| BroadcastTo | |
| Cast | |
| Ceil | |
| Conv3DBackpropInputV2 | |
| Cos | |
| Cosh | |
| CropAndResize | `method` = `bilinear` only. |
| CTCGreedyDecoder | Supported only with decoded indices output in a dense format. |
| CTCLoss | Supported only with decoded indices input in a dense format. |
| CumSum | |
| DepthToSpace| |
| DepthwiseConv2dNative| |
| Einsum | Supported only with an equation that does not contain repeated labels within a subscript. |
| Elu | |
| EmptyTensorList | Supported only when it is part of a sub-graph of the special form. |
| Enter | Supported only when it is fused to the TensorIterator layer. |
| Equal | |
| Erf | |
| Exit | Supported only when it is fused to the TensorIterator layer. |
| Exp | |
| ExpandDims | |
| ExperimentalSparseWeightedSum | CPU only. |
| ExtractImagePatches | |
| EuclideanNorm | |
| FakeQuantWithMinMaxVars | |
| FakeQuantWithMinMaxVarsPerChannel | |
| FFT | Supported only when it is part of a sub-graph of the special form. |
| FFT2D | Supported only when it is part of a sub-graph of the special form. |
| FFT3D | Supported only when it is part of a sub-graph of the special form. |
| FIFOQueueV2 | Supported only when it is part of a sub-graph of the special form. |
| Fill | |
| Floor | |
| FloorDiv | |
| GatherV2 | |
| Greater | |
| GreaterEqual | |
| Identity | Not needed for shape inference. |
| IdentityN | |
| IFFT | Supported only when it is part of a sub-graph of the special form. |
| IFFT2D | Supported only when it is part of a sub-graph of the special form. |
| IFFT3D | Supported only when it is part of a sub-graph of the special form. |
| IteratorGetNext | Supported only when it is part of a sub-graph of the special form. |
| LRN | |
| LeakyRelu | |
| Less | |
| LogicalOr | |
| LogicalNot | |
| LogSoftmax | |
| LookupTableInsertV2 | Supported only when it is part of a sub-graph of the special form. |
| LoopCond | Supported only when it is fused to the TensorIterator layer. |
| MatMul | |
| Max | |
| MaxPool | |
| MaxPoolV2 | Supported only for constant-foldable `kernel_size` and `strides` inputs. |
| MaxPool3D | |
| Maximum | |
| Mean | |
| Merge | Supported only when it is fused to the TensorIterator layer. |
| Min | |
| Minimum | |
| MirrorPad | |
| Mod | |
| Mul | |
| Neg | |
| NextIteration | Supported only when it is fused to the TensorIterator layer. |
| NonMaxSuppressionV2 | |
| NonMaxSuppressionV3 | |
| NonMaxSuppressionV4 | |
| Placeholder | |
| PlaceholderWithDefault | |
| Prod | |
| QueueDequeue | Supported only when it is part of a sub-graph of the special form. |
| QueueDequeueUpToV2 | Supported only when it is part of a sub-graph of the special form. |
| QueueDequeueV2 | Supported only when it is part of a sub-graph of the special form. |
| RandomUniform | |
| RandomUniformInt | |
| Range | |
| ResizeNearestNeighbor | |
| ResourceGather| |
| ReverseSequence | |
| ReverseV2 | Supported only when it can be converted to the ReverseSequence operation. |
| Roll | |
| Round | |
| Pow | |
| Softsign | |
| SpaceToBatchND | |
| SpaceToDepth | |
| SparseFillEmptyRows | Supported only when it is part of a sub-graph of the special form. |
| SparseReshape | Supported only when it is part of a sub-graph of the special form. |
| SparseSegmentSum | Supported only when it is part of a sub-graph of the special form. |
| SparseSegmentMean | Supported only when it is part of a sub-graph of the special form. |
| SparseToDense | CPU only. |
| Split | |
| SplitV | |
| Square | |
| SquaredDifference | |
| Squeeze | Cases in which squeeze axis is not specified are not supported. |
| StatelessWhile | |
| StopGradient | Not needed for shape inference. |
| StridedSlice | Supported only for constant-foldable `begin`, `end`, and `strides` inputs. |
| Sub | |
| Sum | |
| Swish | |
| swish_f32 | |
| Switch | Control flow propagation. |
| Tan | |
| Tanh | |
| TensorArrayGatherV3 | Supported only when it is fused to the TensorIterator layer. |
| TensorArrayReadV3 | Supported only when it is fused to the TensorIterator layer. |
| TensorArrayScatterV3 | Supported only when it is fused to the TensorIterator layer. |
| TensorArraySizeV3 | Supported only when it is fused to the TensorIterator layer. |
| TensorArrayV3 | Supported only when it is fused to the TensorIterator layer. |
| TensorArrayWriteV3 | Supported only when it is fused to the TensorIterator layer. |
| TensorListPushBack | Supported only when it is part of a sub-graph of the special form. |
| Tile | |
| TopkV2 | |
| Transpose | |
| Unpack | |
| Variable | |
| VariableV2 | |
| Where | Supported only when it is part of a sub-graph of the special form. |
| ZerosLike | |
## TensorFlow 2 Keras Supported Operations
| Operation Name in TensorFlow 2 Keras | Limitations |
| :----------| :----------|
| Bidirectional | |
| Concatenate | |
| Conv1D | |
| Conv1DTranspose | Not supported if `dilation` is not equal to 1. |
| Conv2D | |
| Conv2DTranspose | |
| Conv3D | |
| Cropping2D | |
| Cropping3D | |
| Dense | |
| DenseFeatures | Not supported for categorical and crossed features. |
| DepthwiseConv2D | |
| Dot | |
| Dropout | |
| Multiply | |
| PReLU | |
| Permute | |
| RNN | Not supported for some custom cells. |
| ReLU | |
| RepeatVector | |
| Reshape | |
## Kaldi Supported Layers
| Symbol Name in Kaldi | Limitations |
| :----------| :----------|
| affinetransform | |
| backproptruncationcomponent | |
| batchnormcomponent | |
| clipgradientcomponent | Not needed for inference. |
| concat | |
| convolutional1dcomponent | |
| convolutionalcomponent | |
| fixedaffinecomponent | |
| fixedbiascomponent | |
| fixedscalecomponent | |
| generaldropoutcomponent| Not needed for inference. |
| linearcomponent | |
| logsoftmaxcomponent | |
| lstmnonlinearitycomponent | |
| maxpoolingcomponent | |
| naturalgradientaffinecomponent | |
| naturalgradientperelementscalecomponent | |
| noopcomponent | Not needed for inference. |
| normalizecomponent | |
| parallelcomponent | |
| pnormcomponent | |
| sigmoidcomponent | |
| softmax | |
| softmaxComponent | |
| specaugmenttimemaskcomponent | Not needed for inference. |
| splicecomponent | |
| tanhcomponent | |
| tdnncomponent | |
@@ -667,68 +670,95 @@ paddlepaddle>=2.1
| Operator Name in PaddlePaddle| Limitations|
| :----------| :----------|
| adpative_pool2d | The `NHWC` data_layout is not supported. |
| arg_max | The `int32` output data_type is not supported. |
| assign | |
| assign_value | |
| batch_norm | |
| bilinear_interp | `NCW`, `NWC`, `NHWC`, `NCDHW`, `NDHWC` data_layout are not supported. |
| bilinear_interp_v2 | `NCW`, `NWC`, `NHWC`, `NCDHW`, `NDHWC` data_layout are not supported. |
| bmm | |
| cast | |
| clip | |
| concat | |
| conv2d | `NHWC` data_layout is not supported. |
| deformable_conv | |
| depthwise_conv2d | `NHWC` data_layout is not supported. |
| elementwise_add | |
| elementwise_div | |
| elementwise_max | |
| elementwise_min | |
| elementwise_mul | |
| elementwise_not_equal | |
| elementwise_pow | |
| elementwise_sub | |
| equal | |
| exp | |
| expand | |
| expand_v2 | |
| fill_any_like | |
| fill_constant | |
| fill_constant_batch_size_like | |
| fill_zeros_like | |
| flatten_contiguous_range | |
| floor | |
| gather | |
| gather_tree | |
| gelu | |
| generate_proposals_v2 | |
| greater_equal | |
| greater_than | |
| hard_sigmoid | |
| hard_swish | |
| layer_norm | |
| leaky_relu | |
| less_than | |
| log | |
| logical_and | |
| logical_not | |
| logical_or | |
| logical_xor | |
| lookup_table_v2 | |
| matmul | |
| matmul_v2 | |
| matrix_nms | Supported only by the IE CPU plugin, with a static shape for the *number of selected boxes* (e.g., `min(min(num_boxes, nms_top_k) * num_classes_output, keep_top_k)`). |
| max_pool2d_with_index | |
| meshgrid | |
| mul | |
| multiclass_nms3 | Supported only by the IE CPU plugin, with a static shape for the *number of selected boxes* (e.g., `min(min(num_boxes, nms_top_k) * num_classes_output, keep_top_k)`). |
| nearest_interp | `NCW`, `NWC`, `NHWC`, `NCDHW`, `NDHWC` data_layout are not supported. |
| nearest_interp_v2 | `NCW`, `NWC`, `NHWC`, `NCDHW`, `NDHWC` data_layout are not supported. |
| pad3d | `Circular` mode is not supported. |
| pool2d | `NHWC` data_layout is not supported. |
| pow | |
| prior_box | |
| range | |
| reduce_max | |
| reduce_mean | |
| reduce_min | |
| reduce_prod | |
| reduce_sum | |
| relu | |
| relu6 | |
| reshape2 | |
| rnn | 'SimpleRNN' and 'GRU' modes are not supported |
| rnn | `SimpleRNN` and `GRU` modes are not supported. |
| roi_align | |
| scale | |
| shape | |
| sigmoid | |
| slice | |
| softmax | |
| softplus | |
| split | |
| sqrt | |
| squeeze2 | |
| stack | |
| strided_slice | |
| swish | |
| tanh | |
| top_k | |
| top_k_v2 | |
| transpose2 | |
| unsqueeze2 | |
| where | |
| yolo_box | |


@@ -1,16 +1,15 @@
# Converting a Caffe Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe}
<a name="Convert_From_Caffe"></a>To convert a Caffe model, run Model Optimizer with the path to the input model `.caffemodel` file:
```sh
mo --input_model <INPUT_MODEL>.caffemodel
```
The following list provides the Caffe-specific parameters.
```
Caffe-specific parameters:
--input_proto INPUT_PROTO, -d INPUT_PROTO
Deploy-ready prototxt file that contains a topology
structure and layer attributes
@@ -45,14 +44,16 @@ Caffe*-specific parameters:
attributes without flattening nested parameters.
```
### CLI Examples Using Caffe-Specific Parameters
* Launching Model Optimizer for [bvlc_alexnet.caffemodel](https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet) with a specified `prototxt` file.
This is needed when the name of the Caffe model and the `.prototxt` file are different or are placed in different directories. Otherwise, it is enough to provide only the path to the input `model.caffemodel` file.
```sh
mo --input_model bvlc_alexnet.caffemodel --input_proto bvlc_alexnet.prototxt
```
* Launching Model Optimizer for [bvlc_alexnet.caffemodel](https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet) with a specified `CustomLayersMapping` file.
This is the legacy method of quickly enabling model conversion if your model has custom layers. This requires the Caffe system on the computer.
The optional parameters without default values and not specified by the user in the `.prototxt` file are removed from the Intermediate Representation, and nested parameters are flattened:
```sh
mo --input_model bvlc_alexnet.caffemodel -k CustomLayersMapping.xml --disable_omitting_optional --enable_flattening_nested_params
```
@@ -82,22 +83,22 @@ Optional parameters without default values and not specified by the user in the
```
## Custom Layer Definition
Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains such layers, Model Optimizer classifies them as custom.
## Supported Caffe Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
## Frequently Asked Questions (FAQ)
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
## Summary
In this document, you learned:
* Basic information about how the Model Optimizer works with Caffe models.
* Which Caffe models are supported.
* How to convert a trained Caffe model by using Model Optimizer with both framework-agnostic and Caffe-specific command-line options.
## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)


@@ -1,17 +1,16 @@
# Converting a Kaldi Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi}
> **NOTE**: Model Optimizer supports the [nnet1](http://kaldi-asr.org/doc/dnn1.html) and [nnet2](http://kaldi-asr.org/doc/dnn2.html) formats of Kaldi models. The support of the [nnet3](http://kaldi-asr.org/doc/dnn3.html) format is limited.
<a name="Convert_From_Kaldi"></a>To convert a Kaldi model, run Model Optimizer with the path to the input model `.nnet` or `.mdl` file:
```sh
mo --input_model <INPUT_MODEL>.nnet
```
## Using Kaldi-Specific Conversion Parameters <a name="kaldi_specific_conversion_params"></a>
The following list provides the Kaldi-specific parameters.
```sh
Kaldi-specific parameters:
@@ -21,14 +20,14 @@ Kaldi-specific parameters:
--remove_memory Remove the Memory layer and add new inputs and outputs instead
```
## Examples of CLI Commands
* To launch Model Optimizer for the `wsj_dnn5b_smbr` model with the specified `.nnet` file:
```sh
mo --input_model wsj_dnn5b_smbr.nnet
```
* To launch Model Optimizer for the `wsj_dnn5b_smbr` model with the existing file that contains counts for the last layer with biases:
```sh
mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts
```
@@ -44,7 +43,7 @@ Kaldi-specific parameters:
\f$|C|\f$ - number of elements in the counts array;
* The normalized counts are subtracted from the biases of the last or next-to-last layer (if the last layer is SoftMax).
> **NOTE**: Model Optimizer will show a warning if a model contains counts values and the `--counts` option is not used.
* If you want to remove the last SoftMax layer in the topology, launch the Model Optimizer with the
`--remove_output_softmax` flag:
@@ -52,28 +51,30 @@ Kaldi-specific parameters:
mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax
```
The Model Optimizer finds the last layer of the topology and removes this layer only if it is a SoftMax layer.
> **NOTE**: Model Optimizer can remove SoftMax layer only if the topology has one output.
* You can use the *OpenVINO Speech Recognition* sample application for the sample inference of Kaldi models. This sample supports models with only one output. If your model has several outputs, specify the desired one with the `--output` option.
## Converting a Model for Intel® Movidius™ Myriad™ VPU
If you want to convert a model for inference on Intel® Movidius™ Myriad™ VPU, use the `--remove_memory` option.
It removes the Memory layers from the OpenVINO IR files. Additional inputs and outputs will appear in the IR files instead.
Model Optimizer will output the mapping between inputs and outputs. For example:
```sh
[ WARNING ] Add input/output mapped Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out -> Result_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out
[ WARNING ] Add input/output mapped Parameter_1_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out -> Result_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out
[ WARNING ] Add input/output mapped Parameter_0_for_iteration_Offset_fastlstm3.c_trunc__3390 -> Result_for_iteration_Offset_fastlstm3.c_trunc__3390
```
Based on this mapping, link inputs and outputs in your application manually as follows:
1. Initialize inputs from the mapping as zeros in the first frame of an utterance.
2. Copy output blobs from the mapping to the corresponding inputs. For example, data from `Result_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out`
must be copied to `Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out`.
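The following rough Python sketch illustrates this loop with the OpenVINO Runtime API. The parameter and result names are taken from the mapping above; the IR file name, the feature input name `input`, and the `utterance_frames` placeholder are assumptions for illustration only:
```python
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("wsj_dnn5b_smbr.xml"), "CPU")
infer_request = compiled.create_infer_request()

# One (parameter, result) pair from the mapping printed by Model Optimizer.
state_pairs = [
    ("Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out",
     "Result_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r_trunc__2_out"),
]

# Step 1: initialize the mapped inputs with zeros for the first frame of an utterance.
states = {name: np.zeros(tuple(compiled.input(name).shape), dtype=np.float32)
          for name, _ in state_pairs}

utterance_frames = []  # placeholder: fill with per-frame feature arrays
for frame in utterance_frames:
    results = infer_request.infer({"input": frame, **states})
    # Step 2: copy the mapped outputs back to the corresponding inputs.
    for in_name, out_name in state_pairs:
        states[in_name] = results[compiled.output(out_name)]
```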
## Supported Kaldi Layers
For the list of supported standard layers, refer to the [Supported Framework Layers ](../Supported_Frameworks_Layers.md) page.
## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)


@@ -1,14 +1,13 @@
# Converting an MXNet Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet}
<a name="ConvertMxNet"></a>To convert an MXNet model, run Model Optimizer with the path to the *`.params`* file of the input model:
```sh
mo --input_model model-file-0000.params
```
## Using MXNet-Specific Conversion Parameters <a name="mxnet_specific_conversion_params"></a>
The following list provides the MXNet-specific parameters.
```
MXNet-specific parameters:
@@ -23,36 +22,36 @@ MXNet-specific parameters:
--save_params_from_nd
Enable saving built parameters file from .nd files
--legacy_mxnet_model
Enable Apache MXNet loader to make a model compatible with the latest Apache MXNet version.
Use only if your model was trained with Apache MXNet version lower than 1.0.0
--enable_ssd_gluoncv
Enable transformation for converting the gluoncv ssd topologies.
Use only if your topology is one of ssd gluoncv topologies
```
> **NOTE**: By default, Model Optimizer does not use the Apache MXNet loader. It transforms the topology to another format which is compatible with the latest
> version of Apache MXNet. However, the Apache MXNet loader is required for models trained with lower version of Apache MXNet. If your model was trained with an Apache MXNet version lower than 1.0.0, specify the
> `--legacy_mxnet_model` key to enable the Apache MXNet loader. Note that the loader does not support models with custom layers. In this case, you must manually
> recompile Apache MXNet with custom layers and install it in your environment.
## Custom Layer Definition
Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains such layers, Model Optimizer classifies them as custom.
## Supported MXNet Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
## Frequently Asked Questions (FAQ)
Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.
## Summary
In this document, you learned:
* Basic information about how Model Optimizer works with MXNet models.
* Which MXNet models are supported.
* How to convert a trained MXNet model by using the Model Optimizer with both framework-agnostic and MXNet-specific command-line options.
## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)


@@ -1,28 +1,27 @@
# Converting an ONNX Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX}
## Introduction to ONNX
[ONNX](https://github.com/onnx/onnx) is a representation format for deep learning models that allows AI developers to easily transfer models between different frameworks. It is hugely popular among deep learning tools, like PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others.
## Converting an ONNX Model <a name="Convert_From_ONNX"></a>
ONNX models are directly compatible with OpenVINO Runtime and can be loaded in their native `.onnx` format using `net = ie.read_model("model.onnx")`. The benefit of converting ONNX models to the OpenVINO IR format is that it allows them to be easily optimized for target hardware with advanced OpenVINO tools such as [NNCF](../../../optimization_guide/nncf_introduction.md).
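For illustration, here is a minimal Python sketch of loading an ONNX model directly with the OpenVINO Runtime 2022.x Python API; the file name `model.onnx` and the `CPU` device are placeholders:
```python
from openvino.runtime import Core

core = Core()
# Read the ONNX model directly, without converting it to IR first.
model = core.read_model("model.onnx")
# Compile the model for a target device, for example the CPU plugin.
compiled_model = core.compile_model(model, "CPU")
```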
This page provides instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](https://docs.openvino.ai/latest/openvino_docs_install_guides_install_dev_tools.html).
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX model, run Model Optimizer with the path to the input model `.onnx` file:
```sh
mo --input_model <INPUT_MODEL>.onnx
```
There are no ONNX specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the [Converting a Model to Intermediate Representation (IR)](Converting_Model.md) guide.
## Supported ONNX Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
## Additional Resources
See the [Model Conversion Tutorials](Convert_Model_Tutorials.md) page for a set of tutorials providing step-by-step instructions for converting specific ONNX models. Here are some examples:
* [Convert ONNX* Faster R-CNN Model](onnx_specific/Convert_Faster_RCNN.md)
* [Convert ONNX* GPT-2 Model](onnx_specific/Convert_GPT2.md)
* [Convert ONNX* Mask R-CNN Model](onnx_specific/Convert_Mask_RCNN.md)


@@ -1,4 +1,4 @@
# Converting a PaddlePaddle Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle}
To convert a PaddlePaddle model, use the `mo` script and specify the path to the input `.pdmodel` model file:
@@ -12,7 +12,7 @@ To convert a PaddlePaddle model, use the `mo` script and specify the path to the
```
## Supported PaddlePaddle Layers
For the list of supported standard layers, refer to the [Supported Framework Layers](../Supported_Frameworks_Layers.md) page.
## Officially Supported PaddlePaddle Models
The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):


@@ -1,18 +1,17 @@
# Converting a PyTorch Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch}
The PyTorch framework is supported through export to the ONNX format. In order to optimize and deploy a model that was trained with it:
1. [Export a PyTorch model to ONNX](#export-to-onnx).
2. [Convert the ONNX model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
## Exporting a PyTorch Model to ONNX Format <a name="export-to-onnx"></a>
PyTorch models are defined in Python. To export them, use the `torch.onnx.export()` method. The code to
evaluate or test the model is usually provided with its code and can be used for its initialization and export.
The export to ONNX is crucial for this process, but it is covered by the PyTorch framework itself, therefore, it will not be covered here in detail.
For more information, refer to the [Exporting PyTorch models to ONNX format](https://pytorch.org/docs/stable/onnx.html) guide.
To export a PyTorch model, you need to obtain the model as an instance of `torch.nn.Module` class and call the `export` function.
```python
import torch
@@ -29,9 +28,9 @@ torch.onnx.export(model, (dummy_input, ), 'model.onnx')
## Known Issues
* As of version 1.8.1, not all PyTorch operations can be exported to ONNX opset 9 which is used by default.
It is recommended to export models to opset 11 or higher when export to default opset 9 is not working. In that case, use `opset_version`
option of the `torch.onnx.export`. For more information about ONNX opset, refer to the [Operator Schemas](https://github.com/onnx/onnx/blob/master/docs/Operators.md) page.
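For example, a short hedged sketch of exporting with a newer opset; the `torchvision` model and input shape are used only for illustration:
```python
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)  # any torch.nn.Module works
dummy_input = torch.randn(1, 3, 224, 224)
# Request opset 11 when the default opset 9 cannot express an operation.
torch.onnx.export(model, (dummy_input,), "model.onnx", opset_version=11)
```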
## See Also
[Model Conversion Tutorials](Convert_Model_Tutorials.md)


@@ -1,33 +1,34 @@
# Converting a TensorFlow Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow}
This page provides general instructions on how to convert a model from a TensorFlow format to the OpenVINO IR format using Model Optimizer. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.
To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](../../../install_guides/installing-model-dev-tools.md).
## Converting TensorFlow 1 Models <a name="Convert_From_TF2X"></a>
### Converting Frozen Model Format <a name="Convert_From_TF"></a>
To convert a TensorFlow model, use the `mo` script to simply convert a model with a path to the input model `.pb` file:
```sh
mo --input_model <INPUT_MODEL>.pb
```
### Converting Non-Frozen Model Formats <a name="loading-nonfrozen-models"></a>
There are three ways to store non-frozen TensorFlow models and convert them with Model Optimizer:
1. **Checkpoint**. In this case, a model consists of two files: `inference_graph.pb` (or `inference_graph.pbtxt`) and `checkpoint_file.ckpt`.
If you do not have an inference graph file, refer to the [Freezing Custom Models in Python](#freeze-the-tensorflow-model) section.
To convert the model with the inference graph in `.pb` format, run the `mo` script with a path to the checkpoint file:
```sh
mo --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>
```
To convert the model with the inference graph in `.pbtxt` format, run the `mo` script with a path to the checkpoint file:
```sh
mo --input_model <INFERENCE_GRAPH>.pbtxt --input_checkpoint <INPUT_CHECKPOINT> --input_model_is_text
```
2. **MetaGraph**. In this case, a model consists of three or four files stored in the same directory: `model_name.meta`, `model_name.index`,
`model_name.data-00000-of-00001` (the numbers may vary), and `checkpoint` (optional).
To convert such TensorFlow model, run the `mo` script with a path to the MetaGraph `.meta` file:
```sh
mo --input_meta_graph <INPUT_META_GRAPH>.meta
@@ -45,11 +46,10 @@ If a model contains operations currently unsupported by OpenVINO, prune these op
To determine custom input nodes, display a graph of the model in TensorBoard. To generate TensorBoard logs of the graph, use the `--tensorboard_logs` option.
TensorFlow 2.x SavedModel format has a specific graph due to eager execution. In case of pruning, find custom input nodes in the `StatefulPartitionedCall/*` subgraph of TensorFlow 2.x SavedModel format.
### Freezing Custom Models in Python <a name="freeze-the-tensorflow-model"></a>
When a network is defined in Python code, you have to create an inference graph file. Graphs are usually built in a form
that allows model training. That means all trainable parameters are represented as variables in the graph.
To be able to use such a graph with Model Optimizer, it should be frozen and dumped to a file with the following code:
```python
import tensorflow as tf
@@ -60,18 +60,18 @@ graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)
Where:
* `sess` is the instance of the TensorFlow Session object where the network topology is defined.
* `["name_of_the_output_node"]` is the list of output node names in the graph; `frozen` graph will
include only those nodes from the original `sess.graph_def` that are directly or indirectly used
to compute given output nodes. The `'name_of_the_output_node'` is an example of a possible output
node name. You should derive the names based on your own graph.
* `./` is the directory where the inference graph file should be generated.
* `inference_graph.pb` is the name of the generated inference graph file.
* `as_text` specifies whether the generated file should be in human readable text format or binary.
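For reference, a self-contained sketch of this freezing flow, assuming a TensorFlow 1.x-style session; the tiny convolution graph and the output node name are placeholders for your own network:
```python
import tensorflow.compat.v1 as tf
from tensorflow.python.framework import graph_io

tf.disable_eager_execution()

with tf.Session() as sess:
    # Placeholder graph: replace with your own network definition.
    x = tf.placeholder(tf.float32, [1, 224, 224, 3], name="input")
    w = tf.Variable(tf.ones([3, 3, 3, 8]))
    y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME",
                     name="name_of_the_output_node")
    sess.run(tf.global_variables_initializer())
    # Replace trainable variables with constants so Model Optimizer can consume the graph.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["name_of_the_output_node"])
    graph_io.write_graph(frozen, "./", "inference_graph.pb", as_text=False)
```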
## Converting TensorFlow 2 Models <a name="Convert_From_TF2X"></a>
To convert TensorFlow 2 models, ensure that `openvino-dev[tensorflow2]` is installed via `pip`.
TensorFlow 2.X officially supports two model formats: SavedModel and Keras H5 (or HDF5).
Below are the instructions on how to convert each of them.
### SavedModel Format
@@ -82,7 +82,7 @@ To convert such a model, run the `mo` script with a path to the SavedModel direc
mo --saved_model_dir <SAVED_MODEL_DIRECTORY>
```
TensorFlow 2 SavedModel format strictly requires the 2.x version of TensorFlow installed in the
environment for conversion to the Intermediate Representation (IR).
If a model contains operations currently unsupported by OpenVINO™,
@@ -92,11 +92,11 @@ options. To determine custom input nodes, visualize a model graph in the TensorB
To generate TensorBoard logs of the graph, use the Model Optimizer `--tensorboard_logs` command-line
option.
TensorFlow 2 SavedModel format has a specific graph structure due to eager execution. In case of
pruning, find custom input nodes in the `StatefulPartitionedCall/*` subgraph.
### Keras H5
If you have a model in the HDF5 format, load the model using TensorFlow 2 and serialize it in the
SavedModel format. Here is an example of how to do it:
```python
@@ -117,9 +117,9 @@ tf.saved_model.save(model,'model')
Then follow the above instructions for the SavedModel format.
> **NOTE**: Do not use other hacks to resave TensorFlow 2 models into TensorFlow 1 formats.
## Command-Line Interface (CLI) Examples Using TensorFlow-Specific Parameters
* Launching the Model Optimizer for the Inception V1 frozen model when the model file is a plain text protobuf:
```sh
@@ -132,29 +132,29 @@ Then follow the above instructions for the SavedModel format.
mo --input_model inception_v1.pb -b 1 --tensorboard_logdir /tmp/log_dir
```
* Launching the Model Optimizer for the BERT model in the SavedModel format, with three inputs. Explicitly specify the input shapes
where the batch size and the sequence length equal 2 and 30, respectively.
```sh
mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30]
```
## Supported TensorFlow and TensorFlow 2 Keras Layers
For the list of supported standard layers, refer to the [Supported Framework Layers ](../Supported_Frameworks_Layers.md) page.
## Frequently Asked Questions (FAQ)
The Model Optimizer provides explanatory messages if it is unable to run to completion due to typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md). The FAQ provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
## Summary
In this document, you learned:
* Basic information about how the Model Optimizer works with TensorFlow models.
* Which TensorFlow models are supported.
* How to freeze a TensorFlow model.
* How to convert a trained TensorFlow model using the Model Optimizer with both framework-agnostic and TensorFlow-specific command-line options.
## Additional Resources
For step-by-step instructions on how to convert specific TensorFlow models, see the [Model Conversion Tutorials](Convert_Model_Tutorials.md) page. Here are some examples:
* [Convert TensorFlow EfficientDet Models](tf_specific/Convert_EfficientDet_Models.md)
* [Convert TensorFlow FaceNet Models](tf_specific/Convert_FaceNet_From_Tensorflow.md)
* [Convert TensorFlow Object Detection API Models](tf_specific/Convert_Object_Detection_API_Models.md)


@@ -37,8 +37,7 @@
@endsphinxdirective
This section provides a set of tutorials that demonstrate conversion methods for specific TensorFlow, ONNX, PyTorch, MXNet, and Kaldi models, which may not necessarily cover your exact case.
Before studying the tutorials, try to convert the model out-of-the-box by specifying only the `--input_model` parameter in the command line.
You will find a collection of [Python tutorials](../../../tutorials.md) written for running on Jupyter notebooks that provide an introduction to the OpenVINO™ toolkit and explain how to use the Python API and tools for optimized deep learning inference.


@@ -1,51 +1,50 @@
# Setting Input Shapes {#openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model}
With Model Optimizer, you can increase your model's efficiency by providing an additional shape definition, using two parameters: `--input_shape` and `--static_shape`, applied under certain conditions.
@anchor when_to_specify_input_shapes
## Specifying --input_shape Command-line Parameter
Model Optimizer supports conversion of models with dynamic input shapes that contain undefined dimensions.
However, if the shape of data is not going to change from one inference request to another,
it is recommended to set up static shapes (when all dimensions are fully defined) for the inputs.
Doing it at this stage, instead of during inference in runtime, can be beneficial in terms of performance and memory consumption.
To set up static shapes, Model Optimizer provides the `--input_shape` parameter.
For more information on input shapes under runtime, refer to the [Changing input shapes](../../../OV_Runtime_UG/ShapeInference.md) guide.
To learn more about dynamic shapes in runtime, refer to the [Dynamic Shapes](../../../OV_Runtime_UG/ov_dynamic_shapes.md) guide.
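As an illustration, here is a minimal Python sketch of the runtime `reshape` alternative, reusing the `data` and `seq_len` input names from the OCR example below; the IR file name is a placeholder:
```python
from openvino.runtime import Core, PartialShape

core = Core()
model = core.read_model("ocr.xml")
# Set static input shapes at runtime instead of at conversion time.
model.reshape({"data": PartialShape([3, 150, 200, 1]),
               "seq_len": PartialShape([3])})
compiled_model = core.compile_model(model, "CPU")
```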
The OpenVINO Runtime API may present certain limitations in inferring models with undefined dimensions on some hardware. See the [Features support matrix](../../../OV_Runtime_UG/supported_plugins/Device_Plugins.md) for reference.
In this case, the `--input_shape` parameter and the [reshape method](../../../OV_Runtime_UG/ShapeInference.md) can help to resolve undefined dimensions.
Sometimes, Model Optimizer is unable to convert models out-of-the-box (only the `--input_model` parameter is specified).
Such problems may relate to models with inputs of undefined ranks, or to cases of cutting off parts of a model.
In this case, input shapes must be specified explicitly with the `--input_shape` parameter.
For example, run Model Optimizer for the TensorFlow MobileNet model with the single input
and specify the input shape of `[2,300,300,3]`:
```sh
mo --input_model MobileNet.pb --input_shape [2,300,300,3]
```
If a model has multiple inputs, `--input_shape` must be used in conjunction with the `--input` parameter.
The `--input` parameter contains a list of input names, for which shapes in the same order are defined via `--input_shape`.
For example, launch Model Optimizer for the ONNX OCR model with a pair of inputs `data` and `seq_len`
and specify shapes `[3,150,200,1]` and `[3]` for them:
```sh
mo --input_model ocr.onnx --input data,seq_len --input_shape [3,150,200,1],[3]
```
Alternatively, specify input shapes, using the `--input` parameter as follows:
```sh
mo --input_model ocr.onnx --input data[3 150 200 1],seq_len[3]
```
The `--input_shape` parameter allows overriding original input shapes to ones compatible with a given model.
Dynamic shapes, i.e. with dynamic dimensions, can be replaced in the original model with static shapes for the converted model, and vice versa.
The dynamic dimension can be marked in Model Optimizer command-line as `-1` or `?`.
For example, launch Model Optimizer for the ONNX OCR model and specify dynamic batch dimension for inputs:
```sh
mo --input_model ocr.onnx --input data,seq_len --input_shape [-1,150,200,1],[-1]
@@ -53,7 +52,7 @@ mo --input_model ocr.onnx --input data,seq_len --input_shape [-1,150,200,1],[-1]
To optimize memory consumption for models with undefined dimensions in run-time, Model Optimizer provides the capability to define boundaries of dimensions.
The boundaries of undefined dimension can be specified with ellipsis.
For example, launch Model Optimizer for the ONNX OCR model and specify a boundary for the batch dimension:
```sh
mo --input_model ocr.onnx --input data,seq_len --input_shape [1..3,150,200,1],[1..3]
@@ -61,20 +60,20 @@ mo --input_model ocr.onnx --input data,seq_len --input_shape [1..3,150,200,1],[1
In practice, some models are not ready for input shape changes.
In this case, a new input shape cannot be set via Model Optimizer.
For more information, refer to the [inference troubleshooting](@ref troubleshooting_reshape_errors) and [ways to relax shape inference flow](@ref how-to-fix-non-reshape-able-model) guides.
## Specifying --static_shape Command-line Parameter
Model Optimizer provides the `--static_shape` parameter that allows evaluating shapes of all operations in the model for fixed input shapes
and folding shape computing sub-graphs into constants. The resulting IR may be more compact in size and the loading time for such IR may decrease.
However, the resulting IR will not be reshape-able with the help of the [reshape method](../../../OV_Runtime_UG/ShapeInference.md) from OpenVINO Runtime API.
It is worth noting that the `--input_shape` parameter does not affect reshapeability of the model.
For example, launch Model Optimizer for the ONNX OCR model using `--static_shape`:
```sh
mo --input_model ocr.onnx --input data[3 150 200 1],seq_len[3] --static_shape
```
## Additional Resources
* [Introduction to converting models with Model Optimizer](../../Deep_Learning_Model_Optimizer_DevGuide.md)
* [Cutting Off Parts of a Model](Cutting_Model.md)


@@ -1,16 +1,16 @@
# Cutting Off Parts of a Model {#openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model}
Sometimes, it is necessary to remove parts of a model when converting it to OpenVINO IR. This chapter describes how to do it, using Model Optimizer command-line options. Model cutting applies mostly to TensorFlow models, which is why TensorFlow will be used in this chapter's examples, but it may be also useful for other frameworks.
## Purpose of Model Cutting
The following examples describe situations when model cutting is useful or even required:
* A model has pre- or post-processing parts that cannot be translated to existing OpenVINO operations.
* A model has a training part that is convenient to be kept in the model but not used during inference.
* A model is too complex to be converted at once, because it contains a lot of unsupported operations that cannot be easily implemented as custom layers.
* A problem occurs with model conversion in Model Optimizer or inference in OpenVINO Runtime. To identify the issue, limit the conversion scope by iterative search for problematic areas in the model.
* A single custom layer or a combination of custom layers is isolated for debugging purposes.
## Command-Line Options
@@ -19,21 +19,21 @@ Model Optimizer provides command line options `--input` and `--output` to specif
* `--input` option accepts a comma-separated list of layer names of the input model that should be treated as new entry points to the model.
* `--output` option accepts a comma-separated list of layer names of the input model that should be treated as new exit points from the model.
The `--input` option is required for cases unrelated to model cutting. For example, when the model contains several inputs and `--input_shape` or `--mean_values` options are used, the `--input` option specifies the order of input nodes for correct mapping between multiple items provided in `--input_shape` and `--mean_values` and the inputs in the model.
Model cutting is illustrated with the Inception V1 model, found in the `models/research/slim` repository. To proceed with this chapter, make sure you do the necessary steps to [prepare the model for Model Optimizer](Converting_Model.md).
## Default Behavior without --input and --output
The input model is converted as a whole if neither `--input` nor `--output` command line options are used. All `Placeholder` operations in a TensorFlow graph are automatically identified as entry points. The `Input` layer type is generated for each of them. All nodes that have no consumers are automatically identified as exit points.
For Inception_V1, there is one `Placeholder`: input. If the model is viewed in TensorBoard, the input operation is easy to find:
![Placeholder in Inception V1](../../img/inception_v1_std_input.png)
`Reshape` is the only output operation, which is enclosed in a nested name scope of `InceptionV1/Logits/Predictions`, under the full name of `InceptionV1/Logits/Predictions/Reshape_1`.
In TensorBoard, along with some of its predecessors, it looks as follows:
![TensorBoard with predecessors](../../img/inception_v1_std_output.png)
@@ -41,7 +41,7 @@ Convert this model and put the results in a writable output directory:
```sh
mo --input_model inception_v1.pb -b 1 --output_dir <OUTPUT_MODEL_DIR>
```
(The other examples on this page assume that you first go to the `model_optimizer` directory and add the `--output_dir` argument with a directory where you have read/write permissions.)
The output `.xml` file with an Intermediate Representation contains the `Input` layer among other layers in the model:
```xml
@@ -78,7 +78,7 @@ The last layer in the model is `InceptionV1/Logits/Predictions/Reshape_1`, which
</output>
</layer>
```
Due to automatic identification of inputs and outputs, providing the `--input` and `--output` options to convert the whole model is not required. The following commands are equivalent for the Inception V1 model:
```sh
mo --input_model inception_v1.pb -b 1 --output_dir <OUTPUT_MODEL_DIR>
@@ -88,7 +88,7 @@ The Intermediate Representations are identical for both conversions. The same is
## Model Cutting
Now, consider how to cut some parts of the model off. This chapter uses the first convolution block `InceptionV1/InceptionV1/Conv2d_1a_7x7` of the Inception V1 model to illustrate cutting:
![Inception V1 first convolution block](../../img/inception_v1_first_block.png)
@@ -138,7 +138,7 @@ If you want to cut your model at the end, you have the following options:
</edges>
</net>
```
As you can see in the TensorBoard picture, the original model has more nodes than Intermediate Representation. Model Optimizer has fused batch normalization `InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm` to the convolution `InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`, and it is not present in the final Intermediate Representation. This is not an effect of the `--output` option, it is usual behavior of the Model Optimizer for batch normalizations and convolutions. The effect of the `--output` is that the `ReLU` layer becomes the last one in the converted model.
As shown in the TensorBoard picture, the original model has more nodes than its Intermediate Representation. Model Optimizer has fused batch normalization `InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm` with convolution `InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`, which is why it is not present in the final model. This is not an effect of the `--output` option; it is the typical behavior of Model Optimizer for batch normalizations and convolutions. The effect of `--output` is that the `ReLU` layer becomes the last one in the converted model.
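The arithmetic behind this fusion can be sketched with scalars (a simplified illustration with made-up values; in the real model the fold is applied per output channel):
```python
import numpy as np

# BatchNorm computes y = gamma * (x - mean) / sqrt(var + eps) + beta.
# When x is the output of a convolution, the normalization folds into
# the convolution weights and bias.
gamma, beta = 1.5, 0.1           # illustrative BatchNorm scale and shift
mean, var, eps = 0.2, 0.9, 1e-3  # illustrative BatchNorm statistics
W, b = np.random.randn(3, 3), 0.05

scale = gamma / np.sqrt(var + eps)
W_folded = W * scale
b_folded = (b - mean) * scale + beta
# conv(x, W_folded) + b_folded now equals BatchNorm(conv(x, W) + b).
```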
2. The following command cuts the edge that comes from 0 output port of the `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` and the rest of the model, making this node the last one in the model:
```sh
@@ -182,7 +182,7 @@ If you want to cut your model at the end, you have the following options:
</edges>
</net>
```
This type of cutting is useful to cut edges in case of multiple output edges.
This type of cutting is useful for cutting multiple output edges.
3. The following command cuts the edge that comes to 0 input port of the `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` and the rest of the model including `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu`, deleting this node and making the previous node `InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D` the last in the model:
```sh
@@ -222,7 +222,7 @@ If you want to cut your model at the end, you have the following options:
If you want to go further and cut the beginning of the model, leaving only the `ReLU` layer, you have the following options:
1. You can use the following command line, where `--input` and `--output` specify the same node in the graph:
1. Use the following command line, where `--input` and `--output` specify the same node in the graph:
```sh
mo --input_model=inception_v1.pb -b 1 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --input InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
```
@@ -250,11 +250,11 @@ If you want to go further and cut the beginning of the model, leaving only the `
</edges>
</net>
```
`Input` layer is automatically created to feed the layer that is converted from the node specified in `--input`, which is `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` in this case. Model Optimizer does not replace the `ReLU` node by the `Input` layer, it produces such Intermediate Representation to make the node be the first executable node in the final Intermediate Representation. So the Model Optimizer creates enough `Inputs` to feed all input ports of the node that is passed in `--input`.<br>
Even though `--input_shape` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow* model to the point at which the new input is defined. It has the same shape [1,64,112,112] as the model converted as a whole or without cutting off the beginning.
`Input` layer is automatically created to feed the layer that is converted from the node specified in `--input`, which is `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` in this case. Model Optimizer does not replace the `ReLU` node with the `Input` layer. It produces such an Intermediate Representation to make the node the first executable node in the final Intermediate Representation. Therefore, Model Optimizer creates enough `Inputs` to feed all input ports of the node that is passed in `--input`.<br>
Even though `--input_shape` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow model to the point at which the new input is defined. It has the same shape [1,64,112,112] as the model converted as a whole or without cutting off the beginning.
2. You can cut edge incoming to layer by port number. To specify incoming port use notation `--input=port:input_node`.
So, to cut everything before `ReLU` layer, cut edge incoming in port 0 of `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` node:
2. Cut the edge incoming to a layer by port number. To specify the incoming port, use the following notation: `--input=port:input_node`.
To cut everything before the `ReLU` layer, cut the edge incoming to port 0 of the `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` node:
```sh
mo --input_model inception_v1.pb -b 1 --input 0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
```
@@ -282,11 +282,11 @@ So, to cut everything before `ReLU` layer, cut edge incoming in port 0 of `Incep
</edges>
</net>
```
`Input` layer is automatically created to feed the layer that is converted from the node specified in `--input`, which is `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` in this case. Model Optimizer does not replace the `ReLU` node by the `Input` layer, it produces such Intermediate Representation to make the node be the first executable node in the final Intermediate Representation. So the Model Optimizer creates enough `Inputs` to feed all input ports of the node that is passed in `--input`.<br>
Even though `--input_shape` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow* model to the point at which the new input is defined. It has the same shape [1,64,112,112] as the model converted as a whole or without cutting off the beginning.
`Input` layer is automatically created to feed the layer that is converted from the node specified in `--input`, which is `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` in this case. Model Optimizer does not replace the `ReLU` node with the `Input` layer. It produces such an Intermediate Representation to make the node the first executable node in the final Intermediate Representation. Therefore, Model Optimizer creates enough `Inputs` to feed all input ports of the node that is passed in `--input`.<br>
Even though `--input_shape` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow model to the point at which the new input is defined. It has the same shape [1,64,112,112] as the model converted as a whole or without cutting off the beginning.
3. You can cut edge outcoming from layer by port number. To specify outcoming port use notation `--input=input_node:port`.
So, to cut everything before `ReLU` layer, cut edge from `InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1` node to `ReLU`:
3. Cut the edge outgoing from a layer by port number. To specify the outgoing port, use the following notation: `--input=input_node:port`.
To cut everything before the `ReLU` layer, cut the edge from the `InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1` node to `ReLU`:
```sh
mo --input_model inception_v1.pb -b 1 --input InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1:0 --output InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output_dir <OUTPUT_MODEL_DIR>
```
@@ -354,18 +354,18 @@ gives the following shapes in the `Input` and `ReLU` layers:
</output>
</layer>
```
An input shape [1,20,5,10] in the final Intermediate Representation differs from the shape [1,5,10,20] specified in the command line, because the original TensorFlow\* model uses NHWC layout, but the Intermediate Representation uses NCHW layout. So usual NHWC to NCHW layout conversion occurred.
An input shape [1,20,5,10] in the final Intermediate Representation differs from the shape [1,5,10,20] specified in the command line, because the original TensorFlow model uses NHWC layout, but the Intermediate Representation uses NCHW layout. Thus, usual NHWC to NCHW layout conversion occurred.
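As a quick sanity check, the conversion is just a permutation of the dimensions:
```python
nhwc = [1, 5, 10, 20]      # shape passed via --input_shape (NHWC)
n, h, w, c = nhwc
nchw = [n, c, h, w]        # shape stored in the Intermediate Representation (NCHW)
assert nchw == [1, 20, 5, 10]
```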
When `--input_shape` is specified, shape inference inside the Model Optimizer is not performed for the nodes in the beginning of the model that are not included in the translated region. It differs from the case when `--input_shape` is not specified as noted in the previous section where the shape inference is still performed for such nodes to deduce shape for the layers that should fall into the final Intermediate Representation. So `--input_shape` should be used for a model with a complex graph with loops, which are not supported by the Model Optimizer, to exclude such parts from the Model Optimizer shape inference process completely.
When `--input_shape` is specified, shape inference inside Model Optimizer is not performed for the nodes in the beginning of the model that are not included in the translated region. It differs from the case when `--input_shape` is not specified, as noted in the previous section, where shape inference is still performed for such nodes to deduce shapes for the layers that should fall into the final Intermediate Representation. Therefore, `--input_shape` should be used for a model with a complex graph with loops, which are not supported by Model Optimizer, to exclude such parts from the Model Optimizer shape inference process completely.
## Inputs with Multiple Input Ports
There are operations that contain more than one input ports. In the example considered here, the convolution `InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution` is such operation. When `--input_shape` is not provided, a new `Input` layer is created for each dynamic input port for the node. If a port is evaluated to a constant blob, this constant remains in the model and a corresponding input layer is not created. TensorFlow convolution used in this model contains two ports:
There are operations that contain more than one input port. In the example considered here, the convolution `InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution` is such an operation. When `--input_shape` is not provided, a new `Input` layer is created for each dynamic input port of the node. If a port is evaluated to a constant blob, this constant remains in the model and a corresponding input layer is not created. The TensorFlow convolution used in this model contains two ports:
* port 0: input tensor for convolution (dynamic)
* port 1: convolution weights (constant)
Following this behavior, the Model Optimizer creates an `Input` layer for port 0 only, leaving port 1 as a constant. So the result of:
Following this behavior, Model Optimizer creates an `Input` layer for port 0 only, leaving port 1 as a constant. Thus, the result of:
```sh
mo --input_model inception_v1.pb -b 1 --input InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --output_dir <OUTPUT_MODEL_DIR>
@@ -377,13 +377,13 @@ Different behavior occurs when `--input_shape` is also used as an attempt to ove
```sh
mo --input_model inception_v1.pb --input=InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape [1,224,224,3] --output_dir <OUTPUT_MODEL_DIR>
```
An error occurs (for more information, see <a href="MO_FAQ.html#FAQ30">FAQ #30</a>):
An error occurs (for more information, see the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md#FAQ30)):
```sh
[ ERROR ] Node InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution has more than 1 input and input shapes were provided.
Try not to provide input shapes or specify input port with PORT:NODE notation, where PORT is an integer.
For more information, see FAQ #30
```
In this case, when `--input_shape` is specified and the node contains multiple input ports, you need to specify an input port index together with an input node name. The input port index is specified in front of the node name with ':' as a separator (`PORT:NODE`). In the considered case, the port index 0 of the node `InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution` should be specified as `0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`.
When `--input_shape` is specified and the node contains multiple input ports, you need to provide an input port index together with an input node name. The input port index is specified in front of the node name with ':' as a separator (`PORT:NODE`). In this case, the port index 0 of the node `InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution` should be specified as `0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`.
The correct command line is:
```sh
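# Assumed reconstruction of the command truncated in this view,
# based on the PORT:NODE notation explained above:
mo --input_model inception_v1.pb --input 0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape [1,224,224,3] --output_dir <OUTPUT_MODEL_DIR>
```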

View File

@@ -2,32 +2,32 @@
## Introduction
OpenVINO Runtime CPU and GPU devices can infer models in the low precision.
For details, refer to [Model Optimization Guide](@ref openvino_docs_model_optimization_guide).
OpenVINO Runtime CPU and GPU devices can infer models in low precision.
For more details, refer to the [Model Optimization Guide](@ref openvino_docs_model_optimization_guide).
Intermediate Representation (IR) should be specifically formed to be suitable for low precision inference.
Such an IR is called a Low Precision IR and you can generate it in two ways:
- [Quantize regular IR with the Post-Training Optimization tool](@ref pot_introduction)
- Use the Model Optimizer for a model pretrained for Low Precision inference: TensorFlow\* pre-TFLite models (`.pb` model file with `FakeQuantize*` operations) and ONNX\* quantized models.
Both TensorFlow and ONNX quantized models could be prepared by [Neural Network Compression Framework](https://github.com/openvinotoolkit/nncf/blob/develop/README.md).
Intermediate Representation should be specifically formed to be suitable for low precision inference.
Such a model is called a Low Precision IR and can be generated in two ways:
- By [quantizing regular IR with the Post-Training Optimization tool](@ref pot_introduction)
- By using Model Optimizer for a model pretrained for Low Precision inference: TensorFlow pre-TFLite models (`.pb` model file with `FakeQuantize*` operations) and ONNX quantized models.
Both TensorFlow and ONNX quantized models can be prepared by [Neural Network Compression Framework](https://github.com/openvinotoolkit/nncf/blob/develop/README.md).
For an operation to be executed in INT8, it must have `FakeQuantize` operations as inputs.
See the [specification of `FakeQuantize` operation](../../../ops/quantization/FakeQuantize_1.md) for details.
For more details, see the [specification of `FakeQuantize` operation](../../../ops/quantization/FakeQuantize_1.md).
To execute the `Convolution` operation in INT8 on CPU, both data and weight inputs should have `FakeQuantize` as an input operation:
![](../../img/expanded_int8_Convolution_weights.png)
Low precision IR is also suitable for FP32 and FP16 inference if a chosen plugin supports all operations of the IR, because the only difference between a Low Precision IR and FP16 or FP32 IR is the existence of `FakeQuantize` in the Low Precision IR.
Plugins with Low Precision Inference support recognize these sub-graphs and quantize them during the inference time.
Plugins without Low Precision support execute all operations, including `FakeQuantize`, as is in the FP32 or FP16 precision.
Low precision IR is also suitable for FP32 and FP16 inference if a chosen plugin supports all operations of the IR. The only difference between a Low Precision IR and FP16 or FP32 IR is the existence of `FakeQuantize` in the Low Precision IR.
Plugins that support Low Precision Inference recognize these sub-graphs and quantize them during inference.
Plugins without such support execute all operations, including `FakeQuantize`, as-is, in FP32 or FP16 precision.
Accordingly, the presence of FakeQuantize operations in the IR is a recommendation for a plugin on how to quantize particular operations in the model.
If capable, a plugin accepts the recommendation and performs Low Precision Inference, otherwise, the plugin ignores the recommendation and executes a model in the floating-point precision.
Consequently, the presence of `FakeQuantize` operations in an OpenVINO IR suggests to the inference device how to quantize particular operations in the model.
If the device is capable, it accepts the suggestion and performs Low Precision Inference. If not, it executes the model in the floating-point precision.
## Compressed Low Precision Weights
Weighted operations, like `Convolution`, `MatMul`, and others, store weights as floating-point `Constant` in the graph followed by the `FakeQuantize` operation.
`Constant` followed by the `FakeQuantize` operation could be optimized memory-wise due to the `FakeQuantize` operation semantics.
Weighted operations, such as `Convolution` and `MatMul`, store weights as the floating-point `Constant` in the graph followed by the `FakeQuantize` operation.
The `Constant` followed by the `FakeQuantize` operation could be optimized memory-wise due to the `FakeQuantize` operation semantics.
The resulting weights sub-graph stores weights in Low Precision `Constant`, which gets unpacked back to floating point with the `Convert` operation.
Weights compression replaces `FakeQuantize` with optional `Subtract` and `Multiply` operations, leaving the output arithmetically the same, while storing the weights takes four times less memory.
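Numerically, the compressed weights round-trip back to approximately the original values (a sketch with illustrative quantization parameters; the real scale and zero point come from the `FakeQuantize` attributes):
```python
import numpy as np

scale, zero_point = 0.05, 8  # illustrative quantization parameters
w_fp32 = np.random.randn(64).astype(np.float32)

# Stored in the IR: a low precision (INT8) Constant.
w_int8 = np.clip(np.round(w_fp32 / scale) + zero_point, -128, 127).astype(np.int8)

# Unpacked at load time: Convert -> (optional) Subtract -> Multiply.
w_restored = (w_int8.astype(np.float32) - zero_point) * scale
assert np.allclose(w_restored, w_fp32, atol=scale)
```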

View File

@@ -1,45 +1,45 @@
# Convert Kaldi* ASpIRE Chain Time Delay Neural Network (TDNN) Model {#openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model}
# Converting a Kaldi ASpIRE Chain Time Delay Neural Network (TDNN) Model {#openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model}
You can [download a pre-trained model](https://kaldi-asr.org/models/1/0001_aspire_chain_model.tar.gz)
for the ASpIRE Chain Time Delay Neural Network (TDNN) from the Kaldi* project official website.
First, [download a pre-trained model](https://kaldi-asr.org/models/1/0001_aspire_chain_model.tar.gz)
for the ASpIRE Chain Time Delay Neural Network (TDNN) from the Kaldi project official website.
## Convert ASpIRE Chain TDNN Model to IR
## Converting an ASpIRE Chain TDNN Model to IR
To generate the Intermediate Representation (IR) of the model, run the Model Optimizer with the following parameters:
Generate the Intermediate Representation of the model by running Model Optimizer with the following parameters:
```sh
mo --input_model exp/chain/tdnn_7b/final.mdl --output output
```
The IR will have two inputs: `input` for data and `ivector` for ivectors.
The IR will have two inputs: `input` for data, and `ivector` for ivectors.
## Example: Run ASpIRE Chain TDNN Model with the Speech Recognition Sample
## Example: Running ASpIRE Chain TDNN Model with the Speech Recognition Sample
These instructions show how to run the converted model with the [Speech Recognition sample](../../../../../samples/cpp/speech_sample/README.md).
In this example, the input data contains one utterance from one speaker.
> **NOTE**: Before you continue with this part of the article, get familiar with the [Speech Recognition sample](../../../../../samples/cpp/speech_sample/README.md).
To follow the steps described below, you must first do the following:
In this example, the input data contains one utterance from one speaker.
To run the ASpIRE Chain TDNN model with the Speech Recognition sample, you need to prepare the environment. Do it by following the steps below:
1. Download a [Kaldi repository](https://github.com/kaldi-asr/kaldi).
2. Build it using instructions in `README.md` in the repository.
2. Build it by following the instructions in `README.md` from the repository.
3. Download the [model archive](https://kaldi-asr.org/models/1/0001_aspire_chain_model.tar.gz) from Kaldi website.
4. Extract the downloaded model archive to the `egs/aspire/s5` folder of the Kaldi repository.
To run the ASpIRE Chain TDNN Model with Speech Recognition sample:
Once everything is prepared, you can run the converted model:
1. Prepare the model for decoding. Refer to the `README.txt` file from the downloaded model archive for instructions.
2. Convert data and ivectors to `.ark` format. Refer to the corresponding sections below for instructions.
### Prepare Data
### Preparing Data
If you have a `.wav` data file, you can convert it to `.ark` format using the following command:
If you have a `.wav` data file, convert it to the `.ark` format using the following command:
```sh
<path_to_kaldi_repo>/src/featbin/compute-mfcc-feats --config=<path_to_kaldi_repo>/egs/aspire/s5/conf/mfcc_hires.conf scp:./wav.scp ark,scp:feats.ark,feats.scp
```
Add the `feats.ark` absolute path to `feats.scp` to avoid errors in later commands.
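One way to do that is with a small helper script (a sketch, assuming `feats.scp` entries have the usual `utt_id path[:offset]` form and the referenced files live in the current directory):
```python
import os

# Rewrite each "utt_id path[:offset]" entry in feats.scp with an absolute path.
with open("feats.scp") as f:
    entries = [line.split(maxsplit=1) for line in f if line.strip()]

with open("feats.scp", "w") as f:
    for utt_id, location in entries:
        f.write(f"{utt_id} {os.path.abspath(location.strip())}\n")
```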
### Prepare Ivectors
### Preparing Ivectors
To prepare ivectors for the Speech Recognition sample, do the following:
Prepare ivectors for the Speech Recognition sample:
1. Copy the `feats.scp` file to the `egs/aspire/s5/` directory of the built Kaldi repository and navigate there:
```sh
@@ -51,29 +51,27 @@ cd <path_to_kaldi_repo>/egs/aspire/s5/
```sh
./steps/online/nnet2/extract_ivectors_online.sh --nj 1 --ivector_period <max_frame_count_in_utterance> <data folder> exp/tdnn_7b_chain_online/ivector_extractor <ivector folder>
```
To simplify the preparation of ivectors for the Speech Recognition sample,
specify the maximum number of frames in utterances as a parameter for `--ivector_period`
You can simplify the preparation of ivectors for the Speech Recognition sample. To do so, specify the maximum number of frames in utterances as a parameter for `--ivector_period`
to get only one ivector per utterance.
To get the maximum number of frames in utterances, you can use the following command line:
To get the maximum number of frames in utterances, use the following command line:
```sh
../../../src/featbin/feat-to-len scp:feats.scp ark,t: | cut -d' ' -f 2 - | sort -rn | head -1
```
As a result, in `<ivector folder>`, you will find the `ivector_online.1.ark` file.
As a result, you will find the `ivector_online.1.ark` file in `<ivector folder>`.
3. Go to the `<ivector folder>`:
```sh
cd <ivector folder>
```
4. Convert the `ivector_online.1.ark` file to text format using the `copy-feats` tool. Run the following command:
4. Convert the `ivector_online.1.ark` file to text format, using the `copy-feats` tool. Run the following command:
```sh
<path_to_kaldi_repo>/src/featbin/copy-feats --binary=False ark:ivector_online.1.ark ark,t:ivector_online.1.ark.txt
```
5. For the Speech Recognition sample, the `.ark` file must contain an ivector
for each frame. You must copy the ivector `frame_count` times.
To do this, you can run the following script in the Python* command prompt:
for each frame. Copy the ivector `frame_count` times by running the script below in the Python command prompt:
```python
import subprocess
@@ -101,12 +99,12 @@ length_file.close()
<path_to_kaldi_repo>/src/featbin/copy-feats --binary=True ark,t:ivector_online_ie.ark.txt ark:ivector_online_ie.ark
```
### Run the Speech Recognition Sample
### Running the Speech Recognition Sample
Run the Speech Recognition sample with the created ivector `.ark` file as follows:
Run the Speech Recognition sample with the created ivector `.ark` file:
```sh
speech_sample -i feats.ark,ivector_online_ie.ark -m final.xml -d CPU -o prediction.ark -cw_l 17 -cw_r 12
```
Results can be decoded as described in "Use of Sample in Kaldi* Speech Recognition Pipeline" chapter
in [the Speech Recognition Sample description](../../../../../samples/cpp/speech_sample/README.md).
Results can be decoded as described in "Use of Sample in Kaldi Speech Recognition Pipeline"
in the [Speech Recognition Sample description](../../../../../samples/cpp/speech_sample/README.md) article.

View File

@@ -1,6 +1,6 @@
# Convert MXNet GluonCV* Models {#openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models}
# Converting MXNet GluonCV Models {#openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models}
This document provides the instructions and examples on how to use Model Optimizer to convert [GluonCV SSD and YOLO-v3 models](https://gluon-cv.mxnet.io/model_zoo/detection.html) to IR.
This article provides the instructions and examples on how to use Model Optimizer to convert [GluonCV SSD and YOLO-v3 models](https://gluon-cv.mxnet.io/model_zoo/detection.html) to IR.
1. Choose the topology available from the [GluonCV Model Zoo](https://gluon-cv.mxnet.io/model_zoo/detection.html) and export to the MXNet format using the GluonCV API. For example, for the `ssd_512_mobilenet1.0` topology:
```python
@@ -10,7 +10,7 @@ net = model_zoo.get_model('ssd_512_mobilenet1.0_voc', pretrained=True)
export_block('ssd_512_mobilenet1.0_voc', net, preprocess=True, layout='HWC')
```
As a result, you will get an MXNet model representation in `ssd_512_mobilenet1.0.params` and `ssd_512_mobilenet1.0.json` files generated in the current directory.
2. Run the Model Optimizer tool specifying the `--enable_ssd_gluoncv` option. Make sure the `--input_shape` parameter is set to the input shape layout of your model (NHWC or NCHW). The examples below illustrates running the Model Optimizer for the SSD and YOLO-v3 models trained with the NHWC layout and located in the `<model_directory>`:
2. Run the Model Optimizer tool, specifying the `--enable_ssd_gluoncv` option. Make sure the `--input_shape` parameter is set to the input shape layout of your model (NHWC or NCHW). The examples below illustrate running the Model Optimizer for the SSD and YOLO-v3 models trained with the NHWC layout and located in the `<model_directory>`:
* **For GluonCV SSD topologies:**
```sh
mo --input_model <model_directory>/ssd_512_mobilenet1.0.params --enable_ssd_gluoncv --input_shape [1,512,512,3] --input data --output_dir <OUTPUT_MODEL_DIR>

View File

@@ -1,41 +1,41 @@
# Convert MXNet Style Transfer Model {#openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet}
# Converting an MXNet Style Transfer Model {#openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet}
The tutorial explains how to generate a model for style transfer using the public MXNet\* neural style transfer sample.
To use the style transfer sample from OpenVINO&trade;, follow the steps below as no public pre-trained style transfer model is provided with the OpenVINO toolkit.
This article provides instructions on how to generate a model for style transfer, using the public MXNet neural style transfer sample.
#### 1. Download or clone the repository with an MXNet neural style transfer sample: [Zhaw's Neural Style Transfer repository](https://github.com/zhaw/neural_style).
**Step 1**: Download or clone [Zhaw's Neural Style Transfer repository](https://github.com/zhaw/neural_style), which contains an MXNet neural style transfer sample.
#### 2. Prepare the environment required to work with the cloned repository:
1. Install packages dependency:<br>
**Step 2**: Prepare the environment required to work with the cloned repository:
> **NOTE**: Python-tk installation is needed only for Linux. Python for Windows includes it by default.
1. Install the package dependencies.<br>
```sh
sudo apt-get install python-tk
```
Installing python-tk step is needed only for Linux, as it is included by default in Python\* for Windows\*.
2. Install Python\* requirements:
2. Install Python requirements:
```sh
pip3 install --user mxnet
pip3 install --user matplotlib
pip3 install --user scikit-image
```
#### 3. Download the pre-trained [VGG19 model](https://github.com/dmlc/web-data/raw/master/mxnet/neural-style/model/vgg19.params) and save it to the root directory of the cloned repository because the sample expects the model `vgg19.params` file to be in that directory.<br>
**Step 3**: Download the pretrained [VGG19 model](https://github.com/dmlc/web-data/raw/master/mxnet/neural-style/model/vgg19.params) and save it to the root directory of the cloned repository. The sample expects the model `vgg19.params` file to be in that directory.<br>
#### 4. Modify source code files of style transfer sample from cloned repository.<br>
**Step 4**: Modify source code files of style transfer sample from the cloned repository:<br>
1. Go to the `fast_mrf_cnn` subdirectory.
```sh
cd ./fast_mrf_cnn
```
2. Open the `symbol.py` file and modify the `decoder_symbol()` function. Replace.
2. Open the `symbol.py` file and modify the `decoder_symbol()` function. You should see the following code there:
```py
def decoder_symbol():
data = mx.sym.Variable('data')
data = mx.sym.Convolution(data=data, num_filter=256, kernel=(3,3), pad=(1,1), stride=(1, 1), name='deco_conv1')
```
with the following code:<br>
Replace the code above with the following:<br>
```py
def decoder_symbol_with_vgg(vgg_symbol):
data = mx.sym.Convolution(data=vgg_symbol, num_filter=256, kernel=(3,3), pad=(1,1), stride=(1, 1), name='deco_conv1')
@@ -43,64 +43,64 @@ def decoder_symbol_with_vgg(vgg_symbol):
3. Save and close the `symbol.py` file.
4. Open and edit the `make_image.py` file:
Modify the `__init__()` function in the `Maker` class. Replace:<br>
4. Open and edit the `make_image.py` file. Go to the `__init__()` function in the `Maker` class:<br>
```py
decoder = symbol.decoder_symbol()
```
with the following code:<br>
Replace it with the following code:<br>
```py
decoder = symbol.decoder_symbol_with_vgg(vgg_symbol)
```
5. To join the pre-trained weights with the decoder weights, make the following changes:
After the code lines for loading the decoder weights:<br>
```py
args = mx.nd.load('%s_decoder_args.nd'%model_prefix)
auxs = mx.nd.load('%s_decoder_auxs.nd'%model_prefix)
```
add the following line:<br>
```py
arg_dict.update(args)
```
5. To join the pretrained weights with the decoder weights, make the following changes:
After the code lines for loading the decoder weights:<br>
```py
args = mx.nd.load('%s_decoder_args.nd'%model_prefix)
auxs = mx.nd.load('%s_decoder_auxs.nd'%model_prefix)
```
6. Use `arg_dict` instead of `args` as a parameter of the `decoder.bind()` function. Replace the line:<br>
Add the following line:<br>
```py
arg_dict.update(args)
```
6. Use `arg_dict` instead of `args` as a parameter of the `decoder.bind()` function. Find the line below:<br>
```py
self.deco_executor = decoder.bind(ctx=mx.gpu(), args=args, aux_states=auxs)
```
with the following:<br>
Replace it with the following:<br>
```py
self.deco_executor = decoder.bind(ctx=mx.cpu(), args=arg_dict, aux_states=auxs)
```
7. To save the result model as a `.json` file, add the following code to the end of the `generate()` function in the `Maker` class:<br>
7. Add the following code to the end of the `generate()` function in the `Maker` class to save the result model as a `.json` file:<br>
```py
self.vgg_executor._symbol.save('{}-symbol.json'.format('vgg19'))
self.deco_executor._symbol.save('{}-symbol.json'.format('nst_vgg19'))
```
8. Save and close the `make_image.py` file.
#### 5. Run the sample with a decoder model according to the instructions from the `README.md` file in the `fast_mrf_cnn` directory of the cloned repository.
For example, to run the sample with the pre-trained decoder weights from the `models` folder and output shape, use the following code:<br>
**Step 5**: Follow the instructions from the `README.md` file in the `fast_mrf_cnn` directory of the cloned repository and run the sample with a decoder model.
For example, use the following code to run the sample with the pretrained decoder weights from the `models` folder and output shape:<br>
```py
import make_image
maker = make_image.Maker('models/13', (1024, 768))
maker.generate('output.jpg', '../images/tubingen.jpg')
```
Where the `models/13` string is composed of the following substrings:
* `models/`: path to the folder that contains .nd files with pre-trained styles weights
* `13`: prefix pointing to 13_decoder, which is the default decoder for the repository.
The `models/13` string in the code above is composed of the following substrings:
* `models/` -- path to the folder that contains `.nd` files with pretrained styles weights.
* `13` -- prefix pointing to the default decoder for the repository, `13_decoder`.
> **NOTE**: If you get an error saying "No module named 'cPickle'", try running the script from this step in Python 2. Then return to Python 3 for the remaining steps.
> **NOTE**: If you get a "No module named `cPickle`" error, try running the script from Step 5 in Python 2. After that, return to Python 3 for the remaining steps.
You can choose any style from [collection of pre-trained weights](https://pan.baidu.com/s/1skMHqYp). (On the Chinese-language page, click the down arrow next to a size in megabytes. Then wait for an overlay box to appear, and click the blue button in it to download.) The `generate()` function generates `nst_vgg19-symbol.json` and `vgg19-symbol.json` files for the specified shape. In the code, it is [1024 x 768] for a 4:3 ratio, and you can specify another, for example, [224,224] for a square ratio.
Any style can be selected from the [collection of pretrained weights](https://pan.baidu.com/s/1skMHqYp). On the Chinese-language page, click the down arrow next to a size in megabytes. Then wait for an overlay box to appear, and click the blue button in it to download. The `generate()` function generates `nst_vgg19-symbol.json` and `vgg19-symbol.json` files for the specified shape. In the code, it is [1024 x 768] for a 4:3 ratio. You can specify another shape, for example, [224,224] for a square ratio.
#### 6. Run the Model Optimizer to generate an Intermediate Representation (IR):
**Step 6**: Run the Model Optimizer to generate an Intermediate Representation (IR):
1. Create a new directory. For example:<br>
```sh
mkdir nst_model
```
2. Copy the initial and generated model files to the created directory. For example, to copy the pre-trained decoder weights from the `models` folder to the `nst_model` directory, run the following commands:<br>
2. Copy the initial and generated model files to the created directory. For example, to copy the pretrained decoder weights from the `models` folder to the `nst_model` directory, run the following commands:<br>
```sh
cp nst_vgg19-symbol.json nst_model
cp vgg19-symbol.json nst_model
@@ -110,8 +110,8 @@ cp models/13_decoder_auxs.nd nst_model
```
> **NOTE**: Make sure that all the `.params` and `.json` files are in the same directory as the `.nd` files. Otherwise, the conversion process fails.
3. Run the Model Optimizer for MXNet. Use the `--nd_prefix_name` option to specify the decoder prefix and `--input_shape` to specify input shapes in [N,C,W,H] order. For example:<br>
3. Run the Model Optimizer for Apache MXNet. Use the `--nd_prefix_name` option to specify the decoder prefix and `--input_shape` to specify input shapes in [N,C,W,H] order. For example:<br>
```sh
mo --input_symbol <path/to/nst_model>/nst_vgg19-symbol.json --framework mxnet --output_dir <path/to/output_dir> --input_shape [1,3,224,224] --nd_prefix_name 13_decoder --pretrained_model <path/to/nst_model>/vgg19-0000.params
```
4. The IR is generated (`.bin`, `.xml` and `.mapping` files) in the specified output directory and ready to be consumed by the OpenVINO Runtime.
4. The IR is generated (`.bin`, `.xml` and `.mapping` files) in the specified output directory and is ready to be consumed by the OpenVINO Runtime.

View File

@@ -1,10 +1,11 @@
# Convert ONNX* Faster R-CNN Model {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN}
# Converting an ONNX Faster R-CNN Model {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN}
These instructions are applicable only to the Faster R-CNN model converted to the ONNX* file format from the [facebookresearch/maskrcnn-benchmark model](https://github.com/facebookresearch/maskrcnn-benchmark).
The instructions below are applicable **only** to the Faster R-CNN model converted to the ONNX file format from the [maskrcnn-benchmark model](https://github.com/facebookresearch/maskrcnn-benchmark):
**Step 1**. Download the pre-trained model file from [onnx/models](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn) (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117).
1. Download the pretrained model file from [onnx/models](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn) (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117).
**Step 2**. To generate the Intermediate Representation (IR) of the model, change your current working directory to the Model Optimizer installation directory and run the Model Optimizer with the following parameters:
2. Generate the Intermediate Representation of the model by changing your current working directory to the Model Optimizer installation directory and running Model Optimizer with the following parameters:
```sh
mo \
--input_model FasterRCNN-10.onnx \
@@ -14,6 +15,9 @@ These instructions are applicable only to the Faster R-CNN model converted to th
--transformations_config front/onnx/faster_rcnn.json
```
Note that the height and width specified with the `input_shape` command line parameter could be different. Refer to the [documentation](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn) for more information about supported input image dimensions and required pre- and post-processing steps.
Be aware that the height and width specified with the `input_shape` command line parameter could be different. For more information about supported input image dimensions and required pre- and post-processing steps, refer to the [Faster R-CNN article](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn).
**Step 3**. Interpret the outputs. The generated IR file has several outputs: class indices, probabilities and box coordinates. These are outputs from the "DetectionOutput" layer.
3. Interpret the outputs. The generated IR has several outputs, all produced by the `DetectionOutput` layer:
* class indices
* probabilities
* box coordinates
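To inspect these outputs yourself, a minimal inference sketch with the OpenVINO Python API could look as follows (the IR file name, device, and zero-filled input are assumptions for illustration, not part of the original instructions):
```python
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model("FasterRCNN-10.xml", "CPU")

# Placeholder input; a real run needs a preprocessed image of the shape
# that was passed to --input_shape during conversion.
image = np.zeros(list(compiled.input(0).shape), dtype=np.float32)

request = compiled.create_infer_request()
results = request.infer([image])

for output in compiled.outputs:
    print(output.get_any_name(), results[output].shape)
```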

View File

@@ -1,17 +1,17 @@
# Convert ONNX* GPT-2 Model {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2}
# Converting an ONNX GPT-2 Model {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2}
[Public pre-trained GPT-2 model](https://github.com/onnx/models/tree/master/text/machine_comprehension/gpt-2) is a large
[Public pretrained GPT-2 model](https://github.com/onnx/models/tree/master/text/machine_comprehension/gpt-2) is a large
transformer-based language model with a simple objective: predict the next word, given all of the previous words within some text.
## Download the Pre-Trained Base GPT-2 Model
## Downloading the Pre-Trained Base GPT-2 Model
To download the model, click **Download** on [https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/model/gpt2-10.onnx](https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/model/gpt2-10.onnx).
To download the model, go to [this model](https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/model/gpt2-10.onnx), and press **Download**.
To download the model and sample test data, click **Download** on [https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/model/gpt2-10.tar.gz](https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/model/gpt2-10.tar.gz).
To download the model and sample test data, go to [this model](https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/model/gpt2-10.tar.gz), and press **Download**.
## Convert ONNX* GPT-2 Model to IR
## Converting an ONNX GPT-2 Model to IR
To generate the Intermediate Representation (IR) of the model GPT-2, run the Model Optimizer with the following parameters:
Generate the Intermediate Representation of the model GPT-2 by running Model Optimizer with the following parameters:
```sh
mo --input_model gpt2-10.onnx --input_shape [X,Y,Z] --output_dir <OUTPUT_MODEL_DIR>
```

View File

@@ -1,10 +1,11 @@
# Convert ONNX* Mask R-CNN Model {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN}
# Converting an ONNX Mask R-CNN Model {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN}
These instructions are applicable only to the Mask R-CNN model converted to the ONNX* file format from the [facebookresearch/maskrcnn-benchmark model](https://github.com/facebookresearch/maskrcnn-benchmark).
The instructions below are applicable **only** to the Mask R-CNN model converted to the ONNX file format from the [maskrcnn-benchmark model](https://github.com/facebookresearch/maskrcnn-benchmark).
**Step 1**. Download the pre-trained model file from [onnx/models](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn) (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117).
1. Download the pretrained model file from [onnx/models](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn) (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117).
**Step 2**. To generate the Intermediate Representation (IR) of the model, change your current working directory to the Model Optimizer installation directory and run the Model Optimizer with the following parameters:
2. Generate the Intermediate Representation of the model by changing your current working directory to the Model Optimizer installation directory and running Model Optimizer with the following parameters:
```sh
mo \
--input_model mask_rcnn_R_50_FPN_1x.onnx \
@@ -14,6 +15,12 @@ These instructions are applicable only to the Mask R-CNN model converted to the
--transformations_config front/onnx/mask_rcnn.json
```
Note that the height and width specified with the `input_shape` command line parameter could be different. Refer to the [documentation](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn) for more information about supported input image dimensions and required pre- and post-processing steps.
Be aware that the height and width specified with the `input_shape` command line parameter could be different. For more information about supported input image dimensions and required pre- and post-processing steps, refer to the [documentation](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/mask-rcnn).
**Step 3**. Interpret the outputs. The generated IR file has several outputs: masks, class indices, probabilities and box coordinates. The first one is a layer with the name "6849/sink_port_0". The rest three are outputs from the "DetectionOutput" layer.
3. Interpret the outputs of the generated IR file:
* masks
* class indices
* probabilities
* box coordinates
The masks come from the layer named `6849/sink_port_0`; the rest are outputs of the `DetectionOutput` layer.

View File

@@ -1,15 +1,18 @@
# Convert PyTorch* BERT-NER Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner}
# Converting a PyTorch BERT-NER Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner}
## Download and Convert the Model to ONNX*
The goal of this article is to present a step-by-step guide on how to convert a PyTorch BERT-NER model to OpenVINO IR. First, you need to download the model and convert it to ONNX.
To download a pre-trained model or train the model yourself, refer
to the [instruction](https://github.com/kamalkraj/BERT-NER/blob/dev/README.md) in the
BERT-NER model repository. The model with config files is stored in the `out_base` directory.
To convert the model to ONNX* format, create and run the script with the following content in the root
directory of the model repository. If you download the pre-trained model, you need
## Downloading and Converting the Model to ONNX
To download a pretrained model or train the model yourself, refer
to the [instructions](https://github.com/kamalkraj/BERT-NER/blob/dev/README.md) in the
BERT-NER model repository. The model with configuration files is stored in the `out_base` directory.
To convert the model to ONNX format, create and run the following script in the root
directory of the model repository. If you download the pretrained model, you need
to download [`bert.py`](https://github.com/kamalkraj/BERT-NER/blob/dev/bert.py) to run the script.
The instruction was tested with the repository hash commit `e5be564156f194f1becb0d82aeaf6e762d9eb9ed`.
The instructions were tested with the commit-SHA: `e5be564156f194f1becb0d82aeaf6e762d9eb9ed`.
```python
import torch
@@ -44,12 +47,12 @@ torch.onnx.export(ner_model,
)
```
The script generates ONNX* model file `bert-ner.onnx`.
The script generates ONNX model file `bert-ner.onnx`.
## Convert ONNX* BERT-NER model to IR
## Converting an ONNX BERT-NER model to IR
```bash
mo --input_model bert-ner.onnx --input "input_mask[1 128],segment_ids[1 128],input_ids[1 128]"
```
where `1` is `batch_size` and `128` is `sequence_length`.
where `1` is `batch_size` and `128` is `sequence_length`.

View File

@@ -1,6 +1,8 @@
# Convert PyTorch Cascade RCNN R-101 Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Cascade_RCNN_res101}
# Converting a PyTorch Cascade RCNN R-101 Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Cascade_RCNN_res101}
## Download and Convert Model to ONNX
The goal of this article is to present a step-by-step guide on how to convert a PyTorch Cascade RCNN R-101 model to OpenVINO IR. First, you need to download the model and convert it to ONNX.
## Downloading and Converting Model to ONNX
* Clone the [repository](https://github.com/open-mmlab/mmdetection):
@@ -9,9 +11,9 @@ git clone https://github.com/open-mmlab/mmdetection
cd mmdetection
```
> **NOTE**: To set up an environment, refer to this [instruction](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/get_started.md#installation).
> **NOTE**: To set up an environment, refer to the [instructions](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/get_started.md#installation).
* Download the pre-trained [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco/cascade_rcnn_r101_fpn_1x_coco_20200317-0b6a2fbf.pth). You can also find the link to the model [here](https://github.com/open-mmlab/mmdetection/blob/master/configs/cascade_rcnn/README.md).
* Download the pretrained [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco/cascade_rcnn_r101_fpn_1x_coco_20200317-0b6a2fbf.pth). The model is also available [here](https://github.com/open-mmlab/mmdetection/blob/master/configs/cascade_rcnn/README.md).
* To convert the model to ONNX format, use this [script](https://github.com/open-mmlab/mmdetection/blob/master/tools/deployment/pytorch2onnx.py).
@@ -19,10 +21,10 @@ cd mmdetection
python3 tools/deployment/pytorch2onnx.py configs/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco.py cascade_rcnn_r101_fpn_1x_coco_20200317-0b6a2fbf.pth --output-file cascade_rcnn_r101_fpn_1x_coco.onnx
```
The script generates ONNX model file `cascade_rcnn_r101_fpn_1x_coco.onnx` in the directory `tools/deployment/`. If required, you can specify the model name or output directory using `--output-file <path-to-dir>/<model-name>.onnx`
The script generates ONNX model file `cascade_rcnn_r101_fpn_1x_coco.onnx` in the directory `tools/deployment/`. If required, specify the model name or output directory, using `--output-file <path-to-dir>/<model-name>.onnx`.
## Convert ONNX Cascade RCNN R-101 Model to IR
## Converting an ONNX Cascade RCNN R-101 Model to OpenVINO IR
```bash
mo --input_model cascade_rcnn_r101_fpn_1x_coco.onnx --mean_values [123.675,116.28,103.53] --scale_values [58.395,57.12,57.375]
```
```

View File

@@ -1,8 +1,8 @@
# Convert PyTorch* F3Net Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_F3Net}
# Converting a PyTorch F3Net Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_F3Net}
[F3Net](https://github.com/weijun88/F3Net): Fusion, Feedback and Focus for Salient Object Detection
## Clone the F3Net Repository
## Cloning the F3Net Repository
To clone the repository, run the following command:
@@ -10,10 +10,10 @@ To clone the repository, run the following command:
git clone http://github.com/weijun88/F3Net.git
```
## Download and Convert the Model to ONNX*
## Downloading and Converting the Model to ONNX
To download the pre-trained model or train the model yourself, refer to the
[instruction](https://github.com/weijun88/F3Net/blob/master/README.md) in the F3Net model repository. First, convert the model to ONNX\* format. Create and run the following Python script in the `src` directory of the model repository:
To download the pretrained model or train the model yourself, refer to the
[instructions](https://github.com/weijun88/F3Net/blob/master/README.md) in the F3Net model repository. First, convert the model to ONNX format. Create and run the following Python script in the `src` directory of the model repository:
```python
import torch
from dataset import Config
@@ -24,9 +24,9 @@ net = F3Net(cfg)
image = torch.zeros([1, 3, 352, 352])
torch.onnx.export(net, image, 'f3net.onnx', export_params=True, do_constant_folding=True, opset_version=11)
```
The script generates the ONNX\* model file f3net.onnx. This model conversion was tested with the repository hash commit `eecace3adf1e8946b571a4f4397681252f9dc1b8`.
The script generates the ONNX model file `f3net.onnx`. The model conversion was tested with the commit-SHA: `eecace3adf1e8946b571a4f4397681252f9dc1b8`.
## Convert ONNX* F3Net Model to IR
## Converting an ONNX F3Net Model to IR
```sh
mo --input_model <MODEL_DIR>/f3net.onnx

View File

@@ -1,13 +1,13 @@
# Convert PyTorch* QuartzNet Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_QuartzNet}
# Converting a PyTorch QuartzNet Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_QuartzNet}
[NeMo project](https://github.com/NVIDIA/NeMo) provides the QuartzNet model.
## Download the Pre-Trained QuartzNet Model
## Downloading the Pretrained QuartzNet Model
To download the pre-trained model, refer to the [NeMo Speech Models Catalog](https://ngc.nvidia.com/catalog/models/nvidia:nemospeechmodels).
Here are the instructions on how to obtain QuartzNet in ONNX* format.
To download the pretrained model, refer to the [NeMo Speech Models Catalog](https://ngc.nvidia.com/catalog/models/nvidia:nemospeechmodels).
Here are the instructions on how to obtain QuartzNet in ONNX format.
1. Install the NeMo toolkit using the [instructions](https://github.com/NVIDIA/NeMo/tree/main#installation).
1. Install the NeMo toolkit, using the [instructions](https://github.com/NVIDIA/NeMo/tree/main#installation).
2. Run the following code:
@@ -16,16 +16,16 @@ import nemo
import nemo.collections.asr as nemo_asr
quartznet = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
# Export QuartzNet model to ONNX* format
# Export QuartzNet model to ONNX format
quartznet.decoder.export('decoder_qn.onnx')
quartznet.encoder.export('encoder_qn.onnx')
quartznet.export('qn.onnx')
```
This code produces 3 ONNX* model files: `encoder_qn.onnx`, `decoder_qn.onnx`, `qn.onnx`.
They are `decoder`, `encoder` and a combined `decoder(encoder(x))` models, respectively.
This code produces 3 ONNX model files: `encoder_qn.onnx`, `decoder_qn.onnx`, `qn.onnx`.
They are the `decoder`, the `encoder`, and a combined `decoder(encoder(x))` model, respectively.
## Convert ONNX* QuartzNet model to IR
## Converting an ONNX QuartzNet Model to IR
If using a combined model:
```sh

View File

@@ -1,13 +1,12 @@
# Convert PyTorch* RCAN Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN}
# Converting a PyTorch RCAN Model {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN}
[RCAN](https://github.com/yulunzhang/RCAN): Image Super-Resolution Using Very Deep Residual Channel Attention Networks
## Download and Convert the Model to ONNX*
## Downloading and Converting the Model to ONNX
To download the pre-trained model or train the model yourself, refer to the
[instruction](https://github.com/yulunzhang/RCAN/blob/master/README.md) in the RCAN model repository. Firstly,
convert the model to ONNX\* format. Create and run the script with the following content in the root
To download the pretrained model or train the model yourself, refer to the [instructions](https://github.com/yulunzhang/RCAN/blob/master/README.md) in the RCAN model repository. First, convert the model to ONNX format. Create and run the script with the following content in the root
directory of the model repository:
```python
from argparse import Namespace
@@ -22,9 +21,9 @@ net.eval()
dummy_input = torch.randn(1, 3, 360, 640)
torch.onnx.export(net, dummy_input, 'RCAN.onnx')
```
The script generates the ONNX\* model file RCAN.onnx. You can find more information about model parameters (`n_resblocks`, `n_resgroups`, and others) in the model repository and use different values of them. The model conversion was tested with the repository hash commit `3339ebc59519c3bb2b5719b87dd36515ec7f3ba7`.
The script generates the ONNX model file `RCAN.onnx`. More information about model parameters (`n_resblocks`, `n_resgroups`, and others) and their different values can be found in the model repository. The model conversion was tested with the commit-SHA: `3339ebc59519c3bb2b5719b87dd36515ec7f3ba7`.
## Convert ONNX* RCAN Model to IR
## Converting an ONNX RCAN Model to IR
```sh
mo --input_model RCAN.onnx
