Compare commits

..

91 Commits

Author SHA1 Message Date
Dmitry Budnikov
0629a0eb08 Fluid update for er47 (fork) (#3306)
* update script

* update fluid for ER47

* revert script with checksum
2020-11-26 12:27:27 +03:00
Rafal Blaczkowski
5e7aaee3fd Update tests requirements (#3358) 2020-11-25 17:29:57 +03:00
Marina Mineeva
bed54b7572 Update convert_opset1_to_legacy.cpp (#2953)
Skip the FakeQuantizeMulFusion pass
2020-11-05 18:16:13 +03:00
Vladislav Vinogradov
1db261981a [IE][BUILD] Fix C5208 warning under Windows (#2628)
* C++ feature used in C `typedef struct` code.
* The warning can be promoted to error in dependent projects.

C5208: unnamed class used in typedef name cannot declare members other than
non-static data members, member enumerations, or member classes
2020-10-21 16:20:15 +03:00
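For context, the pattern MSVC flags with C5208, and the usual fix of naming the struct, look roughly like this (an illustrative sketch; the type names are invented, not taken from the patch):

// Before: triggers C5208, because an unnamed class used in a typedef
// name declares a member function (a C++ feature in C-style code).
typedef struct {
    int width;
    int height;
    int area() const { return width * height; }  // member function -> C5208
} unnamed_dims_t;

// After: naming the struct removes the warning while keeping the
// typedef usable from C-style call sites.
typedef struct named_dims {
    int width;
    int height;
    int area() const { return width * height; }
} named_dims_t;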
Vladislav Vinogradov
1ab1c86855 [IE][TESTS][CMAKE] Add -Wno-deprecated-copy compile flag for GTest
It fixes build on Ubuntu 20.04 with gcc 9.3.0.
2020-10-21 15:42:31 +03:00
Andrey Somsikov
606dfcb96b Itt merge (#2676)
* Use ittnotify built from sources

ITT tracing was only possible on the platforms supported by VTune.
Building ittnotify from sources removes the VTune dependency.

ITT traces were found to slow down test execution time significantly,
so ENABLE_PROFILING_ITT is now disabled by default. This is also
the current behavior of the Intel Distribution of OpenVINO.
2020-10-21 15:42:23 +03:00
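For background, ittnotify task annotations are typically emitted like this (a minimal sketch; the domain and task names here are invented for illustration):

#include <ittnotify.h>

// Handles are created once and reused; the calls become cheap no-ops
// when no collector such as VTune is attached.
static __itt_domain* domain = __itt_domain_create("Example.Domain");
static __itt_string_handle* task = __itt_string_handle_create("ExampleTask");

void do_work() {
    __itt_task_begin(domain, __itt_null, __itt_null, task);
    // ... traced work ...
    __itt_task_end(domain);
}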
Alexander Novak
051a924024 Remove checks for output precision (#2441) 2020-10-21 15:42:22 +03:00
Andrey Somsikov
8c3acd4f4e Add hddl_unite to dependencies.sh/bat (#2467) 2020-10-21 15:42:22 +03:00
Artemy Skrebkov
cf6c4e72b3 Update setupvars to add path to XLink (#2371) (#2410)
- Do not export KMB_INSTALL_DIR. It is exported by another script
2020-10-21 15:42:22 +03:00
Artemy Skrebkov
7cd76c1d29 Update fluid to 7c22cd49a7eb76ae1d9606672ee467fb52383de0 (#2407)
* Update fluid to 7c22cd49a7eb76ae1d9606672ee467fb52383de0

   OpenCV 4.5.0

* Fix windows build
2020-10-21 15:42:22 +03:00
Alexey Suhov
4a46be7631 [install_dependencies.sh] install latest cmake if current version is lower than 3.13 (#2695) (#2701)
* [install_dependencies.sh] install latest cmake if current version is lower than 3.13

* add shellcheck for Ubuntu

* install python 2.7 for Ubuntu
2020-10-16 21:20:06 +03:00
Andrey Zaytsev
c112547a50 Fixed CVS-35316 (#2072) (#2670)
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
2020-10-15 12:29:12 +03:00
Andrey Zaytsev
41e7475731 Feature/ntyukaev/separate layout (#2629)
* convert to doxygen comments

* layouts and code comments

* separate layout

* Changed layouts

* Removed FPGA from the documentation

* Updated according to CVS-38225

* some changes

* Made changes to benchmarks according to review comments

* Added logo info to the Legal_Information, updated Ubuntu, CentOS supported versions

* Updated supported Intel® Core™ processors list

* Fixed table formatting

* update api layouts

* Added new index page with overview

* Changed CMake and Python versions

* Fixed links

* some layout changes

* some layout changes

* some layout changes

* Converted svg images to png

* layouts

* update layout

* Added a label for nGraph_Python_API.md

* fixed links

* Fixed image

* removed links to ../IE_DG/Introduction.md

* Removed links to tools overview page as removed

* some changes

* Remove link to Integrate_your_kernels_into_IE.md

* remove openvino_docs_IE_DG_Graph_debug_capabilities from layout as it was removed

* update layouts

* Post-release fixes and installation path changes

* Added PIP installation and Build from Source to the layout

* Fixed formatting issue, removed broken link

* Renamed section EXAMPLES to RESOURCES according to review comments

* add mo faq navigation by url param

* Removed DLDT description

* Replaced wrong links

* Minor fix for the path to the cpp samples

* fixes

* Update ops.py

* Fix style

Co-authored-by: Nikolay Tyukaev <ntyukaev_lo@jenkins.inn.intel.com>
Co-authored-by: Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: aalborov <alina.alborova@intel.com>
Co-authored-by: Rafal Blaczkowski <rafal.blaczkowski@intel.com>
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
2020-10-14 20:13:04 +03:00
Anton Romanov
f050de86dd Improve pip installation guide (#2644)
* Improve pip installation guide

* Updated after comments
2020-10-14 12:18:39 +03:00
Rafal Blaczkowski
3c4b116895 Skip hanging test case of OpenVino ONNX CI (#2608)
* Update OpenVino ONNX CI

* Change parallel execution to single

* Enlarge timeout

* Remove timeout

* Add timeout to test execution

* Skip hanging test

* Add description to skip issue
2020-10-12 07:00:15 +03:00
Rafal Blaczkowski
a5f538462d Update OpenVino ONNX CI check (#2599)
* Update OpenVino ONNX CI

* Change parallel execution to single

* Enlarge timeout

* Remove timeout

* Add timeout to test execution
2020-10-09 15:14:10 +03:00
Anton Romanov
0731f67e9f Added pip install documentation (#2465)
* Added pip install documentation

* Change references

* tiny fixes of links

* Update installing-openvino-pip.md

Co-authored-by: Alina Alborova <alina.alborova@intel.com>
2020-10-09 12:24:04 +03:00
Gleb Kazantaev
4793774d18 Added deprecation note for PassConfig class (#2593) 2020-10-08 18:11:19 +03:00
Ilya Churaev
ea06196afb Fixed links to images (#2569) 2020-10-07 13:32:47 +03:00
Alexey Suhov
f557dca475 Update SW requirements in build instructions and change latest release to 2021.1 (#2565) 2020-10-07 00:29:37 +03:00
Andrey Zaytsev
185fe44080 Feature/azaytsev/docs 2021 1 (#2560)
* Removed FPGA from the documentation

* Updated according to CVS-38225

* Added logo info to the Legal_Information, updated Ubuntu, CentOS supported versions

* Updated supported Intel® Core™ processors list

* Added new index page with overview

* Changed CMake and Python versions

* Fixed links

* Converted svg images to png

* Added a label for nGraph_Python_API.md

* fixed links

* Fixed image
2020-10-06 23:22:53 +03:00
Ilya Churaev
2a1f43a64a First draft of nGraph documentation (#2271)
* First draft of nGraph documentation

* updated according to review comments

* Updated

* Reviewed the nGraph Transformation section, added missing images

* Update nGraph_dg.md

* Delete python_api.md

Removed since there is already the nGraph_Python_API.md document with a comprehensive overview.

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: CCR\avladimi <anastasiya.ageeva@intel.com>
2020-10-06 22:43:47 +03:00
Michał Karzyński
f19d1d16f0 nGraph Python API tutorial (#2500)
* nGraph Python API tutorial

* Tweaks

* Code review comments

* Code review comments
2020-10-06 18:28:51 +03:00
Denis Orlov
d98beb796b [GNA] Documentation updates for 2021.1 (#2460)
* [GNA] Documentation updates for 2021.1

* Take Mike's comments into account

* More fixes according to review

* Fix processor generation names
2020-10-06 10:58:19 +03:00
Alina Kladieva
915858198e [Jenkinsfile] Bump infra (#2546) 2020-10-05 23:37:13 +03:00
Vitaliy Urusovskij
8d4545e1b2 Remove --collect_results_only (#2523)
* Remove `--collect_results_only` from MemCheckTests

* Remove CLI keys from README
2020-10-05 13:18:17 +03:00
Denis Orlov
e45272c714 Update docs for speech libs and demos (#2518) 2020-10-02 18:00:20 +03:00
Andrey Zaytsev
fe3dc7d176 Updated according to the comments in the ticket CVS-37827 (#2448) 2020-10-02 13:25:26 +03:00
Andrey Zaytsev
9c297a3174 Feature/azaytsev/cvs-38240 (#2469)
* Updated for 2020 version, replaced Ubuntu 16.04 with Ubuntu 20.04

* Updated the release package numbers
2020-10-02 12:21:48 +03:00
Andrey Zaytsev
f9c692b885 Update build-instruction.md for MacOsX (#2457)
* Update build-instruction.md for MacOsX

* Removed call of install_dependencies.sh from the steps
2020-10-01 23:21:32 +03:00
Andrey Zaytsev
bbce6f5b3a Feature/azaytsev/benchmarks 2021 1 (#2501)
* Initial changes for 2021.1

* Inserted Graphtool scripts, updated configurations info

* Updated FAQ and minor changes to performance_benchmarks.md

* Updated for 2021.1

* Updated

* incorporated review comments

* incorporated review comments for FAQ

* fixed link
2020-10-01 20:49:49 +03:00
azhogov
2395f9f120 Azure CI: Add separated pipelines for Windows, Linux, Mac 2020-10-01 20:07:41 +03:00
Maxim Vafin
c88f838dfa [Docs] Update MO What's new description (#2481) 2020-10-01 16:50:46 +03:00
Alina Alborova
ce6ce23eec [DOCS] Update Installation Guide - GPU steps (#2308)
* Initial commit

* fixing lists

* Update installing-openvino-linux.md

* Get rid of the note

* Added the screenshot

* Update installing-openvino-linux.md

* fixes
2020-09-30 20:33:27 +03:00
Alina Alborova
6a32854ec4 Remove the deprecation notice (#2314)
* Removed deprecation notice

* Removed the note from other files
2020-09-30 20:32:53 +03:00
Mikhail Ryzhov
bece22ac67 Added closing bracket (#2466)
Fixed syntax error (b4b03b1)
2020-09-30 18:09:18 +03:00
Maksim Proshin
76606ba2fc Update the menu to align with POT doc headers (#2433)
* Update the menu to align with POT doc headers

It changes the menu to align with Post-training Optimization Toolkit documentation titles.

* Corrected one title

Run Examples => How to Run Examples
2020-09-30 14:00:03 +03:00
Andrey Zaytsev
1c538af62f Replace absolute links to docs.openvinotoolkit.org by relative ones (#2439)
* Replaced direct links to docs.openvinotoolkit.org with relative links

* Replaced direct links to docs.openvinotoolkit.org with relative links. Added GSGs for Win and macOS

* Minor fixes in GSGs

* Replaced direct links to docs.openvinotoolkit.org with relative links

* Removed links to OpenVINO markdown files that contain anchor - they don't work in the current implementation of the doc process

* Fixed Notes

* Removed links to OpenVINO markdown files that contain anchor - they don't work in the current implementation of the doc process

* fixed link to installing-openvino-linux.md
2020-09-29 18:55:08 +03:00
Ilya Lavrenov
3a720d188b Fixed docs build on Windows (#2383) 2020-09-24 12:12:35 +03:00
Andrey Zaytsev
70f619b5eb Added new GSG for macOS, made minor changes in Windows GSG (#2070) (#2405)
* Added new GSG for macOS, made minor changes in Windows GSG

* Update get_started_macos.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
2020-09-24 12:01:39 +03:00
Ilya Lavrenov
0dbaf078d8 Added link options for cross-compilation (#2397) 2020-09-23 17:42:57 +03:00
Nikolay Tyukaev
3c5fa6f4b8 Fix layout links for dl streamer and c api (#2375)
* fix layouts

* change the dl-streamer link

Co-authored-by: Nikolay Tyukaev <ntyukaev_lo@jenkins.inn.intel.com>
2020-09-23 16:43:23 +03:00
Rafal Blaczkowski
31ccf354dc Delete xfail for resolved known issue (#2385) 2020-09-23 13:51:13 +03:00
Gleb Kazantaev
bf9b649cdf Updated Transformation development doc (#2370) 2020-09-23 09:46:40 +03:00
Dmitrii Denisov
84518964ba Install dependency refactoring. (#2381) 2020-09-22 19:11:16 +03:00
Mikhail Ryzhov
0b4846cfcc Downgrade cmake for samples (#2372)
* Downgrade cmake for samples

Downgraded the cmake version to the default version for Ubuntu 18.04

* Updated supported python version

The minimal python version in 2021.1 is 3.5

* Added notes about cmake requirements for samples and demo
2020-09-22 19:09:50 +03:00
Ilya Churaev
950388d9e8 [DOCS] Added an evaluate method for custom operation (#2272)
* Added an evaluate method for custom operation

* Fixed comments
2020-09-22 13:22:31 +03:00
Nikolay Tyukaev
f828b16f40 add doxygen doc build configurations (#2191)
Co-authored-by: Nikolay Tyukaev <ntyukaev_lo@jenkins.inn.intel.com>
2020-09-22 12:37:10 +03:00
Artyom Anokhov
261bd3de6b install_NEO_OCL_driver: Added checking of installed packages before trying to remove them. Added quotes for echo. (#2350) 2020-09-21 17:35:44 +03:00
Evgeny Talanin
31b3e356ab Bump cmake version to 3.13 (#2339) 2020-09-18 18:58:17 +03:00
Artyom Anokhov
607982e79c install_NEO_OCL_driver: Updated exit codes, messages. Updated way to remove old driver on Ubuntu (#2333) 2020-09-18 17:44:26 +03:00
Vitaliy Urusovskij
c083e5b146 Implement run_executable.py to run TimeTests several times (#2125) (#2188)
CI passed
2020-09-18 16:17:47 +03:00
Vladimir Gavrilov
444301a1d6 Added ONNX Resize-11 and ONNX Resize-13 to supported frameworks layers list. (#2325) 2020-09-18 15:16:29 +03:00
Evgenya Stepyreva
f56ba0daa9 Revert "[IE][VPU]: Fix K propagation through Reshape (2021.1) (#2180)" (#2322)
This reverts commit d604a03ac0.
2020-09-18 12:19:27 +03:00
Mikhail Ryzhov
cd101085d7 Fixed c samples build (#2278) (#2304)
* Fixed c samples build

fixed CVS-38816 - Failure to build samples in C

* Fixed issue with gflags
2020-09-18 10:39:57 +03:00
Evgeny Lazarev
2c79f74579 Updated operations specification documents (2021.1) (#2268)
* Updated documentation structure and removed incorrectly added files for Acosh-1, Asinh-1 and Atanh-1

* Fixed broken links
2020-09-18 08:16:04 +03:00
Artyom Anokhov
d7463eb216 setupvars: Updated notifications, fixed calling python in Windows case (#2318) 2020-09-17 21:20:27 +03:00
Irina Efode
74b13a0f74 [IE TESTS] CoreThreading_LoadNetwork tests were disabled for GPU plugin (#2245) (#2283) 2020-09-17 20:11:18 +03:00
Dmitrii Denisov
1c8188908e Added python3-gi package and fixed libglib2.0-0 package location. (#2294) 2020-09-17 16:42:45 +03:00
Artyom Anokhov
86e39a6775 [Scripts] Fix setting PYTHONPATH logic (#2305)
* setupvars.sh: Added logic for exporting the path env in case it is not defined

* setupvars: Removed duplicated colon

* install_openvino_dependencies: Updated copyrights

setupvars.bat: Updated the notification about an incorrect Python version. Removed the ICC2019 check.
setupvars.sh: Removed the logic for choosing the highest installed Python version. Added dynamic detection of the python3 major and minor version when setting the path. Added a check for the minimum required Python version (now 3.6).
2020-09-17 16:41:46 +03:00
Tomasz Dołbniak
2645421df6 Clone a specific tag for pybind11 (#2296) 2020-09-16 23:04:42 +03:00
Zoe Cayetano
9b1961502b Update get_ov_update_message.py (#2286) 2020-09-16 20:33:39 +03:00
Vladimir Gavrilov
2023a7cd81 Fixes for Interpolate-4. (#2281) 2020-09-16 19:41:37 +03:00
Vladislav Volkov
105cd18d0b Fix for static PartialShape detection algorithm (#2177) 2020-09-16 18:09:07 +03:00
Maksim Doronin
92d19291c8 [IE][VPU]: KW fixes (#2186)
* Some KW fixes
* Fix printTo in vpu ngraph transformations
2020-09-16 18:08:55 +03:00
Vladislav Vinogradov
191e9f7f72 [IE][TESTS] Fix compareRawBuffers and compareBlobData methods (#2246)
Use `<=` comparison instead of `<` with thresholds.
This allows using a `0` threshold for bit-exact comparison.
2020-09-16 18:00:06 +03:00
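The operator choice matters because with `<` a zero threshold can never pass, while with `<=` it accepts only exactly equal values. A minimal sketch (a hypothetical helper, not the actual test code):

#include <cmath>

// "diff < threshold" rejects even identical values at threshold 0
// (0 < 0 is false); "diff <= threshold" makes threshold 0 mean
// bit-exact equality.
bool within_threshold(float expected, float actual, float threshold) {
    return std::fabs(expected - actual) <= threshold;
}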
Alexander Novak
126c2600bb Add VPUX configuration to compile_tool (#2248) 2020-09-16 17:59:17 +03:00
Alexey Suhov
b922800ae2 update OpenCV version to 4.5.0 (#2260) 2020-09-16 16:12:59 +03:00
Maxim Shevtsov
272b17f5d9 Reverted devicePriorities to be a vector and respect the order, as opposed to the incorrect (recent?) refactoring that introduced an unordered_map that effectively ignores the priorities (#2251) 2020-09-16 16:12:40 +03:00
Evgeny Latkin
b89e7d69dd [IE][VPU]: update firmware 1381 (#2236) 2020-09-16 16:05:43 +03:00
Ilya Churaev
528e6f9328 Fixed KW warning and review issues (#2262) 2020-09-16 15:33:10 +03:00
Gorokhov Dmitriy
ebf009d1a1 Revert "[IE TESTS] dynavic batch for mvn layer (#1010)" (#2256)
This reverts commit 2e3378c50f.
2020-09-16 14:11:34 +03:00
Andrew Bakalin
d604a03ac0 [IE][VPU]: Fix K propagation through Reshape (2021.1) (#2180)
* Fix K propagation through Reshape
* Add test cases
2020-09-16 12:42:15 +03:00
Nikolay Shchegolev
e7e82b9eb7 Statically analyzed issues. (#2261) 2020-09-16 12:32:20 +03:00
Maxim Kurin
f5bd16990e [IE][VPU][OpenCL] 2021.1 release compiler (#2189) 2020-09-16 00:46:27 +03:00
Evgenya Stepyreva
488f2dd916 [DOC] Reshape feature (#2194) 2020-09-15 21:24:25 +03:00
Evgeny Talanin
79853baf28 Add exposing function signatures via Cython (#2244) 2020-09-15 20:19:57 +03:00
Svetlana Dolinina
6c5e0cfaa4 Duplicate PR 2167 for release branch: GatherTree description was extended and outdated link fixed (#2235)
* add more clarifications to the description

* move clarification to comment

* pseudo code made more accurate

* review changes
2020-09-15 19:36:51 +03:00
Maksim Doronin
d239b2584c [IE][VPU]: Remove the second call of ngraph::CommonOptimizations (#2221)
* Remove the second call of ngraph::CommonOptimizations in myriad plugin
* Reuse code with vpu ngraph transformations
2020-09-15 17:08:46 +03:00
Roman Vyunov (Intel)
28a733b771 [IE][VPU]: Workaround to support parameter Beta for layer Swish (#2207)
* Workaround to fully support the Swish layer. It is faster than native Swish for now.
2020-09-15 14:44:38 +03:00
Ilya Churaev
7bba2a9542 Fixed output names for case with redundant ops before result (#2209) 2020-09-15 14:00:27 +03:00
Ilya Churaev
9b7e22f49a Fix QueryNetwork for networks with KSO (#2202)
* Added a test to reproduce QueryNetwork with KSO

* Fixed QueryNetwork for networks with KSO

* Added additional test
2020-09-15 14:00:09 +03:00
Ilya Churaev
a4dc5c89f3 some nGraph KW fixes (#2176)
* Removed redundant methods

* Fixed KW for linux
2020-09-15 13:59:42 +03:00
Ilya Churaev
fef1803a86 Extend error message (#2174) 2020-09-15 13:59:15 +03:00
Tomasz Dołbniak
e94393df10 FakeQuantize + Mul fusion (#2133)
* FQ+Mul fusion transform skeleton

* FQ+Mul fusion transform tests prep

* Basic UT for the transform

* Basic implementation of the transform

* Parametrized UTs for FQMul transform

* Parametrization of FQ+Mul UTs

* Make sure that the shapes of constants match

* Check if the mul constant matches FQ data

* CentOs compilation error fix

* PR feedback and adjusted tests

* NHWC layout of the mul constant

* UT: FQ output limits 4D

* Redundant CF pass removed

* Rewrite the graph in a different way

* Shape checking infrastructure skeleton

* Handle some negative cases

* Check the rt info in the fusion test

* Fuse all Mul nodes detected after FQ node

* Don't cast the original FQ node

* Don't throw if CF fails in new output range calculation

* More UTs

* Accept any type of input to FQ in the transformation

* Test the fusion when all FQ inputs are non-const

* Fusion test when only one output limit is const
2020-09-15 11:33:35 +03:00
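The idea behind this fusion, sketched below (this shows the arithmetic only, not the actual transformation code): multiplying a FakeQuantize output by a constant c is equivalent to scaling its output range by c, so the trailing Multiply can be folded into new output limits.

// FakeQuantize maps [in_low, in_high] onto [out_low, out_high] over a
// fixed number of levels; a Multiply-by-c afterwards is the same as
// quantizing onto [out_low * c, out_high * c].
struct FakeQuantizeParams {
    float in_low, in_high;
    float out_low, out_high;
    int levels;
};

FakeQuantizeParams fuse_multiply(FakeQuantizeParams fq, float c) {
    fq.out_low *= c;   // new output_low after folding the Multiply
    fq.out_high *= c;  // new output_high after folding the Multiply
    return fq;
}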
Artyom Anokhov
2e4f46e1fd [Scripts] Fixing issue with exporting path-like env when it is undefined (#2164)
* setupvars.sh: Added logic for exporting the path env in case it is not defined

* setupvars: Removed duplicated colon

* Kept quotes where they were

* setupvars: updated copyrights
2020-09-14 19:49:42 +03:00
Edward Shogulin
177906b99a [LPT] Copy constant with several outputs before blob update (#2197)
* [LPT] Copy constant implementation

* [LPT] the same Constant ops as FQ interval boundaries
2020-09-14 18:32:37 +03:00
Anna Alberska
6d38488462 [GNA] fix scale factor calculation for unfused bias after fc (2021.1) (#2195)
* [GNA] fix scale factor calculation for unfused bias after fc

* change check

* add test

* apply requested changes

* cpplint fix

* apply test changes

* modify model for test to match ::op::
2020-09-14 17:30:12 +03:00
Kamil Magierski
db5aa551af LSTMCell test [GNA] LSTMCell fix for GNA (#2216) 2020-09-14 17:29:45 +03:00
Denis Orlov
6d90eedbd2 [GNA] Safety fixes (#2193) 2020-09-14 12:10:25 +03:00
Sergey Shlyapnikov
a91e256d27 [IE CLDNN] Memory allocation optimizations (#2178) 2020-09-11 15:55:46 +03:00
7661 changed files with 291238 additions and 370913 deletions

View File

@@ -1,53 +0,0 @@
# Copyright (C) 2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Analyze GTest logs
"""
import re
from argparse import ArgumentParser
def get_passed_tests(log_file_path):
"""Gets passed tests with OK status"""
ok_test_line_pattern = "[ OK ] "
ok_tests = []
with open(log_file_path) as log_file_obj:
for line in log_file_obj.readlines():
if ok_test_line_pattern in line:
ok_tests.append(line.split(ok_test_line_pattern)[1])
return ok_tests
def get_total_time(tests):
"""Gets total execution time (sec)"""
re_compile_time = re.compile(r".+ \(([0-9]+) ms\)")
total_time = 0.0
for test in tests:
re_time = re_compile_time.match(test)
if re_time:
total_time += int(re_time.group(1)) / 1000
else:
print("No time in the test line:", test)
return total_time
def main():
"""The main entry point function"""
arg_parser = ArgumentParser()
arg_parser.add_argument(
"--log-file", metavar="PATH", default="gtest.log", help="Path to GTest log file"
)
args = arg_parser.parse_args()
passed_tests = get_passed_tests(args.log_file)
print("PASSED tests count:", len(passed_tests))
print("Total execution time of passed tests (sec):", get_total_time(passed_tests))
print("\nPASSED tests:")
print("".join(sorted(passed_tests)))
if __name__ == "__main__":
main()
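Run against a saved GTest log (default gtest.log, overridable via --log-file), the script prints the count of passed tests, their total execution time in seconds, and their sorted names.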

View File

@@ -1,205 +1,118 @@
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2021/3
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2021/3
jobs:
- job: Lin
# About 150% of total time
timeoutInMinutes: 90
timeoutInMinutes: 85
pool:
name: LIN_VMSS_VENV_F16S_WU2
name: LIN_VMSS_VENV_F8S_WU2
variables:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 16
WORKERS_NUMBER: 8
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
BIN_DIR: $(REPO_DIR)/bin/intel64/$(BUILD_TYPE)
INSTALL_DIR: $(WORK_DIR)/install_pkg
SETUPVARS: $(INSTALL_DIR)/bin/setupvars.sh
steps:
- checkout: self
clean: true
fetchDepth: 1
lfs: false
submodules: recursive
path: openvino
- script: |
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2019-06-01"
whoami
uname -a
echo Python3 info ; which python3 ; python3 --version
echo Python info ; which python ; python --version
echo Java info ; which java ; java -version
echo gcc info ; which gcc ; gcc --version
which python3
python3 --version
gcc --version
lsb_release
env
cat /proc/cpuinfo
cat /proc/meminfo
cat /etc/fstab
vmstat -s
df
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
free -h
displayName: 'System info'
- script: |
rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
rm -rf $(BUILD_DIR) ; mkdir $(BUILD_DIR)
echo TargetBranch: $(System.PullRequest.TargetBranch)
echo SourceBranch: $(Build.SourceBranch)
displayName: 'Make dir'
- checkout: self
clean: true
lfs: false
submodules: recursive
path: openvino
- checkout: openvino_contrib
clean: true
lfs: false
submodules: recursive
path: openvino_contrib
- checkout: testdata
clean: true
lfs: true
path: testdata
- script: |
sudo apt --assume-yes install libusb-1.0-0-dev
python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/requirements.txt
python3 -m pip install -r ./inference-engine/ie_bridges/python/requirements.txt
# For running Python API tests
python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/src/requirements-dev.txt
# Speed up build
python3 -m pip install -r ./inference-engine/ie_bridges/python/src/requirements-dev.txt
displayName: 'Install dependencies'
- script: |
wget https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
# Speed up tests
git clone https://github.com/google/gtest-parallel.git
workingDirectory: $(WORK_DIR)
displayName: 'Install dependencies'
displayName: 'Install Ninja'
- task: CMake@1
inputs:
# CMake must get Python 3.x version by default
cmakeArgs: >
-GNinja
-DVERBOSE_BUILD=ON
-DENABLE_TEMPLATE_PLUGIN=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_PYTHON=ON
-DPYTHON_EXECUTABLE=/usr/bin/python3.6
-DENABLE_TESTS=ON
-DENABLE_FASTER_BUILD=ON
-DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
$(REPO_DIR)
cmakeArgs: -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE=/usr/bin/python3.6 -DENABLE_TESTS=ON $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
- script: ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build Lin'
- script: ls -alR $(REPO_DIR)/bin/
displayName: 'List files'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install'
- script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml
- script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*
displayName: 'nGraph UT'
continueOnError: false
- script: $(BIN_DIR)/InferenceEngineUnitTests --gtest_print_time=1 --gtest_output=xml:TEST-InferenceEngineUnitTests.xml
- script: $(BIN_DIR)/InferenceEngineUnitTests --gtest_print_time=1
displayName: 'IE UT old'
continueOnError: false
- script: $(BIN_DIR)/ieUnitTests --gtest_output=xml:TEST-ieUnitTests.xml
- script: $(BIN_DIR)/ieUnitTests
displayName: 'IE UT'
continueOnError: false
- script: $(BIN_DIR)/cpuUnitTests --gtest_output=xml:TEST-cpuUnitTests.xml
- script: $(BIN_DIR)/cpuUnitTests
displayName: 'CPU UT'
continueOnError: false
- script: $(BIN_DIR)/gnaUnitTests --gtest_output=xml:TEST-gnaUnitTests.xml
- script: $(BIN_DIR)/gnaUnitTests
displayName: 'GNA UT'
continueOnError: false
- script: $(BIN_DIR)/vpuUnitTests --gtest_output=xml:TEST-vpuUnitTests.xml
- script: $(BIN_DIR)/vpuUnitTests
displayName: 'VPU UT'
continueOnError: false
- script: $(BIN_DIR)/onnxImporterUnitTests --gtest_output=xml:TEST-onnxImporterUnitTests.xml
displayName: 'ONNX Importer UT'
continueOnError: false
- script: $(BIN_DIR)/ieFuncTests --gtest_output=xml:TEST-ieFuncTests.xml
- script: $(BIN_DIR)/ieFuncTests
displayName: 'IE FuncTests'
continueOnError: false
- script: $(BIN_DIR)/templateFuncTests --gtest_filter=*smoke* --gtest_output=xml:TEST-templateFuncTests.xml
displayName: 'TEMPLATE FuncTests'
continueOnError: false
- script: $(BIN_DIR)/cpuFuncTests --gtest_filter=*smoke* --gtest_print_time=1 --gtest_output=xml:TEST-cpuFuncTests.xml
- script: $(BIN_DIR)/cpuFuncTests --gtest_print_time=1
displayName: 'CPU FuncTests'
continueOnError: false
- script: $(BIN_DIR)/MklDnnBehaviorTests --gtest_output=xml:TEST-MklDnnBehaviorTests.xml
- script: $(BIN_DIR)/MklDnnBehaviorTests
displayName: 'MklDnnBehaviorTests'
continueOnError: false
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
python3 $(WORK_DIR)/gtest-parallel/gtest-parallel $(BIN_DIR)/MklDnnFunctionalTests --workers=$(WORKERS_NUMBER) --dump_json_test_results=MklDnnFunctionalTests.json --gtest_filter=*smoke* -- --gtest_print_time=1
git clone https://github.com/openvinotoolkit/testdata.git
git clone https://github.com/google/gtest-parallel.git
workingDirectory: $(WORK_DIR)
displayName: 'Clone testdata & gtest-parallel'
- script: |
export DATA_PATH=$(WORK_DIR)/testdata
export MODELS_PATH=$(WORK_DIR)/testdata
python3 $(WORK_DIR)/gtest-parallel/gtest-parallel $(BIN_DIR)/MklDnnFunctionalTests --workers=$(WORKERS_NUMBER) --print_test_times --dump_json_test_results=MklDnnFunctionalTests.json -- --gtest_print_time=1
workingDirectory: $(WORK_DIR)
displayName: 'MklDnnFunctionalTests'
continueOnError: false
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
$(BIN_DIR)/InferenceEngineCAPITests --gtest_output=xml:TEST-InferenceEngineCAPITests.xml
export DATA_PATH=$(WORK_DIR)/testdata
export MODELS_PATH=$(WORK_DIR)/testdata
$(BIN_DIR)/InferenceEngineCAPITests
displayName: 'IE CAPITests'
continueOnError: false
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
export DATA_PATH=$(WORK_DIR)/testdata
export MODELS_PATH=$(WORK_DIR)/testdata
export LD_LIBRARY_PATH=$(BIN_DIR)/lib
export PYTHONPATH=$(BIN_DIR)/lib/python_api/python3.6
env
cd $(REPO_DIR)/inference-engine/ie_bridges/python/tests
pytest pytest --junitxml=TEST-PythonAPI.xml
pytest
displayName: 'Python API Tests'
continueOnError: false
enabled: false
- task: PublishTestResults@2
condition: always()
inputs:
testResultsFormat: 'JUnit' # Options: JUnit, NUnit, VSTest, xUnit, cTest
testResultsFiles: '**/TEST-*.xml'
#searchFolder: '$(BUILD_DIR)'
mergeTestResults: false # Optional
#failTaskOnFailedTests: false # Optional
#testRunTitle: 'Pre/Post-Commit' # Optional
buildPlatform: 'x64' # Optional
buildConfiguration: 'Linux' # Optional
#publishRunAttachments: true # Optional

View File

@@ -1,87 +0,0 @@
jobs:
- job: LinCC
# About 150% of total time
timeoutInMinutes: 90
pool:
name: LIN_VMSS_VENV_F16S_WU2
variables:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 16
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
BIN_DIR: $(REPO_DIR)/bin/intel64/$(BUILD_TYPE)
INSTALL_DIR: $(WORK_DIR)/install_pkg
SETUPVARS: $(INSTALL_DIR)/bin/setupvars.sh
steps:
- script: |
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2019-06-01"
whoami
uname -a
echo Python3 info ; which python3 ; python3 --version
echo Python info ; which python ; python --version
echo Java info ; which java ; java -version
echo gcc info ; which gcc ; gcc --version
lsb_release
env
cat /proc/cpuinfo
cat /proc/meminfo
cat /etc/fstab
vmstat -s
df
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
free -h
displayName: 'System info'
- script: |
rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
rm -rf $(BUILD_DIR) ; mkdir $(BUILD_DIR)
displayName: 'Make dir'
- checkout: self
clean: true
lfs: false
submodules: recursive
path: openvino
- script: |
sudo apt --assume-yes install libusb-1.0-0-dev
python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/requirements.txt
# Speed up build
wget https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
workingDirectory: $(WORK_DIR)
displayName: 'Install dependencies'
- task: CMake@1
inputs:
cmakeArgs: >
-GNinja
-DVERBOSE_BUILD=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_FASTER_BUILD=ON
-DENABLE_PROFILING_ITT=ON
-DSELECTIVE_BUILD=COLLECT
$(REPO_DIR)
workingDirectory: $(BUILD_DIR)
- script: ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build'
- script: ls -alR $(REPO_DIR)/bin/
displayName: 'List files'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install'

View File

@@ -1,82 +0,0 @@
jobs:
- job: nGraph_ONNX_Lin
# About 300% of total time
timeoutInMinutes: 90
pool:
name: LIN_VMSS_VENV_ONNX_WU2
variables:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 8
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
WORK_DIR: $(Pipeline.Workspace)/_w
MODELS_DIR: /mount/cinfsshare/onnxtestdata
TMP_DIR: /mnt/tmp
steps:
- script: |
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2019-06-01"
whoami
uname -a
echo Python3 info ; which python3 ; python3 --version
echo Python info ; which python ; python --version
echo Java info ; which java ; java -version
echo gcc info ; which gcc ; gcc --version
lsb_release
env
cat /proc/cpuinfo
cat /proc/meminfo
cat /etc/fstab
vmstat -s
df
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
free -h
displayName: 'System info'
- script: |
rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
sudo rm -rf $(TMP_DIR) ; sudo mkdir $(TMP_DIR) ; sudo chmod 777 -R $(TMP_DIR)
sudo mkdir -p $(MODELS_DIR)
sudo apt --assume-yes install nfs-common
sudo mount -vvv -t nfs cinfsshare.file.core.windows.net:/cinfsshare/onnxtestdata $(MODELS_DIR) -o vers=4,minorversion=1,sec=sys
displayName: 'Make dirs'
- checkout: self
clean: true
lfs: false
submodules: recursive
path: openvino
- script: docker build --tag=openvino-onnx-ci-image --file=.ci/openvino-onnx/Dockerfile .
displayName: 'Docker build'
- script: ngraph/python/tests/test_onnx/model_zoo_preprocess.sh -d $(TMP_DIR) -o
displayName: 'Get models'
- script: |
##wget -O "$(TMP_DIR)/msft.zip" https://onnxruntimetestdata.blob.core.windows.net/models/20191107.zip
##unzip "$(TMP_DIR)/msft.zip" -d "$(MODELS_DIR)/msft"
#unzip "/mnt/onnxtestdata/models/20191107.zip" -d "$(MODELS_DIR)/msft"
#mv $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/seq_lens_sorted $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/test_data_set_0
#mv $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/seq_lens_unsorted $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/test_data_set_1
displayName: 'Get MSFT models'
enabled: false
- script: |
ls -alR $(MODELS_DIR)
ls -alR $(TMP_DIR)
displayName: 'List models'
enabled: false
- script: sudo fallocate -l 48G /swapfile ; sudo mkswap /swapfile ; sudo swapon /swapfile ; df ; free -h
displayName: 'Create swap'
- script: |
docker run --name openvino-onnx-ci-container --volume $(TMP_DIR)/model_zoo:/root/.onnx/model_zoo --volume $(MODELS_DIR)/msft:/root/.onnx/model_zoo/MSFT openvino-onnx-ci-image
displayName: 'Docker run'

View File

@@ -1,90 +1,47 @@
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2021/3
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2021/3
jobs:
- job: Mac
# About 250% of total time (performance of Mac hosts is unstable, 360 is max)
timeoutInMinutes: 360
# About 200% of total time (performance of Mac hosts is unstable)
timeoutInMinutes: 180
pool:
vmImage: 'macOS-10.15'
variables:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 3
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
BIN_DIR: $(REPO_DIR)/bin/intel64/$(BUILD_TYPE)
INSTALL_DIR: $(WORK_DIR)/install_pkg
SETUPVARS: $(INSTALL_DIR)/bin/setupvars.sh
steps:
- checkout: self
clean: true
fetchDepth: 1
lfs: false
submodules: recursive
path: openvino
- script: |
whoami
uname -a
which python3
python3 --version
which java
java -version
gcc --version
xcrun --sdk macosx --show-sdk-version
env
sysctl -a
displayName: 'System info'
- script: |
rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
rm -rf $(BUILD_DIR) ; mkdir $(BUILD_DIR)
displayName: 'Make dir'
- checkout: self
clean: true
lfs: false
submodules: recursive
path: openvino
- checkout: openvino_contrib
clean: true
lfs: false
submodules: recursive
path: openvino_contrib
- checkout: testdata
clean: true
lfs: true
path: testdata
- task: UsePythonVersion@0
inputs:
versionSpec: '3.7'
- script: |
brew install cython
brew install automake
# Speed up build
brew install ninja
# Speed up tests
git clone https://github.com/google/gtest-parallel.git
workingDirectory: $(WORK_DIR)
displayName: 'Install dependencies'
- script: brew install ninja
displayName: 'Install Ninja'
- script: |
export PATH="/usr/local/opt/cython/bin:$PATH"
export CC=gcc
@@ -92,81 +49,54 @@ jobs:
# Disable errors with Ninja
export CXXFLAGS="-Wno-error=unused-command-line-argument"
export CFLAGS="-Wno-error=unused-command-line-argument"
cmake -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=ON -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules $(REPO_DIR)
cmake -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=ON $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
- script: ninja
workingDirectory: $(BUILD_DIR)
displayName: 'Build Mac'
- script: ls -alR $(REPO_DIR)/bin/
displayName: 'List files'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install'
- script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*:IE_CPU.onnx_model_sigmoid:IE_CPU/GRUSequenceOp.onnx_model_gru* --gtest_output=xml:TEST-NGraphUT.xml
- script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*:IE_CPU.onnx_model_sigmoid
displayName: 'nGraph UT'
continueOnError: false
- script: $(BIN_DIR)/InferenceEngineUnitTests --gtest_print_time=1 --gtest_filter=-MKLDNNGraphStructureTests.TestNoRedundantReordersBeforeDWConvolution:TestConvolution/MKLDNNGraphConvolutionTests.TestsConvolution/0:TestConvolutionDefaultPrimitivesPriority/MKLDNNGraphConvolutionTests.TestsConvolution/0 --gtest_output=xml:TEST-InferenceEngineUnitTests.xml
- script: $(BIN_DIR)/InferenceEngineUnitTests --gtest_print_time=1
displayName: 'IE UT old'
continueOnError: false
- script: $(BIN_DIR)/ieUnitTests --gtest_output=xml:TEST-ieUnitTests.xml
- script: $(BIN_DIR)/ieUnitTests
displayName: 'IE UT'
continueOnError: false
- script: $(BIN_DIR)/cpuUnitTests --gtest_output=xml:TEST-cpuUnitTests.xml
- script: $(BIN_DIR)/cpuUnitTests
displayName: 'CPU UT'
continueOnError: false
- script: $(BIN_DIR)/vpuUnitTests --gtest_output=xml:TEST-vpuUnitTests.xml
- script: $(BIN_DIR)/vpuUnitTests
displayName: 'VPU UT'
continueOnError: false
- script: $(BIN_DIR)/onnxImporterUnitTests --gtest_output=xml:TEST-onnxImporterUnitTests.xml
displayName: 'ONNX Importer UT'
continueOnError: false
- script: $(BIN_DIR)/ieFuncTests --gtest_output=xml:TEST-ieFuncTests.xml
- script: $(BIN_DIR)/ieFuncTests
displayName: 'IE FuncTests'
continueOnError: false
- script: $(BIN_DIR)/cpuFuncTests --gtest_filter=*smoke* --gtest_print_time=1 --gtest_output=xml:TEST-cpuFuncTests.xml
- script: $(BIN_DIR)/cpuFuncTests --gtest_print_time=1
displayName: 'CPU FuncTests'
continueOnError: false
- script: $(BIN_DIR)/MklDnnBehaviorTests --gtest_output=xml:TEST-MklDnnBehaviorTests.xml
- script: $(BIN_DIR)/MklDnnBehaviorTests
displayName: 'MklDnnBehaviorTests'
continueOnError: false
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
python3 $(WORK_DIR)/gtest-parallel/gtest-parallel $(BIN_DIR)/MklDnnFunctionalTests --workers=$(WORKERS_NUMBER) --dump_json_test_results=MklDnnFunctionalTests.json --gtest_filter=*smoke*:-smoke_MobileNet/ModelTransformationsTest.LPT/mobilenet_v2_tf_depthwise_batch1_inPluginDisabled_inTestDisabled_asymmetric* -- --gtest_print_time=1
git clone https://github.com/openvinotoolkit/testdata.git
git clone https://github.com/google/gtest-parallel.git
workingDirectory: $(WORK_DIR)
displayName: 'Clone testdata & gtest-parallel'
- script: |
export DATA_PATH=$(WORK_DIR)/testdata
export MODELS_PATH=$(WORK_DIR)/testdata
python3 $(WORK_DIR)/gtest-parallel/gtest-parallel $(BIN_DIR)/MklDnnFunctionalTests --workers=$(WORKERS_NUMBER) --print_test_times --dump_json_test_results=MklDnnFunctionalTests.json --gtest_filter=-smoke_MobileNet/ModelTransformationsTest.LPT/mobilenet_v2_tf_depthwise_batch1_inPluginDisabled_inTestDisabled_asymmetric* -- --gtest_print_time=1
workingDirectory: $(WORK_DIR)
displayName: 'MklDnnFunctionalTests'
continueOnError: false
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
$(BIN_DIR)/InferenceEngineCAPITests --gtest_output=xml:TEST-InferenceEngineCAPITests.xml
export DATA_PATH=$(WORK_DIR)/testdata
export MODELS_PATH=$(WORK_DIR)/testdata
$(BIN_DIR)/InferenceEngineCAPITests
displayName: 'IE CAPITests'
continueOnError: false
- task: PublishTestResults@2
condition: always()
inputs:
testResultsFormat: 'JUnit' # Options: JUnit, NUnit, VSTest, xUnit, cTest
testResultsFiles: '**/TEST-*.xml'
#searchFolder: '$(BUILD_DIR)'
mergeTestResults: false # Optional
#failTaskOnFailedTests: false # Optional
#testRunTitle: 'Pre/Post-Commit' # Optional
buildPlatform: 'x64' # Optional
buildConfiguration: 'Mac' # Optional
#publishRunAttachments: true # Optional

View File

@@ -1,212 +1,133 @@
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2021/3
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2021/3
jobs:
- job: Win
# About 150% of total time
timeoutInMinutes: 120
pool:
name: WIN_VMSS_VENV_F8S_WU2
variables:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 8
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
MODELS_PATH: $(REPO_DIR)\..\testdata
WORK_DIR: $(Pipeline.Workspace)\_w
BUILD_DIR: D:\build
BIN_DIR: $(REPO_DIR)\bin\intel64
MSVS_VARS_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars64.bat
MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe
INSTALL_DIR: $(WORK_DIR)\install_pkg
SETUPVARS: $(INSTALL_DIR)\bin\setupvars.bat
IB_DIR: C:\Program Files (x86)\IncrediBuild
IB_TESTCONSOLE: $(IB_DIR)\IBTestConsole.exe
TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;$(IB_DIR);%PATH%
steps:
- checkout: self
clean: true
fetchDepth: 1
lfs: false
submodules: recursive
path: openvino
- script: |
powershell -command "Invoke-RestMethod -Headers @{\"Metadata\"=\"true\"} -Method GET -Uri http://169.254.169.254/metadata/instance/compute?api-version=2019-06-01 | format-custom"
where python3
where python
python --version
where java
java -version
wmic computersystem get TotalPhysicalMemory
wmic cpu list
wmic logicaldisk get description,name
wmic VOLUME list
set
displayName: 'System info'
- script: |
rd /Q /S $(WORK_DIR) & mkdir $(WORK_DIR)
rd /Q /S $(BUILD_DIR) & mkdir $(BUILD_DIR)
displayName: 'Make dir'
- script: |
certutil -urlcache -split -f https://incredibuilddiag1wu2.blob.core.windows.net/incredibuild/install_ib_console.bat install_ib_console.bat
call install_ib_console.bat
workingDirectory: $(WORK_DIR)
displayName: 'Install IncrediBuild'
- checkout: self
clean: true
lfs: false
submodules: recursive
path: openvino
- checkout: openvino_contrib
clean: true
lfs: false
submodules: recursive
path: openvino_contrib
- checkout: testdata
clean: true
lfs: true
path: testdata
- script: |
certutil -urlcache -split -f https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-win.zip ninja-win.zip
powershell -command "Expand-Archive -Force ninja-win.zip"
git clone https://github.com/google/gtest-parallel.git
workingDirectory: $(WORK_DIR)
displayName: 'Install dependencies'
displayName: Install Ninja
- script: |
certutil -urlcache -split -f https://incredibuilddiag1wu2.blob.core.windows.net/incredibuild/IBSetupConsole_9_5_0.exe IBSetupConsole_9_5_0.exe
call IBSetupConsole_9_5_0.exe /Install /Components=Agent,oneuse /Coordinator=11.1.0.4 /AGENT:OPENFIREWALL=ON /AGENT:AUTOSELECTPORTS=ON /ADDTOPATH=ON /AGENT:INSTALLADDINS=OFF
workingDirectory: $(WORK_DIR)
displayName: Install IncrediBuild
- script: |
echo Stop IncrediBuild_Agent && net stop IncrediBuild_Agent
reg add HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Xoreax\IncrediBuild\Builder /f /v LastEnabled /d 0 && echo Start IncrediBuild_Agent && net start IncrediBuild_Agent
displayName: Start IncrediBuild
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && cmake -GNinja -DENABLE_FASTER_BUILD=ON -DENABLE_TEMPLATE_PLUGIN=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
call "$(MSVS_VARS_PATH)" && cmake -GNinja -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && "C:\Program Files (x86)\IncrediBuild\BuildConsole.exe" /COMMAND="ninja"
call "$(MSVS_VARS_PATH)" && "C:\Program Files (x86)\IncrediBuild\BuildConsole.exe" /COMMAND="ninja" /MaxCPUS=40
workingDirectory: $(BUILD_DIR)
displayName: 'Build Win'
- script: dir $(REPO_DIR)\bin\ /s
displayName: 'List files'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install'
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml
displayName: 'nGraph UT'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
"$(IB_TESTCONSOLE)" $(BIN_DIR)\InferenceEngineUnitTests.exe --gtest_output=xml:TEST-InferenceEngineUnitTests-IB.xml
displayName: 'IE UT old - IB'
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\ieUnitTests --gtest_output=xml:TEST-ieUnitTests.xml
displayName: 'IE UT'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\cpuUnitTests --gtest_output=xml:TEST-cpuUnitTests.xml
displayName: 'CPU UT'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\gnaUnitTests --gtest_output=xml:TEST-gnaUnitTests.xml
displayName: 'GNA UT'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\vpuUnitTests --gtest_output=xml:TEST-vpuUnitTests.xml
displayName: 'VPU UT'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\onnxImporterUnitTests --gtest_output=xml:TEST-onnxImporterUnitTests.xml
displayName: 'ONNX Importer UT'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\ieFuncTests --gtest_output=xml:TEST-ieFuncTests.xml
displayName: 'IE FuncTests'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\templateFuncTests --gtest_output=xml:TEST-templateFuncTests.xml
displayName: 'TEMPLATE FuncTests'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
"$(IB_TESTCONSOLE)" $(BIN_DIR)\cpuFuncTests.exe --gtest_filter=*smoke*:-*CompareWithRefs/base_size=16_pre_nms_topn=100_post_nms_topn=100_nms_thresh=0.7_feat_stride=1_min_size=1_ratio* --gtest_output=xml:TEST-cpuFuncTests-IB.xml /testlevel=24
displayName: 'CPU FuncTests - IB'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
$(BIN_DIR)\MklDnnBehaviorTests --gtest_output=xml:TEST-MklDnnBehaviorTests.xml
displayName: 'MklDnnBehaviorTests'
continueOnError: false
# Add for gtest-parallel, it hangs now (CVS-33386)
#python $(WORK_DIR)\gtest-parallel\gtest-parallel $(BIN_DIR)\MklDnnFunctionalTests --workers=$(WORKERS_NUMBER) --dump_json_test_results=MklDnnFunctionalTests.json --gtest_filter=*smoke* -- --gtest_print_time=1
- script: |
set PATH=$(TEST_ENV_PATH)
set DATA_PATH=$(MODELS_PATH)
set MODELS_PATH=$(MODELS_PATH)
rem "$(IB_TESTCONSOLE)" $(BIN_DIR)\MklDnnFunctionalTests.exe --gtest_filter=*smoke* --gtest_output=xml:TEST-MklDnnFunctionalTests-IB.xml
$(BIN_DIR)\MklDnnFunctionalTests.exe --gtest_filter=*smoke* --gtest_output=xml:TEST-MklDnnFunctionalTests.xml
displayName: 'MklDnnFunctionalTests'
continueOnError: false
- script: |
set PATH=$(TEST_ENV_PATH)
set DATA_PATH=$(MODELS_PATH)
set MODELS_PATH=$(MODELS_PATH)
$(BIN_DIR)\InferenceEngineCAPITests --gtest_output=xml:TEST-InferenceEngineCAPITests.xml
displayName: 'IE CAPITests'
continueOnError: false
- task: PublishTestResults@2
condition: always()
inputs:
testResultsFormat: 'JUnit' # Options: JUnit, NUnit, VSTest, xUnit, cTest
testResultsFiles: '**/TEST-*.xml'
#searchFolder: '$(BUILD_DIR)'
mergeTestResults: false # Optional
#failTaskOnFailedTests: false # Optional
#testRunTitle: 'Pre/Post-Commit' # Optional
buildPlatform: 'x64' # Optional
buildConfiguration: 'Windows' # Optional
#publishRunAttachments: true # Optional
- script: echo Stop IncrediBuild_Agent && net stop IncrediBuild_Agent
displayName: Stop IncrediBuild
continueOnError: true
enabled: false
- script: dir $(REPO_DIR)\bin\ /s /b
displayName: 'List files'
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*
displayName: 'nGraph UT'
continueOnError: false
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\InferenceEngineUnitTests --gtest_print_time=1
displayName: 'IE UT old'
continueOnError: false
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\ieUnitTests
displayName: 'IE UT'
continueOnError: false
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\cpuUnitTests
displayName: 'CPU UT'
continueOnError: false
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\gnaUnitTests
displayName: 'GNA UT'
continueOnError: false
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\vpuUnitTests
displayName: 'VPU UT'
continueOnError: false
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\ieFuncTests
displayName: 'IE FuncTests'
continueOnError: false
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\cpuFuncTests --gtest_print_time=1
displayName: 'CPU FuncTests'
continueOnError: false
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;%PATH%
$(BIN_DIR)\MklDnnBehaviorTests
displayName: 'MklDnnBehaviorTests'
continueOnError: false
- script: |
git clone https://github.com/openvinotoolkit/testdata.git
git clone https://github.com/google/gtest-parallel.git
workingDirectory: $(BUILD_DIR)
displayName: 'Clone testdata & gtest-parallel'
# Add for gtest-parallel, it hangs now (CVS-33386)
#python $(BUILD_DIR)\gtest-parallel\gtest-parallel $(BIN_DIR)\MklDnnFunctionalTests --workers=$(WORKERS_NUMBER) --print_test_times --dump_json_test_results=MklDnnFunctionalTests.json -- --gtest_print_time=1
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.0\opencv\bin;%PATH%
set DATA_PATH=$(BUILD_DIR)\testdata
set MODELS_PATH=$(BUILD_DIR)\testdata
$(BIN_DIR)\MklDnnFunctionalTests --gtest_print_time=1
displayName: 'MklDnnFunctionalTests'
continueOnError: false
- script: |
set PATH=$(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.0\opencv\bin;%PATH%
set DATA_PATH=$(BUILD_DIR)\testdata
set MODELS_PATH=$(BUILD_DIR)\testdata
$(BIN_DIR)\InferenceEngineCAPITests
displayName: 'IE CAPITests'
continueOnError: false

View File

@@ -1,89 +0,0 @@
jobs:
- job: WinCC
# About 150% of total time
timeoutInMinutes: 120
pool:
name: WIN_VMSS_VENV_F8S_WU2
variables:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
WORKERS_NUMBER: 8
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
MODELS_PATH: $(REPO_DIR)\..\testdata
WORK_DIR: $(Pipeline.Workspace)\_w
BUILD_DIR: D:\build
BIN_DIR: $(REPO_DIR)\bin\intel64
MSVS_VARS_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars64.bat
MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe
INSTALL_DIR: $(WORK_DIR)\install_pkg
SETUPVARS: $(INSTALL_DIR)\bin\setupvars.bat
IB_DIR: C:\Program Files (x86)\IncrediBuild
IB_TESTCONSOLE: $(IB_DIR)\IBTestConsole.exe
TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;$(IB_DIR);%PATH%
steps:
- script: |
powershell -command "Invoke-RestMethod -Headers @{\"Metadata\"=\"true\"} -Method GET -Uri http://169.254.169.254/metadata/instance/compute?api-version=2019-06-01 | format-custom"
where python3
where python
python --version
where java
java -version
wmic computersystem get TotalPhysicalMemory
wmic cpu list
wmic logicaldisk get description,name
wmic VOLUME list
set
displayName: 'System info'
- script: |
rd /Q /S $(WORK_DIR) & mkdir $(WORK_DIR)
rd /Q /S $(BUILD_DIR) & mkdir $(BUILD_DIR)
displayName: 'Make dir'
- script: |
certutil -urlcache -split -f https://incredibuilddiag1wu2.blob.core.windows.net/incredibuild/install_ib_console.bat install_ib_console.bat
call install_ib_console.bat
workingDirectory: $(WORK_DIR)
displayName: 'Install IncrediBuild'
- checkout: self
clean: true
lfs: false
submodules: recursive
path: openvino
- script: |
certutil -urlcache -split -f https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-win.zip ninja-win.zip
powershell -command "Expand-Archive -Force ninja-win.zip"
workingDirectory: $(WORK_DIR)
displayName: 'Install dependencies'
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && cmake -GNinja -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PROFILING_ITT=ON -DSELECTIVE_BUILD=COLLECT -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
workingDirectory: $(BUILD_DIR)
displayName: 'CMake'
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && "C:\Program Files (x86)\IncrediBuild\BuildConsole.exe" /COMMAND="ninja"
workingDirectory: $(BUILD_DIR)
displayName: 'Build Win'
- script: dir $(REPO_DIR)\bin\ /s
displayName: 'List files'
- script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
workingDirectory: $(BUILD_DIR)
displayName: 'Install'
- script: echo Stop IncrediBuild_Agent && net stop IncrediBuild_Agent
displayName: Stop IncrediBuild
continueOnError: true
enabled: false

View File

@@ -57,6 +57,8 @@ RUN cmake .. \
-DENABLE_OPENCV=OFF \
-DENABLE_CPPLINT=OFF \
-DENABLE_TESTS=OFF \
-DENABLE_BEH_TESTS=OFF \
-DENABLE_FUNCTIONAL_TESTS=OFF \
-DENABLE_MKL_DNN=ON \
-DENABLE_CLDNN=OFF \
-DENABLE_PROFILING_ITT=OFF \
@@ -73,8 +75,8 @@ RUN make -j $(nproc) install
# Run tests via tox
WORKDIR /openvino/ngraph/python
ENV NGRAPH_CPP_BUILD_PATH=/openvino/dist/deployment_tools/ngraph
ENV LD_LIBRARY_PATH=/openvino/dist/deployment_tools/ngraph/lib
ENV NGRAPH_CPP_BUILD_PATH=/openvino/dist
ENV LD_LIBRARY_PATH=/openvino/dist/lib
ENV NGRAPH_ONNX_IMPORT_ENABLE=TRUE
ENV PYTHONPATH=/openvino/bin/intel64/Release/lib/python_api/python3.8:${PYTHONPATH}
RUN git clone --recursive https://github.com/pybind/pybind11.git -b v2.5.0 --depth 1

View File

@@ -4,25 +4,6 @@
DOCKER_CONTAINER_NAME= "openvino-onnx-ci-container"
DOCKER_IMAGE_TAG = "openvino-onnx-ci-image"
// workaround for aborting previous builds on PR update
@NonCPS
def stopPreviousRunningBuilds() {
def jobname = env.JOB_NAME
if (jobname.startsWith("onnx/openvino_ci/PR")){
def buildnum = env.BUILD_NUMBER.toInteger()
def job = Jenkins.instance.getItemByFullName(jobname)
def job_newest = job.builds.first()
for (build in job.builds.reverse()[0..<-1]) {
if (build.isBuilding()){
echo "Stop task = ${build} because newest #${job_newest} is on the way"
build.doStop();
continue;
}
}
}
}
def getGitPrInfo(String project) {
def gitPrInfo = [
prAuthorEmail : "",
@@ -77,14 +58,7 @@ def gitSubmoduleUpdate(String repository_name) {
}
}
def updateModels() {
sh """
./ngraph/python/tests/test_onnx/model_zoo_preprocess.sh -d ${HOME}/ONNX_CI/data -o
"""
}
def buildDockerImage() {
updateModels()
sh """
docker build --tag=${DOCKER_IMAGE_TAG} --file=.ci/openvino-onnx/Dockerfile \
--build-arg http_proxy=http://proxy-chain.intel.com:911/ \
@@ -95,12 +69,10 @@ def buildDockerImage() {
def runTests() {
sh """
docker run --name ${DOCKER_CONTAINER_NAME} \
--volume ${HOME}/ONNX_CI/data/model_zoo:/root/.onnx/model_zoo \
${DOCKER_IMAGE_TAG}
--volume ${HOME}/ONNX_CI/onnx_models/.onnx:/root/.onnx ${DOCKER_IMAGE_TAG}
"""
}
pipeline {
agent {
label "OpenVino"
@@ -111,12 +83,10 @@ pipeline {
}
options {
skipDefaultCheckout true
timeout(activity: true, time: 60, unit: 'MINUTES')
}
stages {
stage("Clone repository") {
steps{
stopPreviousRunningBuilds()
dir("${WORKDIR}") {
checkout scm
}
@@ -125,14 +95,14 @@ pipeline {
}
stage("Prepare Docker environment") {
steps{
dir("${WORKDIR}") {
dir("${WORKDIR}") {
buildDockerImage()
}
}
}
stage("Run tests") {
options {
timeout(time: 60, unit: 'MINUTES')
timeout(time: 10, unit: 'MINUTES')
}
steps{
runTests()

View File

@@ -1,65 +0,0 @@
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
timeout(30)
{
    node(LABEL) {
        BUILD_WORKSPACE = "$WORKSPACE/$BUILD_NUMBER"
        WATCHDOG_ROOT = "$BUILD_WORKSPACE/.ci/openvino-onnx/watchdog"
        VENV_PATH = "${BUILD_WORKSPACE}/.wdvenv"
        try {
            stage("Clone repository") {
                dir ("$BUILD_WORKSPACE") {
                    checkout([$class: 'GitSCM', branches: [[name: "*/$BRANCH"]],
                        doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'CloneOption', timeout: 30]], submoduleCfg: [],
                        userRemoteConfigs: [[credentialsId: "${GITHUB_KEY}", url: "${OPEN_VINO_URL}"]]])
                }
            }
            stage("Prepare environment") {
                sh """#!/bin/bash
                    if [ ! -d ${VENV_PATH} ]; then
                        python3 -m venv ${VENV_PATH}
                        source ${VENV_PATH}/bin/activate
                        pip install -r ${WATCHDOG_ROOT}/requirements.txt
                    fi
                """
            }
            stage("Run script") {
                withCredentials([
                    usernamePassword(credentialsId: '7157091e-bc04-42f0-99fd-dc4da2922a55',
                                     usernameVariable: 'username',
                                     passwordVariable: 'password')])
                {
                    dir ("$BUILD_WORKSPACE") {
                        sh """#!/bin/bash
                            source ${VENV_PATH}/bin/activate
                            export PYTHONHTTPSVERIFY=0
                            python ${WATCHDOG_ROOT}/src/main.py \
                                --msteams-url=${MSTEAMS_URL_FILE} \
                                --github-credentials '${username}' '${password}' \
                                --github-org=${GITHUB_ORG} \
                                --github-project=${GITHUB_PROJECT} \
                                --jenkins-token=${JENKINS_TOKEN_FILE} \
                                --jenkins-server=${JENKINS_SERVER} \
                                --jenkins-user=${JENKINS_USER} \
                                --ci-job=${CI_JOB_NAME} \
                                --watchdog-job=${WATCHDOG_JOB_NAME}
                        """
                    }
                }
            }
        } catch (e) {
            echo "$e"
            currentBuild.result = "FAILURE"
        } finally {
            stage("Cleanup") {
                sh """
                    cd $BUILD_WORKSPACE
                    rm -rf ..?* .[!.]* *
                """
            }
        }
    }
}


@@ -1,6 +0,0 @@
python-jenkins==1.7.0
retrying==1.3.3
pygithub==1.51
timeout-decorator==0.4.1
requests==2.23.0
wheel


@@ -1,108 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import logging
import timeout_decorator
from datetime import datetime
from retrying import retry
from github import Github, GithubException
# Logging
logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
_RETRY_LIMIT = 3
_RETRY_COOLDOWN_MS = 2000
_REQUEST_TIMEOUT_S = 10
class GitWrapper:
"""Class wrapping PyGithub API.
The purpose of this class is to wrap the PyGithub API methods used in Watchdog, making them
less error-prone and more convenient to use. Docs for the API, including the wrapped methods, can be found at:
https://pygithub.readthedocs.io/en/latest/introduction.html
:param github_credentials: Credentials used for GitHub
:param repository: GitHub repository name
:param project: GitHub project name
:type github_credentials: String
:type repository: String
:type project: String
"""
def __init__(self, github_credentials, repository, project):
self.git = Github(*github_credentials)
self.repository = repository
self.project = project
self.github_credentials = github_credentials
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_git_time(self):
"""Retrieve time from GitHub.
Used to reliably determine time during Watchdog run.
:return: Datetime object describing current time
:rtype: datetime
"""
try:
datetime_object = self._get_git_time()
except ValueError as e:
raise GitWrapperError(str(e))
except GithubException as e:
message = 'GitHub Exception during API status retrieval. Exception: {}'.format(str(e))
raise GitWrapperError(message)
except timeout_decorator.TimeoutError:
message = 'GitHub Exception during API status retrieval. Timeout during API request.'
raise GitWrapperError(message)
return datetime_object
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_pull_requests(self):
"""Retrieve paginated list of pull requests from GitHub.
:return: Paginated list of Pull Requests in GitHub repo
:rtype: github.PaginatedList.PaginatedList of github.PullRequest.PullRequest
"""
try:
prs = self._get_pull_requests()
except GithubException as e:
message = 'GitHub Exception during API status retrieval. Exception: {}'.format(str(e))
raise GitWrapperError(message)
return prs
@timeout_decorator.timeout(_REQUEST_TIMEOUT_S)
def _get_git_time(self):
"""Private method retrieving time from GitHub.
:return: Datetime object describing current time
:rtype: datetime
"""
datetime_string = self.git.get_api_status().raw_headers.get('date', '')
datetime_format = '%a, %d %b %Y %H:%M:%S %Z'
datetime_object = datetime.strptime(datetime_string, datetime_format)
return datetime_object
@timeout_decorator.timeout(_REQUEST_TIMEOUT_S)
def _get_pull_requests(self):
"""Private method retrieving pull requests from GitHub.
:return: Paginated list of Pull Requests in GitHub repo
:rtype: github.PaginatedList.PaginatedList of github.PullRequest.PullRequest
"""
return self.git.get_organization(self.repository).get_repo(self.project).get_pulls()
class GitWrapperError(Exception):
"""Base class for exceptions raised in GitWrapper.
:param message Explanation of the error
"""
def __init__(self, message):
self.message = message
log.exception(message)
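Taken together, the wrapper above is typically driven as follows. This is a minimal, hypothetical usage sketch rather than code from the change set: the credential pair is a placeholder, and the organization/repository names simply mirror the defaults used by the watchdog's main.py.

```python
# Hypothetical usage of GitWrapper; the credentials below are placeholders.
from git_wrapper import GitWrapper, GitWrapperError

wrapper = GitWrapper(('ci-bot-user', '<personal-access-token>'),
                     repository='openvinotoolkit', project='openvino')
try:
    print('GitHub time:', wrapper.get_git_time())
    for pr in wrapper.get_pull_requests():
        print('Open PR:', pr.number)  # PyGithub PullRequest objects
except GitWrapperError as err:
    print('GitHub access failed:', err.message)
```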


@@ -1,91 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import requests
import jenkins
import logging
from retrying import retry
# Logging
logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
_RETRY_LIMIT = 3
_RETRY_COOLDOWN_MS = 5000
class JenkinsWrapper:
"""Class wrapping Python-Jenkins API.
The purpose of this class is to wrap the Python-Jenkins API methods used in Watchdog, making them
less error-prone and more convenient to use. Docs for the API, including the wrapped methods, can be found at:
https://python-jenkins.readthedocs.io/en/latest/
:param jenkins_token: Token used for Jenkins
:param jenkins_user: Username used to connect to Jenkins
:param jenkins_server: Jenkins server address
:type jenkins_token: String
:type jenkins_user: String
:type jenkins_server: String
"""
def __init__(self, jenkins_token, jenkins_user, jenkins_server):
self.jenkins_server = jenkins_server
self.jenkins = jenkins.Jenkins(jenkins_server, username=jenkins_user,
password=jenkins_token)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_build_console_output(self, job_name, build_number):
return self.jenkins.get_build_console_output(job_name, build_number)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_job_info(self, job_name):
return self.jenkins.get_job_info(job_name)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_build_info(self, job_name, build_number):
return self.jenkins.get_build_info(job_name, build_number)
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_queue_item(self, queue_id):
"""Attempt to retrieve Jenkins job queue item.
Exception communicating queue doesn't exist is expected,
in that case method returns empty dict.
:param queue_id: Jenkins job queue ID number
:type queue_id: int
:return: Dictionary representing Jenkins job queue item
:rtype: dict
"""
try:
return self.jenkins.get_queue_item(queue_id)
except Exception as e:
# Exception 'queue does not exist' is expected behaviour when job is running
if 'queue' in str(e) and 'does not exist' in str(e):
return {}
else:
raise
@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def get_idle_ci_hosts(self):
"""Query Jenkins for idle servers.
Send a GET request to the Jenkins server, querying for idle servers
labeled for the OpenVino-ONNX CI job.
:return: Number of idle hosts delegated to OpenVino-ONNX CI
:rtype: int
"""
jenkins_request_url = self.jenkins_server + 'label/ci&&onnx/api/json?pretty=true'
try:
log.info('Sending request to Jenkins: %s', jenkins_request_url)
r = requests.Request(method='GET', url=jenkins_request_url, verify=False)
response = self.jenkins.jenkins_request(r).json()
return int(response['totalExecutors']) - int(response['busyExecutors'])
except Exception as e:
log.exception('Failed to send request to Jenkins!\nException message: %s', str(e))
raise
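Both wrappers rely on the same `retrying` decorator pattern, so a minimal sketch of its semantics may help (the function below is illustrative, not part of the repository): the decorated call is re-attempted up to `stop_max_attempt_number` times, sleeping `wait_fixed` milliseconds between attempts, and only the final failure propagates.

```python
from retrying import retry

_RETRY_LIMIT = 3
_RETRY_COOLDOWN_MS = 5000

@retry(stop_max_attempt_number=_RETRY_LIMIT, wait_fixed=_RETRY_COOLDOWN_MS)
def flaky_request():
    # Any exception raised here triggers a retry; after the third failed
    # attempt the exception propagates to the caller, as in the wrappers.
    raise ConnectionError('transient network error')
```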


@@ -1,89 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import sys
from watchdog import Watchdog
DEFAULT_MSTEAMS_URL_FILE = '/home/lab_nerval/tokens/msteams_url'
DEFAULT_GITHUB_ORGANIZATION = 'openvinotoolkit'
DEFAULT_GITHUB_PROJECT = 'openvino'
DEFAULT_JENKINS_TOKEN_FILE = '/home/lab_nerval/tokens/crackerjack'
DEFAULT_JENKINS_SERVER = 'https://crackerjack.intel.com/'
DEFAULT_JENKINS_USER = 'lab_nerval'
DEFAULT_CI_JOB_NAME = 'onnx/OpenVino_CI'
DEFAULT_WATCHDOG_JOB_NAME = 'onnx/ci_watchdog'
def main(args):
"""
Read args passed to script, load tokens and run watchdog.
Keyword arguments:
:param args: arguments parsed by argparse ArgumentParser
:return: returns status code 0 on successful completion
"""
jenkins_server = args.jenkins_server.strip()
jenkins_user = args.jenkins_user.strip()
jenkins_token = open(args.jenkins_token).read().replace('\n', '').strip()
msteams_url = open(args.msteams_url).read().replace('\n', '').strip()
github_credentials = args.github_credentials
github_org = args.github_org
github_project = args.github_project
ci_job = args.ci_job.strip()
watchdog_job = args.watchdog_job.strip()
quiet = args.quiet
wd = Watchdog(jenkins_token=jenkins_token,
jenkins_server=jenkins_server,
jenkins_user=jenkins_user,
github_credentials=github_credentials,
git_org=github_org,
git_project=github_project,
msteams_url=msteams_url,
ci_job_name=ci_job,
watchdog_job_name=watchdog_job)
wd.run(quiet=quiet)
return 0
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--msteams-url', help='Path to MS Teams channel url to communicate messages.',
default=DEFAULT_MSTEAMS_URL_FILE, action='store', required=False)
parser.add_argument('--github-credentials', help='GitHub user credentials to access repo.',
nargs="+", required=True)
parser.add_argument('--github-org', help='Name of organization on GitHub.',
default=DEFAULT_GITHUB_ORGANIZATION, action='store', required=False)
parser.add_argument('--github-project', help='Name of project on GitHub.',
default=DEFAULT_GITHUB_PROJECT, action='store', required=False)
parser.add_argument('--jenkins-token', help='Path to Jenkins user token to access build info.',
default=DEFAULT_JENKINS_TOKEN_FILE, action='store', required=False)
parser.add_argument('--jenkins-server', help='Jenkins server address.',
default=DEFAULT_JENKINS_SERVER, action='store', required=False)
parser.add_argument('--jenkins-user', help='Jenkins user used to log in.',
default=DEFAULT_JENKINS_USER, action='store', required=False)
parser.add_argument('--ci-job', help='Jenkins CI job name.',
default=DEFAULT_CI_JOB_NAME, action='store', required=False)
parser.add_argument('--watchdog-job', help='Jenkins CI Watchdog job name.',
default=DEFAULT_WATCHDOG_JOB_NAME, action='store', required=False)
parser.add_argument('--quiet', help="Quiet mode - doesn\'t send message to communicator.",
action='store_true', required=False)
args = parser.parse_args()
sys.exit(main(args))


@@ -1,128 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import requests
class MSTeamsCommunicator:
"""Class communicating with MSTeams using Incoming Webhook.
The purpose of this class is to use the MSTeams API to send messages.
Docs for used API, including wrapped methods can be found at:
https://docs.microsoft.com/en-us/outlook/actionable-messages/send-via-connectors
"""
def __init__(self, _ci_alerts_channel_url):
self._ci_alerts_channel_url = _ci_alerts_channel_url
self._queued_messages = {
self._ci_alerts_channel_url: [],
}
@property
def messages(self):
"""
Get list of queued messages.
:return: List of queued messages
:rtype: List[String]
"""
return self._queued_messages.values()
def queue_message(self, message):
"""
Queue message to be sent later.
:param message: Message content
:type message: String
"""
self._queued_messages[self._ci_alerts_channel_url].append(message)
def _parse_text(self, watchdog_log, message):
"""
Parse text to display as alert.
:param watchdog_log: Watchdog log content
:param message: Unparsed message content
:type watchdog_log: String
:type message: String
"""
message_split = message.split('\n')
log_url = None
if len(message_split) == 3:
log_url = message_split[-1]
title = message_split[0]
text = message_split[1]
header = watchdog_log.split(' - ')
header_formatted = '{} - [Watchdog Log]({})'.format(header[0], header[1])
return title, log_url, '{}\n\n{}'.format(header_formatted, text)
def _json_request_content(self, title, log_url, text_formatted):
"""
Create final json request to send message to MS Teams channel.
:param title: Title of alert
:param log_url: URL to PR
:param text_formatted: General content of alert - finally formatted
:type title: String
:type log_url: String
:type text_formatted: String
"""
data = {
'@context': 'https://schema.org/extensions',
'@type': 'MessageCard',
'themeColor': '0072C6',
'title': title,
'text': text_formatted,
'potentialAction':
[
{
'@type': 'OpenUri',
'name': 'Open PR',
'targets':
[
{
'os': 'default',
'uri': log_url,
},
],
},
],
}
return data
def _send_to_channel(self, watchdog_log, message_queue, channel_url):
"""
Send MSTeams message to specified channel.
:param watchdog_log: Watchdog log content
:param message_queue: Queued messages to send
:param channel_url: Channel url
:type watchdog_log: String
:type message_queue: List[String]
:type channel_url: String
"""
for message in message_queue:
title, log_url, text_formatted = self._parse_text(watchdog_log, message)
data = self._json_request_content(title, log_url, text_formatted)
try:
requests.post(url=channel_url, json=data)
except Exception as ex:
raise Exception('!!CRITICAL!! MSTeamsCommunicator: Could not send message '
'due to {}'.format(ex))
def send_message(self, watchdog_log, quiet=False):
"""
Send queued messages as single communication.
:param watchdog_log: Watchdog log content
:param quiet: Flag for disabling sending report through MS Teams
:type watchdog_log: String
:type quiet: Boolean
"""
for channel, message_queue in self._queued_messages.items():
if not quiet and message_queue:
self._send_to_channel(watchdog_log, message_queue, channel)
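For reference, the communicator is used in two phases, queue then send. The sketch below is hypothetical (the webhook URL, PR number, and build link are placeholders); it assumes the three-line message format that `_parse_text()` expects and the `job- build N - link` watchdog-log format produced by the watchdog.

```python
from ms_teams_communicator import MSTeamsCommunicator

hook = MSTeamsCommunicator('https://example.webhook.office.com/placeholder')
# Three lines: header, body, PR URL - the layout _parse_text() splits on.
hook.queue_message('!!! OpenVino-ONNX CI Error !!!\n'
                   'PR# 1234: missing status on GitHub after 35.0 minutes.\n'
                   'https://github.com/openvinotoolkit/openvino/pull/1234')
# The watchdog log splits on ' - ' into a job/build part and a link part.
hook.send_message('onnx/ci_watchdog- build 42 - https://jenkins.example/42',
                  quiet=False)
```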


@@ -1,505 +0,0 @@
#!/usr/bin/python3
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import datetime
import time
import re
import logging
import requests
from ms_teams_communicator import MSTeamsCommunicator
from jenkins_wrapper import JenkinsWrapper
from jenkins import NotFoundException
from git_wrapper import GitWrapper, GitWrapperError
import os
import json
# Logging
logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s')
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
# Watchdog static constant variables
_SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
_BUILD_DURATION_THRESHOLD = datetime.timedelta(minutes=60)
_CI_START_THRESHOLD = datetime.timedelta(minutes=30)
_AWAITING_JENKINS_THRESHOLD = datetime.timedelta(minutes=5)
_WATCHDOG_DIR = os.path.expanduser('~')
_PR_REPORTS_CONFIG_KEY = 'pr_reports'
_CI_BUILD_FAIL_MESSAGE = 'ERROR: py3: commands failed'
_CI_BUILD_SUCCESS_MESSAGE = 'py3: commands succeeded'
_GITHUB_CI_CHECK_NAME = 'OpenVINO-ONNX'
INTERNAL_ERROR_MESSAGE_HEADER = '!!! --- !!! INTERNAL WATCHDOG ERROR !!! --- !!!'
ERROR_MESSAGE_HEADER = '!!! OpenVino-ONNX CI Error !!!'
WARNING_MESSAGE_HEADER = 'OpenVino-ONNX CI WARNING'
INFO_MESSAGE_HEADER = 'OpenVino-ONNX CI INFO'
class Watchdog:
"""Class describing OpenVino-ONNX-CI Watchdog.
Watchdog connects to GitHub and retrieves the list of current pull requests (PRs) in
OpenVino repository. Then it connects to specified Jenkins server to
check CI jobs associated with every PR. Watchdog verifies time durations for Jenkins
initial response, job queue and execution against time threshold constants. Every failure
is logged and reported through the MS Teams communicator.
:param jenkins_token: Token used for Jenkins
:param jenkins_server: Jenkins server address
:param jenkins_user: Username used to connect to Jenkins
:param github_credentials: Credentials used to connect to GitHub
:param msteams_url: URL used to connect to MS Teams channel
:param ci_job_name: OpenVino-ONNX CI job name used in Jenkins
:param watchdog_job_name: Watchdog job name used in Jenkins
:type jenkins_token: String
:type jenkins_server: String
:type jenkins_user: String
:type github_credentials: String
:type msteams_url: String
:type ci_job_name: String
:type watchdog_job_name: String
.. note::
Watchdog and OpenVino-ONNX CI job must be placed on the same Jenkins server.
"""
def __init__(self, jenkins_token, jenkins_server, jenkins_user, github_credentials, git_org,
git_project, msteams_url, ci_job_name, watchdog_job_name):
self._config_path = os.path.join(_WATCHDOG_DIR, '.{}_ci_watchdog.json'.format(git_project))
# Jenkins Wrapper object for CI job
self._jenkins = JenkinsWrapper(jenkins_token,
jenkins_user=jenkins_user,
jenkins_server=jenkins_server)
# Load GitHub token and log in, retrieve pull requests
self._git = GitWrapper(github_credentials, repository=git_org, project=git_project)
# Create MS Teams api object
self._msteams_hook = MSTeamsCommunicator(msteams_url)
self._ci_job_name = ci_job_name.lower()
self._watchdog_job_name = watchdog_job_name
# Read config file
self._config = self._read_config_file()
# Time at Watchdog initiation
self._now_time = datetime.datetime.now()
self._current_prs = {}
self._ms_teams_enabled = True
def run(self, quiet=False):
"""Run main watchdog logic.
Retrieve list of pull requests and pass it to the method responsible for checking them.
:param quiet: Flag for disabling sending report through communicator
:type quiet: Boolean
"""
try:
pull_requests = self._git.get_pull_requests()
except GitWrapperError:
message = 'Failed to retrieve Pull Requests!'
log.exception(message)
self._queue_message(message, message_severity='internal')
# Check all pull requests
for pr in pull_requests:
try:
self._check_pr(pr)
except Exception as e:
log.exception(str(e))
self._queue_message(str(e), message_severity='internal', pr=pr)
self._update_config()
self._send_message(quiet=quiet)
def _read_config_file(self):
"""Read Watchdog config file stored on the system.
The file stores every fail already reported along with timestamp. This
mechanism is used to prevent Watchdog from reporting the same failure
multiple times. If there is no config under the expected path, an
appropriate data structure is created and returned.
:return: Returns dict of dicts with reported fails with their timestamps
:rtype: dict of dicts
"""
if os.path.isfile(self._config_path):
log.info('Reading config file in: {}'.format(self._config_path))
file = open(self._config_path, 'r')
data = json.load(file)
else:
log.info('No config file found in: {}'.format(self._config_path))
data = {_PR_REPORTS_CONFIG_KEY: {}}
return data
def _check_pr(self, pr):
"""Check pull request (if there's no reason to skip).
Retrieve list of statuses for every PR's last commit and interpret them. Filters out statuses
unrelated to OpenVino-ONNX Jenkins CI and passes relevant statuses to method that interprets them.
If no commit statuses related to Jenkins are available after time defined by
**_AWAITING_JENKINS_THRESHOLD** calls appropriate method to check for builds waiting in queue.
:param pr: GitHub Pull Requests
:type pr: github.PullRequest.PullRequest
"""
log.info('===============================================')
log.info('Checking PR#{}'.format(pr.number))
# Get last Jenkins status
last_status = self._get_last_status(pr)
# Append PR checked in current run for Watchdog config
self._current_prs[str(pr.number)] = self._get_pr_timestamps(pr, last_status)
if self._should_ignore(pr) or self._updated_since_last_run(pr):
log.info('Ignoring PR#{}'.format(pr.number))
return
# Calculate time passed since PR update (any commit, merge or comment)
pr_time_delta = self._now_time - pr.updated_at
if last_status:
# Interpret found CI statuses
log.info('Last status: {} at {}'.format(last_status.description, last_status.updated_at))
self._interpret_status(last_status, pr)
elif pr_time_delta > _CI_START_THRESHOLD:
# If there's no status after assumed time - check if build is waiting in queue
log.info('CI for PR {}: NO JENKINS STATUS YET'.format(pr.number))
self._check_missing_status(pr)
@staticmethod
def _get_pr_timestamps(pr, last_status):
"""Get dict containing PR timestamp and last status timestamp.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Dictionary with PR and last status update timestamps
:rtype: dict
"""
pr_timestamp = time.mktime(pr.updated_at.timetuple())
if last_status:
status_timestamp = time.mktime(last_status.updated_at.timetuple())
else:
status_timestamp = None
pr_dict = {'pr_timestamp': pr_timestamp,
'status_timestamp': status_timestamp}
return pr_dict
@staticmethod
def _get_last_status(pr):
"""Get last commit status posted from Jenkins.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Either last PR status posted from Jenkins or None
:rtype: github.CommitStatus.CommitStatus
"""
# Find last commit in PR
last_commit = pr.get_commits().reversed[0]
# Get statuses and filter them to contain only those related to Jenkins CI
# and check if CI in Jenkins started
statuses = last_commit.get_statuses()
jenk_statuses = [stat for stat in statuses if
_GITHUB_CI_CHECK_NAME in stat.context]
try:
last_status = jenk_statuses[0]
except IndexError:
last_status = None
return last_status
@staticmethod
def _should_ignore(pr):
"""Determine if PR should be ignored.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Returns True if PR should be ignored
:rtype: Bool
"""
# Ignore PR if it has WIP label or WIP in title
if 'WIP' in pr.title:
log.info('PR#{} should be ignored. WIP tag in title.'.format(pr.number))
return True
label_names = [label.name for label in pr.labels]
if 'WIP' in label_names:
log.info('PR#{} should be ignored. WIP label present.'.format(pr.number))
return True
# Ignore PR if base ref is not master
if 'master' not in pr.base.ref:
log.info('PR#{} should be ignored. Base ref is not master'.format(pr.number))
return True
# Ignore PR if mergeable state is 'dirty' or 'behind'.
# Practically this ignores PR in case of merge conflicts
ignored_mergeable_states = ['behind', 'dirty', 'draft']
if pr.mergeable_state in ignored_mergeable_states:
log.info('PR#{} should be ignored. Mergeable state is {}. '.format(pr.number, pr.mergeable_state))
return True
# If no criteria for ignoring PR are met - return false
return False
def _updated_since_last_run(self, pr):
# Ignore if PR was already checked and there was no update in meantime
pr_number = str(pr.number)
current_pr_timestamps = self._current_prs.get(pr_number)
last_pr_timestamps = self._config[_PR_REPORTS_CONFIG_KEY].get(pr_number)
if current_pr_timestamps == last_pr_timestamps:
log.info('PR#{} - No update since last check'.format(pr.number))
return True
else:
return False
def _check_missing_status(self, pr):
"""Verify if missing status is expected.
This method checks if the CI build for the last commit was scheduled and is still
waiting in the queue for an executor.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
"""
pr_time_delta = self._now_time - pr.updated_at
try:
build_number = self._build_scheduled(pr)
if self._build_in_queue(pr, build_number):
message = ('PR# {}: build waiting in queue after {} minutes.'
.format(pr.number, pr_time_delta.seconds / 60))
severity = 'warning'
else:
message = ('PR# {}: missing status on GitHub after {} minutes.'
.format(pr.number, pr_time_delta.seconds / 60))
severity = 'error'
self._queue_message(message, message_severity=severity, pr=pr)
except TypeError:
log.info('Committer outside of OpenVino organization')
def _build_scheduled(self, pr):
"""Check if Jenkins build corresponding to PR was scheduled.
This method takes last Jenkins build for given PR and compares hash from Jenkins console output
and sha from PR object to determine if CI build for appropriate commit was scheduled.
:param pr: Single PR being currently checked
:type pr: github.PullRequest.PullRequest
:return: Returns build number or -1 if no build found
:rtype: int
"""
pr_number = str(pr.number)
project_name_full = self._ci_job_name + '/PR-' + pr_number
try:
# Retrieve console output from last Jenkins build for job corresponding to this PR
last_build_number = self._jenkins.get_job_info(project_name_full)['lastBuild']['number']
console_output = self._jenkins.get_build_console_output(project_name_full, last_build_number)
# Check if CI build was scheduled - commit hash on GH must match hash in last Jenkins build console output
# Retrieve hash from Jenkins output
match_string = '(?:Obtained .ci/[a-zA-Z/]+Jenkinsfile from ([a-z0-9]{40}))'
retrieved_sha = re.search(match_string, console_output).group(1)
if retrieved_sha == pr.get_commits().reversed[0].sha:
return last_build_number
else:
return -1
except (NotFoundException, AttributeError, requests.exceptions.HTTPError):
message = ('PR #{}: Jenkins build corresponding to commit {} not found!'
.format(pr_number, pr.get_commits().reversed[0].sha))
self._queue_message(message, message_severity='error', pr=pr)
return -1
def _build_in_queue(self, pr, build_number):
"""Check if Jenkins build waits in queue.
This method verifies if CI build is waiting in queue based on console output.
:param pr: Single PR being currently checked
:param build_number: Jenkins build number to retrieve console output from
:type pr: github.PullRequest.PullRequest
:type build_number: int
:return: Returns True if CI build is waiting in queue
:rtype: Bool
"""
pr_number = str(pr.number)
project_name_full = self._ci_job_name + '/PR-' + pr_number
# Retrieve console output
try:
console_output = self._jenkins.get_build_console_output(project_name_full, build_number)
except NotFoundException:
return False
# Check if build is waiting in queue (and not already running on an executor)
if 'Waiting for next available executor on' in console_output \
and 'Running on' not in console_output:
log.info('CI for PR %s: WAITING IN QUEUE', pr_number)
return True
else:
return False
def _interpret_status(self, status, pr):
"""
Verify GitHub status passed to the method.
This method verifies last commit status for given PR, calling appropriate methods
to further validate the status.
:param status: GitHub commit status
:param pr: Single PR being currently checked
:type status: github.CommitStatus.CommitStatus
:type pr: github.PullRequest.PullRequest
"""
try:
# Retrieve build number for Jenkins build related to this PR
build_number = self._retrieve_build_number(status.target_url)
# CI build finished - verify if expected output is present
finished_statuses = ['Build finished', 'This commit cannot be built', 'This commit looks good']
pending_statuses = ['This commit is being built', 'Testing in progress',
'This commit is scheduled to be built']
if any(phrase in status.description for phrase in finished_statuses):
self._check_finished(pr, build_number)
# CI build in progress - verify timeouts for build queue and duration
elif any(phrase in status.description for phrase in pending_statuses):
self._check_in_progress(pr, build_number)
else:
message = 'ONNX CI job for PR# {}: unrecognized status: {}'.format(pr.number, status.description)
self._queue_message(message, message_severity='error', pr=pr)
except Exception:
# Log Watchdog internal error in case any status can't be properly verified
message = 'Failed to verify status "{}" for PR# {}'.format(status.description, pr.number)
log.exception(message)
self._queue_message(message, message_severity='internal', pr=pr)
def _retrieve_build_number(self, url):
"""Retrieve Jenkins CI job build number from URL address coming from GitHub commit status.
:param url: URL address from GitHub commit status
:type url: String
:return: Returns build number
:rtype: int
"""
# Retrieve the build number from url string
match_obj = re.search('(?:/PR-[0-9]+/)([0-9]+)', url)
try:
number = int(match_obj.group(1))
return number
except Exception:
log.exception('Failed to retrieve build number from url link: %s', url)
raise
def _queue_message(self, message, message_severity='info', pr=None):
"""Add a message to message queue in communicator object.
The queued message is constructed based on message string passed as
a method argument and message header. Message header is mapped to message severity
also passed as an argument.
:param message: Message content
:param message_severity: Message severity level
:type message: String
:type message_severity: String
"""
log.info(message)
internal = False
if 'internal' in message_severity:
message_header = INTERNAL_ERROR_MESSAGE_HEADER
internal = True
elif 'error' in message_severity:
message_header = ERROR_MESSAGE_HEADER
elif 'warning' in message_severity:
message_header = WARNING_MESSAGE_HEADER
else:
message_header = INFO_MESSAGE_HEADER
# If the message is related to a PR, attach its URL
if pr:
message = message + '\n' + pr.html_url
send = message_header + '\n' + message
if self._ms_teams_enabled:
self._msteams_hook.queue_message(send)
def _check_finished(self, pr, build_number):
"""Verify if finished build output contains expected string for either fail or success.
:param pr: Single PR being currently checked
:param build_number: Jenkins CI job build number
:type pr: github.PullRequest.PullRequest
:type build_number: int
"""
pr_number = str(pr.number)
log.info('CI for PR %s: FINISHED', pr_number)
# Check if FINISH was valid FAIL / SUCCESS
project_name_full = self._ci_job_name + '/PR-' + pr_number
build_output = self._jenkins.get_build_console_output(project_name_full, build_number)
if _CI_BUILD_FAIL_MESSAGE not in build_output \
and _CI_BUILD_SUCCESS_MESSAGE not in build_output:
message = ('ONNX CI job for PR #{}: finished but no tests success or fail '
'confirmation is present in console output!'.format(pr_number))
self._queue_message(message, message_severity='error', pr=pr)
def _send_message(self, quiet=False):
"""Send messages queued in MS Teams objects to designated channel.
Queued messages are being sent as a single communication.
:param quiet: Flag for disabling sending report through communicator
:type quiet: Boolean
"""
if any(messages for messages in self._msteams_hook.messages):
try:
watchdog_build = self._jenkins.get_job_info(self._watchdog_job_name)['lastBuild']
watchdog_build_number = watchdog_build['number']
watchdog_build_link = watchdog_build['url']
except Exception:
watchdog_build_number = 'UNKNOWN'
watchdog_build_link = self._jenkins.jenkins_server
send = self._watchdog_job_name + '- build ' + str(
watchdog_build_number) + ' - ' + watchdog_build_link
if self._ms_teams_enabled:
self._msteams_hook.send_message(send, quiet=quiet)
else:
log.info('Nothing to report.')
def _check_in_progress(self, pr, build_number):
"""Check if CI build succesfully started.
Checks if build started within designated time threshold, and job is
currently running - it didn't cross the time threshold.
:param pr: Single PR being currently checked
:param build_number: Jenkins CI job build number
:type pr: github.PullRequest.PullRequest
:type build_number: int
"""
pr_number = str(pr.number)
log.info('CI for PR %s: TESTING IN PROGRESS', pr_number)
project_name_full = self._ci_job_name + '/PR-' + pr_number
build_info = self._jenkins.get_build_info(project_name_full, build_number)
build_datetime = datetime.datetime.fromtimestamp(build_info['timestamp'] / 1000.0)
build_delta = self._now_time - build_datetime
log.info('Build %s: IN PROGRESS, started %s ago', str(build_number),
str(build_delta))
# If build still waiting in queue
if build_delta > _CI_START_THRESHOLD and self._build_in_queue(pr, build_number):
message = ('ONNX CI job build #{}, for PR #{} waiting in queue after {} '
'minutes'.format(build_number, pr_number, str(build_delta.seconds / 60)))
self._queue_message(message, message_severity='warning', pr=pr)
elif build_delta > _BUILD_DURATION_THRESHOLD:
# CI job takes too long, possibly froze - communicate failure
message = ('ONNX CI job build #{}, for PR #{} started, '
'but did not finish in the designated time of {} '
'minutes!'.format(build_number, pr_number,
str(_BUILD_DURATION_THRESHOLD.seconds / 60)))
self._queue_message(message, message_severity='error', pr=pr)
def _update_config(self):
"""Update Watchdog config file with PRs checked in current Watchdog run, remove old entries.
:param current_prs: List of PR numbers checked during current Watchdog run
:type current_prs: list of ints
"""
# Cleanup config of old reports
log.info('Writing to config file at: {}'.format(self._config_path))
new_config = {_PR_REPORTS_CONFIG_KEY: self._current_prs}
file = open(self._config_path, 'w+')
json.dump(new_config, file)
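As a reading aid, this is roughly what the JSON state file exchanged by `_read_config_file()` and `_update_config()` looks like; the PR numbers and timestamps are invented for illustration.

```python
import json

# 'pr_reports' maps a PR number to the two timestamps recorded by
# _get_pr_timestamps(); identical timestamps on the next run mean
# "no update since last check", so the PR is skipped.
example_config = {
    'pr_reports': {
        '1234': {'pr_timestamp': 1600000000.0,       # pr.updated_at (epoch)
                 'status_timestamp': 1600000300.0},  # last Jenkins status
        '1235': {'pr_timestamp': 1600000500.0,
                 'status_timestamp': None},          # no Jenkins status yet
    }
}
print(json.dumps(example_config, indent=2))
```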


@@ -1,17 +0,0 @@
# See help here: https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/enabling-and-disabling-version-updates
version: 2
updates:
# Enable version updates for nGraph Python API
- package-ecosystem: pip
directory: "/ngraph/python"
schedule:
interval: weekly
day: monday
time: "13:00"
open-pull-requests-limit: 10
reviewers:
- postrational
labels:
- "category: dependencies"


@@ -1,6 +0,0 @@
### Details:
- *item1*
- *...*
### Tickets:
- *ticket-id*


@@ -1,44 +0,0 @@
name: Documentation
on: [push, pull_request]
jobs:
Build_Doc:
runs-on: ubuntu-20.04
steps:
- name: Clone OpenVINO
uses: actions/checkout@v2
with:
submodules: recursive
lfs: true
- name: Install dependencies
run: |
sudo apt --assume-yes install libusb-1.0-0-dev graphviz texlive
python3 -m pip install lxml
# install doxygen
mkdir doxygen
cd doxygen
git clone https://github.com/doxygen/doxygen.git
cd doxygen
git checkout Release_1_9_1
mkdir build
cd build
cmake ..
cmake --build . -j`nproc`
sudo make install
- name: CMake doc
run: |
mkdir build
cd build
cmake -DENABLE_DOCS=ON ..
- name: Build doc
run: cmake --build . --target openvino_docs
working-directory: build
- name: 'Upload doc'
uses: actions/upload-artifact@v2
with:
name: openvino_doc
path: build/docs/html/


@@ -38,28 +38,30 @@ jobs:
with:
name: ngraph_code_style_diff
path: ngraph_code_style_diff.patch
ShellCheck:
Java:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- uses: actions/setup-java@v1
with:
submodules: recursive
- name: Install ShellCheck
run: sudo apt --assume-yes install shellcheck
java-version: '11'
- name: Install dependencies
run: |
sudo apt --assume-yes install libusb-1.0-0-dev
python3 -m pip install -r ./inference-engine/ie_bridges/python/requirements.txt
wget -nc https://github.com/google/google-java-format/releases/download/google-java-format-1.9/google-java-format-1.9-all-deps.jar
- name: CMake
- name: Check code style
run: |
mkdir build
cd build
cmake ..
java -jar google-java-format-1.9-all-deps.jar --set-exit-if-changed -a -i $(find . -type f -name "*.java")
- name: ShellCheck
run: make ie_shellcheck
working-directory: build
- name: Create code style diff
if: failure()
run: |
git diff >java_code_style_diff.patch
- uses: actions/upload-artifact@v2
if: failure()
with:
name: java_code_style_diff
path: java_code_style_diff.patch


@@ -1,17 +0,0 @@
name: Files Size
on: [push, pull_request]
jobs:
Check_Files_Size:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: git ls-tree
run: |
git ls-tree -r -t -l --full-name HEAD | sort -n -r -k 4
- name: git lfs ls-files
run: |
git lfs ls-files --size


@@ -12,9 +12,6 @@ jobs:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
with:
submodules: recursive
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
@@ -40,20 +37,12 @@ jobs:
# MO requirements
pip install -r requirements.txt
pip install -r requirements_dev.txt
# requirements for CMake
sudo apt --assume-yes install libusb-1.0-0-dev
working-directory: model-optimizer
- name: Pylint
run: pylint -d C,R,W mo/ mo.py extensions/
working-directory: model-optimizer
- name: CMake
run: |
mkdir build
cd build
cmake ..
- name: UT
run: |
export PYTHONPATH=$PYTHONPATH:`pwd`
@@ -62,42 +51,3 @@ jobs:
mkdir ../mo-ut-logs
python3 -m xmlrunner discover -p *_test.py --output=../mo-ut-logs
working-directory: model-optimizer
build_wheel:
name: Build Python wheel
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Install dependencies
run: |
python3 -m pip install --upgrade pip
python3 -m pip install wheel setuptools
python3 -m pip install tensorflow==2.3.0
- name: Build
run: |
python3 setup.py sdist bdist_wheel
working-directory: model-optimizer
- name: Test package content
run: |
echo "src = open('openvino_mo.egg-info/SOURCES.txt', 'rt').read().split()" | tee -a test_wheel.py
echo "ref = open('automation/package_BOM.txt', 'rt').read().split()" | tee -a test_wheel.py
echo "for name in ref:" | tee -a test_wheel.py
echo " if name.endswith('.py'):" | tee -a test_wheel.py
echo " assert name in src or './' + name in src, name + ' file missed'" | tee -a test_wheel.py
python3 test_wheel.py
working-directory: model-optimizer
- name: Test conversion
run: |
wget -q http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224.tgz
tar -xf mobilenet_v1_1.0_224.tgz
python3 -m pip install model-optimizer/dist/*.whl
python3 -c "import sys, subprocess, mo_tf; subprocess.run([sys.executable, mo_tf.__file__, '--input_model', 'mobilenet_v1_1.0_224_frozen.pb', '--input_shape', '[1,224,224,3]'], check=True)"
- uses: actions/upload-artifact@v2
with:
name: mo_wheel
path: "model-optimizer/dist/*.whl"

.gitmodules

@@ -13,8 +13,4 @@
[submodule "inference-engine/samples/thirdparty/gflags"]
path = inference-engine/samples/thirdparty/gflags
url = https://github.com/gflags/gflags.git
ignore = dirty
[submodule "thirdparty/xbyak"]
path = thirdparty/xbyak
url = https://github.com/herumi/xbyak.git
ignore = dirty
ignore = dirty


@@ -1,24 +1,31 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required(VERSION 3.13)
cmake_policy(SET CMP0054 NEW)
# TODO: for make install / package we need to use the 3.13.3 version because
# it allows installing targets created outside of the current project.
# See https://blog.kitware.com/cmake-3-13-0-available-for-download/
cmake_minimum_required(VERSION 3.13 FATAL_ERROR)
project(OpenVINO)
set(OpenVINO_MAIN_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR})
set(IE_MAIN_SOURCE_DIR ${OpenVINO_MAIN_SOURCE_DIR}/inference-engine)
find_package(IEDevScripts REQUIRED
PATHS "${OpenVINO_MAIN_SOURCE_DIR}/cmake/developer_package"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
list(APPEND CMAKE_MODULE_PATH "${OpenVINO_MAIN_SOURCE_DIR}/cmake")
include(CTest)
include(cmake/features.cmake)
include(features)
# These options are shared with 3rdparty plugins by means of developer package
include(cmake/dependencies.cmake)
# include developer package
include(developer_package)
# These options are shared with 3rdparty plugins
# by means of developer package
include(check_features)
include(dependencies)
# resolving dependencies for the project
message (STATUS "PROJECT ............................... " ${PROJECT_NAME})
@@ -30,15 +37,8 @@ message (STATUS "CMAKE_C_COMPILER_ID ................... " ${CMAKE_C_COMPILER_ID
message (STATUS "CMAKE_BUILD_TYPE ...................... " ${CMAKE_BUILD_TYPE})
# remove file with exported developer targets to force its regeneration
file(REMOVE "${CMAKE_BINARY_DIR}/inference_engine_targets.cmake")
foreach(component IN LISTS openvino_export_components)
file(REMOVE "${CMAKE_BINARY_DIR}/${component}_dev_targets.cmake")
unset(${component} CACHE)
endforeach()
#
# Build
#
file(REMOVE "${CMAKE_BINARY_DIR}/targets_developer.cmake")
file(REMOVE "${CMAKE_BINARY_DIR}/targets.cmake")
function(build_ngraph)
function(ngraph_set option value)
@@ -47,48 +47,36 @@ function(build_ngraph)
endif()
endfunction()
set(NGRAPH_BUILD_DIR ${CMAKE_LIBRARY_OUTPUT_DIRECTORY} CACHE STRING "" FORCE)
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${OpenVINO_MAIN_SOURCE_DIR}/ngraph/cmake/Modules/")
if (ENABLE_SANITIZER)
ngraph_set(NGRAPH_ADDRESS_SANITIZER ON)
ngraph_set(NGRAPH_ADDRESS_SANITIZER TRUE)
else ()
ngraph_set(NGRAPH_ADDRESS_SANITIZER OFF)
ngraph_set(NGRAPH_ADDRESS_SANITIZER FALSE)
endif ()
ngraph_set(NGRAPH_PYTHON_BUILD_ENABLE OFF)
ngraph_set(NGRAPH_PYTHON_BUILD_ENABLE FALSE)
if(ENABLE_TESTS AND NOT ANDROID)
ngraph_set(NGRAPH_UNIT_TEST_ENABLE ON)
ngraph_set(NGRAPH_UNIT_TEST_ENABLE TRUE)
else()
ngraph_set(NGRAPH_UNIT_TEST_ENABLE OFF)
ngraph_set(NGRAPH_UNIT_TEST_ENABLE FALSE)
endif()
if(NOT (ANDROID OR WINDOWS_STORE OR (MSVC AND (ARM OR AARCH64)) ))
ngraph_set(NGRAPH_ONNX_IMPORT_ENABLE ON)
if(NOT ANDROID)
ngraph_set(NGRAPH_ONNX_IMPORT_ENABLE TRUE)
else()
ngraph_set(NGRAPH_ONNX_IMPORT_ENABLE OFF)
endif()
ngraph_set(NGRAPH_INTERPRETER_ENABLE ON)
if(TREAT_WARNING_AS_ERROR)
ngraph_set(NGRAPH_WARNINGS_AS_ERRORS ON)
else()
ngraph_set(NGRAPH_WARNINGS_AS_ERRORS OFF)
endif()
if(ENABLE_SANITIZER)
ngraph_set(NGRAPH_ADDRESS_SANITIZER_ENABLE ON)
else()
ngraph_set(NGRAPH_ADDRESS_SANITIZER_ENABLE OFF)
endif()
if(ENABLE_THREAD_SANITIZER)
ngraph_set(NGRAPH_THREAD_SANITIZER_ENABLE ON)
else()
ngraph_set(NGRAPH_THREAD_SANITIZER_ENABLE OFF)
ngraph_set(NGRAPH_ONNX_IMPORT_ENABLE FALSE)
endif()
ngraph_set(NGRAPH_INTERPRETER_ENABLE TRUE)
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
ie_add_compiler_flags(-Wno-error=uninitialized -Wno-error=literal-conversion)
elseif(UNIX)
ie_add_compiler_flags(-Wno-error=maybe-uninitialized -Wno-error=return-type)
ie_add_compiler_flags(-Wno-error=maybe-uninitialized -Wno-error=return-type -fPIC)
endif()
if(ANDROID)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=defaulted-function-deleted -Wno-error=unused-command-line-argument")
endif()
# WA for GCC 7.0
@@ -97,80 +85,61 @@ function(build_ngraph)
elseif(WIN32)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4308 /wd4146 /wd4703 /wd4244 /wd4819")
endif()
# Preserve the original flags for further use
set(CMAKE_ORIGINAL_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE}")
set(CMAKE_ORIGINAL_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
set(CMAKE_ORIGINAL_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE}")
set(CMAKE_ORIGINAL_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE}")
set(CMAKE_ORIGINAL_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE}")
if(ENABLE_LTO)
set(CMAKE_INTERPROCEDURAL_OPTIMIZATION_RELEASE ON)
ie_enable_lto()
endif()
ie_cpack_add_component(ngraph)
set(SDL_cmake_included ON)
set(NGRAPH_COMPONENT_PREFIX "deployment_tools/ngraph/")
# set(NGRAPH_COMPONENT_PREFIX "deployment_tools/ngraph/")
add_subdirectory(ngraph)
set(NGRAPH_LIBRARIES ngraph PARENT_SCOPE)
set(NGRAPH_REF_LIBRARIES ngraph_reference PARENT_SCOPE)
endfunction()
file(REMOVE "${CMAKE_BINARY_DIR}/openvino_targets_developer.cmake")
unset(OpenVINODeveloperPackageTargets CACHE)
function(openvino_developer_export_targets)
cmake_parse_arguments(EXPORT "" "COMPONENT" "TARGETS" ${ARGN})
if(EXPORT_UNPARSED_ARGUMENTS)
message(FATAL_ERROR "openvino_developer_export_targets has unparsed arguments: ${EXPORT_UNPARSED_ARGUMENTS}")
endif()
set(${EXPORT_COMPONENT} "${${EXPORT_COMPONENT}};${EXPORT_TARGETS}")
set(OpenVINODeveloperPackageTargets "${OpenVINODeveloperPackageTargets};${ARGV}")
# to allow exporting of aliased targets with the original names
foreach(target_name IN LISTS ${EXPORT_COMPONENT})
foreach(target_name ${OpenVINODeveloperPackageTargets})
if(TARGET "${target_name}")
get_target_property(original_name ${target_name} ALIASED_TARGET)
if(TARGET "${original_name}")
message(STATUS "The name ${target_name} is an ALIAS for ${original_name}. "
"It will be exported to the InferenceEngineDeveloperPackage with the original name.")
list(REMOVE_ITEM ${EXPORT_COMPONENT} ${target_name})
list(APPEND ${EXPORT_COMPONENT} ${original_name})
list(REMOVE_ITEM OpenVINODeveloperPackageTargets ${target_name})
list(APPEND OpenVINODeveloperPackageTargets ${original_name})
endif()
endif()
endforeach()
list(REMOVE_DUPLICATES ${EXPORT_COMPONENT})
set(${EXPORT_COMPONENT} "${${EXPORT_COMPONENT}}" CACHE INTERNAL
"A list of OpenVINO ${EXPORT_COMPONENT} exported targets" FORCE)
list(APPEND openvino_export_components ${EXPORT_COMPONENT})
list(REMOVE_DUPLICATES openvino_export_components)
set(openvino_export_components "${openvino_export_components}" CACHE INTERNAL
"A list of OpenVINO exported components" FORCE)
list(REMOVE_DUPLICATES OpenVINODeveloperPackageTargets)
set(OpenVINODeveloperPackageTargets "${OpenVINODeveloperPackageTargets}" CACHE INTERNAL
"Paths to extra Inference Engine plugins" FORCE)
endfunction()
add_subdirectory(thirdparty)
add_subdirectory(openvino)
build_ngraph()
add_subdirectory(inference-engine)
add_subdirectory(model-optimizer)
add_subdirectory(docs)
#
# Shellcheck
#
ie_shellcheck_process(DIRECTORY "${OpenVINO_MAIN_SOURCE_DIR}"
SKIP "${OpenVINO_MAIN_SOURCE_DIR}/bin"
"${OpenVINO_MAIN_SOURCE_DIR}/build"
"${OpenVINO_MAIN_SOURCE_DIR}/thirdparty"
"${IE_MAIN_SOURCE_DIR}/tests/ie_test_utils/common_test_utils/gtest"
"${IE_MAIN_SOURCE_DIR}/samples/thirdparty"
"${IE_MAIN_SOURCE_DIR}/thirdparty"
"${IE_MAIN_SOURCE_DIR}/temp"
# TODO fix and enable back:
"${OpenVINO_MAIN_SOURCE_DIR}/scripts/install_dependencies"
"${OpenVINO_MAIN_SOURCE_DIR}/scripts/demo"
"${OpenVINO_MAIN_SOURCE_DIR}/ngraph"
"${IE_MAIN_SOURCE_DIR}/scripts")
#
# cpack
#
# install setupvars

CONTRIBUTING.md

@@ -0,0 +1,18 @@
# How to Contribute
We welcome community contributions to the OpenVINO™ repository.
If you have an idea how to improve the product, please share it
with us by taking the following steps:
* Make sure you can build the product and run all tests and samples with your patch
* In case of a larger feature, provide relevant unit tests and one or more samples
* Submit a pull request at https://github.com/openvinotoolkit/openvino/pulls
## OpenVINO™ Coding Style Guide
We basically use the Google style (https://google.github.io/styleguide/cppguide.html) with some exceptions:
* 4 spaces instead of 2 spaces for indentation
* Line length is limited to 160 symbols
* Exceptions are allowed
* `using namespace` is allowed in cpp files and prohibited in headers
* An underscore prefix for members of classes/structures
* thisStyleForFunctions()
* theSameStyleForVariables

CONTRIBUTING_DOCS.md

@@ -0,0 +1,58 @@
# Contribute to Documentation
If you want to contribute to the project documentation and make it better, your help is very welcome.
This guide puts together the guidelines to help you figure out how you can offer feedback and contribute to the documentation.
## Contribute in Multiple ways
There are multiple ways to help improve our documentation:
* [Log an issue](https://jira.devtools.intel.com/projects/CVS/issues): Enter an issue for the OpenVINO™ documentation component for minor issues such as typos.
* Make a suggestion: Send your documentation suggestion to the mailing list.
* Contribute via GitHub: Submit pull requests in the [GitHub](https://github.com/openvinotoolkit/openvino/tree/master/docs) documentation repository.
## Contribute via GitHub
Use the following steps to contribute to the OpenVINO™ Toolkit documentation:
### Use Documentation Guidelines
The documentation for our project is written using Markdown. Use our [guidelines](./docs/documentation_guidelines.md) and best practices to write consistent, readable documentation:
* **[Authoring Guidelines](./docs/documentation_guidelines.md#authoring-guidelines)**
* **[Structure Guidelines](./docs/documentation_guidelines.md#structure-guidelines)**
* **[Formatting Guidelines](./docs/documentation_guidelines.md#formatting-guidelines)**
* **[Graphics Guidelines](./docs/documentation_guidelines.md#graphics-guidelines)**
### Add New Document to the Documentation
> **NOTE**: Please check if that information can be added to existing documents instead of creating a new one.
1. Fork the [OpenVINO™ Toolkit](https://github.com/openvinotoolkit/openvino) repository.
2. Create a new branch.
3. Create a new markdown file in an appropriate folder.
> **REQUIRED**: The document title must contain a document label in the form `{#openvino_docs_<name>}`. For example: `Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ {#openvino_docs_MO_DG_IR_and_opsets}`.
4. Add your file to the documentation structure. Open the documentation structure file [docs/doxygen/ie_docs.xml](./docs/doxygen/ie_docs.xml) and add your file path to the appropriate section.
5. Commit changes to your branch.
6. Create a pull request.
7. Once the pull request is created, automatic checks are started. All checks must pass to continue.
8. Discuss, review, and update your contributions.
9. Get merged once the maintainer approves.
### Edit Existing Document
1. Fork the [OpenVINO™ Toolkit](https://github.com/openvinotoolkit/openvino) repository.
2. Create a new branch.
3. Edit the documentation markdown file and commit changes to the branch.
4. Create a pull request.
5. Once the pull request is created, automatic checks are started. All checks must pass to continue.
6. Discuss, review, and update your contributions.
7. Get merged once the maintainer approves.
### Delete Document from the Documentation
1. Fork the [OpenVINO™ Toolkit](https://github.com/openvinotoolkit/openvino) repository.
2. Create a new branch.
3. Remove the documentation file.
4. Remove your file from the documentation structure. Open the documentation structure file [docs/doxygen/ie_docs.xml](./docs/doxygen/ie_docs.xml) and remove all occurrences of your file path.
5. Remove all references to that file from other documents or replace with links to alternatives topics (if any).
6. Commit changes to your branch.
7. Create a pull request.
8. Once the pull request is created, automatic checks are started. All checks must pass to continue.
9. Discuss, review, and update your contributions.
10. Get merged once the maintainer approves.

Jenkinsfile

@@ -1,18 +1,9 @@
#!groovy
properties([
parameters([
booleanParam(defaultValue: false,
description: 'Cancel the rest of parallel stages if one of them fails and return status immediately',
name: 'failFast'),
booleanParam(defaultValue: true,
description: 'Whether to propagate commit status to GitHub',
name: 'propagateStatus'),
string(defaultValue: '',
description: 'Pipeline shared library version (branch/tag/commit). Determined automatically if empty',
name: 'library_version')
description: 'Cancel the rest of parallel stages if one of them fails and return status immediately',
name: 'failFast')
])
])
loadOpenVinoLibrary {
entrypoint(this)
}
dldtPipelineEntrypoint(this)


@@ -1,21 +1,19 @@
# OpenVINO™ Toolkit
[![Stable release](https://img.shields.io/badge/version-2021.3-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2021.3)
# [OpenVINO™ Toolkit](https://01.org/openvinotoolkit) - Deep Learning Deployment Toolkit repository
[![Stable release](https://img.shields.io/badge/version-2021.1-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2021.1)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
This toolkit allows developers to deploy pre-trained deep learning models
through a high-level C++ Inference Engine API integrated with application logic.
This open source version includes several components: namely [Model Optimizer], [nGraph] and
[Inference Engine], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
This open source version includes two components: namely [Model Optimizer] and
[Inference Engine], as well as CPU, GPU and heterogeneous plugins to accelerate
deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from the [Open Model Zoo], along with 100+ open
source and public models in popular formats such as Caffe\*, TensorFlow\*,
MXNet\* and ONNX\*.
## Repository components:
* [Inference Engine]
* [nGraph]
* [Model Optimizer]
## License
@@ -23,19 +21,23 @@ Deep Learning Deployment Toolkit is licensed under [Apache License Version 2.0](
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.
## Resources:
* Docs: https://docs.openvinotoolkit.org/
* Wiki: https://github.com/openvinotoolkit/openvino/wiki
* Issue tracking: https://github.com/openvinotoolkit/openvino/issues
* Storage: https://storage.openvinotoolkit.org/
* Additional OpenVINO™ modules: https://github.com/openvinotoolkit/openvino_contrib
* [Intel® Distribution of OpenVINO™ toolkit Product Page](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
* [Intel® Distribution of OpenVINO™ toolkit Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
## Documentation
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [OpenVINO™ Inference Engine Build Instructions](build-instruction.md)
* [Get Started with Deep Learning Deployment Toolkit on Linux](get-started-linux.md)\*
* [Introduction to Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
## How to Contribute
See [CONTRIBUTING](./CONTRIBUTING.md) for contribution to the code.
See [CONTRIBUTING_DOCS](./CONTRIBUTING_DOCS.md) for contribution to the documentation.
Thank you!
## Support
Please report questions, issues and suggestions using:
* The [`openvino`](https://stackoverflow.com/questions/tagged/openvino) tag on StackOverflow\*
* The `openvino` [tag on StackOverflow]\*
* [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
* [Forum](https://software.intel.com/en-us/forums/computer-vision)
@@ -46,4 +48,3 @@ Please report questions, issues and suggestions using:
[Inference Engine]:https://software.intel.com/en-us/articles/OpenVINO-InferEngine
[Model Optimizer]:https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
[tag on StackOverflow]:https://stackoverflow.com/search?q=%23openvino
[nGraph]:https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_DevGuide.html


@@ -1,12 +0,0 @@
# Security Policy
## Report a Vulnerability
Please report security issues or vulnerabilities to the [Intel® Security Center].
For more information on how Intel® works to resolve security issues, see
[Vulnerability Handling Guidelines].
[Intel® Security Center]:https://www.intel.com/security
[Vulnerability Handling Guidelines]:https://www.intel.com/content/www/us/en/security-center/vulnerability-handling-guidelines.html

build-instruction.md

@@ -0,0 +1,691 @@
# Build OpenVINO™ Inference Engine
## Contents
- [Introduction](#introduction)
- [Build on Linux\* Systems](#build-on-linux-systems)
- [Software Requirements](#software-requirements)
- [Build Steps](#build-steps)
- [Additional Build Options](#additional-build-options)
- [Build for Raspbian* Stretch OS](#build-for-raspbian-stretch-os)
- [Hardware Requirements](#hardware-requirements)
- [Native Compilation](#native-compilation)
- [Cross Compilation Using Docker\*](#cross-compilation-using-docker)
- [Additional Build Options](#additional-build-options-1)
- [Build on Windows* Systems](#build-on-windows-systems)
- [Software Requirements](#software-requirements-1)
- [Build Steps](#build-steps-1)
- [Additional Build Options](#additional-build-options-2)
- [Building Inference Engine with Ninja* Build System](#building-inference-engine-with-ninja-build-system)
- [Build on macOS\* Systems](#build-on-macos-systems)
- [Software Requirements](#software-requirements-2)
- [Build Steps](#build-steps-2)
- [Additional Build Options](#additional-build-options-3)
- [Build on Android\* Systems](#build-on-android-systems)
- [Software Requirements](#software-requirements-3)
- [Build Steps](#build-steps-3)
- [Use Custom OpenCV Builds for Inference Engine](#use-custom-opencv-builds-for-inference-engine)
- [Add Inference Engine to Your Project](#add-inference-engine-to-your-project)
- [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
- [For Linux, Raspbian Stretch* OS](#for-linux-raspbian-stretch-os)
- [Next Steps](#next-steps)
- [Additional Resources](#additional-resources)
## Introduction
The Inference Engine can infer models in different formats with various input
and output formats.
The open source version of Inference Engine includes the following plugins:
| PLUGIN | DEVICE TYPES |
| ---------------------| -------------|
| CPU plugin | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
| GPU plugin | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| GNA plugin | Intel® Speech Enabling Developer Kit, Amazon Alexa\* Premium Far-Field Developer Kit, Intel® Pentium® Silver processor J5005, Intel® Celeron® processor J4005, Intel® Core™ i3-8121U processor |
| MYRIAD plugin | Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2, Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
| Heterogeneous plugin | The Heterogeneous plugin enables inference of one network across several Intel® devices. |
## Build on Linux\* Systems
The software was validated on:
- Ubuntu\* 18.04 (64-bit) with default GCC\* 7.5.0
- Ubuntu\* 20.04 (64-bit) with default GCC\* 9.3.0
- CentOS\* 7.6 (64-bit) with default GCC\* 4.8.5
### Software Requirements
- [CMake]\* 3.13 or higher
- GCC\* 4.8 or higher to build the Inference Engine
- Python 3.6 or higher for Inference Engine Python API wrapper
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441].
> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps
1. Clone submodules:
```sh
cd openvino
git submodule update --init --recursive
```
2. Install build dependencies using the `install_build_dependencies.sh` script in the
project root folder.
```sh
chmod +x install_build_dependencies.sh
```
```sh
./install_build_dependencies.sh
```
3. By default, the build enables the Inference Engine GPU plugin to infer models
on your Intel® Processor Graphics. This requires you to
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]
before running the build. If you don't want to use the GPU plugin, use the
`-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
Intel® Graphics Compute Runtime for OpenCL™ Driver.
4. Create a build folder:
```sh
mkdir build && cd build
```
5. Inference Engine uses a CMake-based build system. In the created `build`
directory, run `cmake` to fetch project dependencies and create Unix
makefiles, then run `make` to build the project:
```sh
cmake -DCMAKE_BUILD_TYPE=Release ..
make --jobs=$(nproc --all)
```
### Additional Build Options
You can use the following additional build options:
- The default build uses an internal JIT GEMM implementation.
- To switch to an OpenBLAS\* implementation, use the `-DGEMM=OPENBLAS` option together with the
`BLAS_INCLUDE_DIRS` and `BLAS_LIBRARIES` CMake options to specify the path to the
OpenBLAS headers and library. For example, use the following options on CentOS\*:
`-DGEMM=OPENBLAS -DBLAS_INCLUDE_DIRS=/usr/include/openblas -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0`.
- To switch to the optimized MKL-ML\* GEMM implementation, use `-DGEMM=MKL`
and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked
MKL-ML with the `include` and `lib` folders. MKL-ML\* package can be downloaded
from the Intel® [MKL-DNN repository].
- Threading Building Blocks (TBB) is used by default. To build the Inference
Engine with OpenMP\* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by
the CMake-based script. If you want to use the automatically downloaded
packages but you already have installed TBB or OpenCV packages configured in
your environment, you may need to unset the `TBBROOT` and `OpenCV_DIR`
environment variables before running the `cmake` command; otherwise the packages
will not be downloaded, and the build may fail if incompatible versions are
installed.
- If the CMake-based build script cannot find and download the OpenCV package
that is supported on your platform, or if you want to use a custom build of
the OpenCV library, refer to the
[Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
section for details.
- To build the Python API wrapper:
1. Install all additional packages listed in the
`/inference-engine/ie_bridges/python/requirements.txt` file:
```sh
pip install -r requirements.txt
```
2. Use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following
options:
```sh
-DPYTHON_EXECUTABLE=`which python3.7` \
-DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.7
```
- To switch the CPU and GPU plugins off/on, use the `cmake` options
`-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
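For example, a single Release configuration might combine several of the options above. The following is a sketch only; adjust the flags to your environment:
```sh
# Illustrative configuration: OpenMP threading, Python API enabled,
# GPU plugin disabled. Run from the `build` directory.
unset TBBROOT OpenCV_DIR    # let the build script download compatible TBB/OpenCV
cmake -DCMAKE_BUILD_TYPE=Release \
      -DTHREADING=OMP \
      -DENABLE_CLDNN=OFF \
      -DENABLE_PYTHON=ON \
      ..
make --jobs=$(nproc --all)
```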
## Build for Raspbian\* Stretch OS
> **NOTE**: Only the MYRIAD plugin is supported.
### Hardware Requirements
* Raspberry Pi\* 2 or 3 with Raspbian\* Stretch OS (32-bit). Check that its CPU supports the ARMv7 instruction set (the `uname -m` command returns `armv7l`).
> **NOTE**: Although the Raspberry Pi\* CPU is ARMv8, the 32-bit OS reports the ARMv7 instruction set. The default `gcc` compiler applies the ARMv6 architecture flag for compatibility with older boards. For more information, run the `gcc -Q --help=target` command and refer to the description of the `-march=` option.
You can compile the Inference Engine for Raspberry Pi\* in one of two ways:
* [Native Compilation](#native-compilation), which is the simplest way, but time-consuming
* [Cross Compilation Using Docker*](#cross-compilation-using-docker), which is the recommended way
### Native Compilation
Native compilation of the Inference Engine is the most straightforward solution. However, it might take at least one hour to complete on Raspberry Pi\* 3.
1. Install dependencies:
```bash
sudo apt-get update
sudo apt-get install -y git cmake libusb-1.0-0-dev
```
2. Go to the cloned `openvino` repository:
```bash
cd openvino
```
3. Initialize submodules:
```bash
git submodule update --init --recursive
```
4. Create a build folder:
```bash
mkdir build && cd build
```
5. Build the Inference Engine:
```bash
cmake -DCMAKE_BUILD_TYPE=Release \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_GNA=OFF .. && make
```
### Cross Compilation Using Docker*
This compilation was tested on the following configuration:
* Host: Ubuntu\* 18.04 (64-bit, Intel® Core™ i7-6700K CPU @ 4.00GHz × 8)
* Target: Raspbian\* Stretch (32-bit, ARMv7, Raspberry Pi\* 3)
1. Install Docker\*:
```bash
sudo apt-get install -y docker.io
```
2. Add the current user to the `docker` group:
```bash
sudo usermod -a -G docker $USER
```
Log out and log in for this to take effect.
3. Create a directory named `ie_cross_armhf` and add a text file named `Dockerfile`
with the following content:
```docker
FROM debian:stretch
USER root
RUN dpkg --add-architecture armhf && \
apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
crossbuild-essential-armhf \
git \
wget \
libusb-1.0-0-dev:armhf \
libgtk-3-dev:armhf \
libavcodec-dev:armhf \
libavformat-dev:armhf \
libswscale-dev:armhf \
libgstreamer1.0-dev:armhf \
libgstreamer-plugins-base1.0-dev:armhf \
libpython3-dev:armhf \
python3-pip \
python-minimal \
python-argparse
RUN wget https://www.cmake.org/files/v3.14/cmake-3.14.3.tar.gz && \
tar xf cmake-3.14.3.tar.gz && \
(cd cmake-3.14.3 && ./bootstrap --parallel=$(nproc --all) && make --jobs=$(nproc --all) && make install) && \
rm -rf cmake-3.14.3 cmake-3.14.3.tar.gz
```
It uses the Debian\* Stretch (Debian 9) OS for compilation because it is the base of Raspbian\* Stretch.
4. Build a Docker\* image:
```bash
docker image build -t ie_cross_armhf ie_cross_armhf
```
5. Run Docker\* container with mounted source code folder from host:
```bash
docker run -it -v /absolute/path/to/openvino:/openvino ie_cross_armhf /bin/bash
```
6. While in the container:
1. Go to the cloned `openvino` repository:
```bash
cd openvino
```
2. Create a build folder:
```bash
mkdir build && cd build
```
3. Build the Inference Engine:
```bash
cmake -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_TOOLCHAIN_FILE="../cmake/arm.toolchain.cmake" \
-DTHREADS_PTHREAD_ARG="-pthread" \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_GNA=OFF .. && make --jobs=$(nproc --all)
```
7. Press **Ctrl+D** to exit from Docker. You can find the resulting binaries
in the `openvino/bin/armv7l/` directory and the OpenCV\*
installation in `openvino/inference-engine/temp`.
>**NOTE**: Native applications that link to the cross-compiled Inference Engine
library require the extra compilation flag `-march=armv7-a`.
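As a rough illustration, a compile line for such an application might look like the sketch below; the include and library paths are assumptions and depend on where the built artifacts actually land in your layout:
```bash
arm-linux-gnueabihf-g++ -march=armv7-a main.cpp \
    -I/openvino/inference-engine/include \
    -L/openvino/bin/armv7l/Release/lib \
    -linference_engine -o sample_app
```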
### Additional Build Options
You can use the following additional build options:
- Required versions of OpenCV packages are downloaded automatically by the
CMake-based script. If you want to use the automatically downloaded packages
but you already have installed OpenCV packages configured in your environment,
you may need to unset the `OpenCV_DIR` environment variable before running
the `cmake` command; otherwise the package will not be downloaded, and the build
may fail if an incompatible version is installed.
- If the CMake-based build script cannot find and download the OpenCV package
that is supported on your platform, or if you want to use a custom build of
the OpenCV library, see: [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
for details.
- To build the Python API wrapper, install the `libpython3-dev:armhf` and `python3-pip`
packages using `apt-get`, install the `numpy` and `cython` Python modules
via `pip3`, and add the following options:
```sh
-DENABLE_PYTHON=ON \
-DPYTHON_EXECUTABLE=/usr/bin/python3.5 \
-DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.5m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.5
```
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build on Windows* Systems
The software was validated on:
- Microsoft\* Windows\* 10 (64-bit) with Visual Studio 2019
### Software Requirements
- [CMake]\* 3.13 or higher
- Microsoft\* Visual Studio 2017 or 2019
- (Optional) Intel® Graphics Driver for Windows* (26.20) [driver package].
- Python 3.6 or higher for Inference Engine Python API wrapper
> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps
1. Clone submodules:
```sh
git submodule update --init --recursive
```
2. By default, the build enables the Inference Engine GPU plugin to infer models
on your Intel® Processor Graphics. This requires you to download and install
the Intel® Graphics Driver for Windows (26.20) [driver package] before
running the build. If you don't want to use the GPU plugin, use the
`-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
Intel® Graphics Driver.
3. Create a build directory:
```sh
mkdir build
```
4. In the `build` directory, run `cmake` to fetch project dependencies and
generate a Visual Studio solution.
For Microsoft\* Visual Studio 2017:
```sh
cmake -G "Visual Studio 15 2017 Win64" -DCMAKE_BUILD_TYPE=Release ..
```
For Microsoft\* Visual Studio 2019:
```sh
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_BUILD_TYPE=Release ..
```
5. Build the generated solution in Visual Studio, or run
`cmake --build . --config Release` to build from the command line.
6. Before running the samples, add paths to the TBB and OpenCV binaries used for
the build to the `%PATH%` environment variable. By default, TBB binaries are
downloaded by the CMake-based script to the `<openvino_repo>/inference-engine/temp/tbb/bin`
folder, and OpenCV binaries to the `<openvino_repo>/inference-engine/temp/opencv_4.5.0/opencv/bin`
folder, as in the example below.
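For example (a sketch; `<openvino_repo>` is the path to your clone):
```sh
set PATH=<openvino_repo>\inference-engine\temp\tbb\bin;<openvino_repo>\inference-engine\temp\opencv_4.5.0\opencv\bin;%PATH%
```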
### Additional Build Options
- Internal JIT GEMM implementation is used by default.
- To switch to the OpenBLAS GEMM implementation, use the `-DGEMM=OPENBLAS` CMake
option and specify the path to OpenBLAS using the `-DBLAS_INCLUDE_DIRS=<OPENBLAS_DIR>\include`
and `-DBLAS_LIBRARIES=<OPENBLAS_DIR>\lib\libopenblas.dll.a` options. Download
a prebuilt OpenBLAS\* package via the [OpenBLAS] link. The mingw64\* runtime
dependencies can be downloaded via the [mingw64\* runtime dependencies] link.
- To switch to the optimized MKL-ML\* GEMM implementation, use the
`-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to
unpacked MKL-ML with the `include` and `lib` folders. MKL-ML\* package can be
downloaded from the Intel&reg; [MKL-DNN repository for Windows].
- Threading Building Blocks (TBB) is used by default. To build the Inference
Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by
the CMake-based script. If you want to use the automatically downloaded
packages but you already have installed TBB or OpenCV packages configured in
your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR`
environment variables before running the `cmake` command; otherwise they won't
be downloaded and the build may fail if incompatible versions were installed.
- If the CMake-based build script cannot find and download the OpenCV package
that is supported on your platform, or if you want to use a custom build of
the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
section for details.
- To switch off/on the CPU and GPU plugins, use the `cmake` options
`-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To
specify an exact Python version, use the following options:
```sh
-DPYTHON_EXECUTABLE="C:\Program Files\Python37\python.exe" ^
-DPYTHON_LIBRARY="C:\Program Files\Python37\libs\python37.lib" ^
-DPYTHON_INCLUDE_DIR="C:\Program Files\Python37\include"
```
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
### Building Inference Engine with Ninja* Build System
```sh
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
set CXX=icl
set CC=icl
:: Clear the TBBROOT value set by ipsxe-comp-vars.bat; the required TBB package will be downloaded by the OpenVINO CMake script
set TBBROOT=
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
```
## Build on macOS* Systems
> **NOTE**: The current version of the OpenVINO™ toolkit for macOS* supports
inference on Intel CPUs only.
The software was validated on:
- macOS\* 10.15, 64-bit
### Software Requirements
- [CMake]\* 3.13 or higher
- Clang\* compiler from Xcode\* 10.1 or higher
- Python\* 3.6 or higher for the Inference Engine Python API wrapper
> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps
1. Clone submodules:
```sh
cd openvino
git submodule update --init --recursive
```
2. Create a build folder:
```sh
mkdir build && cd build
```
3. Inference Engine uses a CMake-based build system. In the created `build`
directory, run `cmake` to fetch project dependencies and create Unix makefiles,
then run `make` to build the project:
```sh
cmake -DCMAKE_BUILD_TYPE=Release ..
make --jobs=$(nproc --all)
```
### Additional Build Options
You can use the following additional build options:
- Internal JIT GEMM implementation is used by default.
- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and
`-DMKLROOT=<path_to_MKL>` CMake options to specify the path to unpacked MKL-ML
with the `include` and `lib` folders. The MKL-ML\* package for macOS can be downloaded
[here](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_mac_2019.0.5.20190502.tgz).
- Threading Building Blocks (TBB) is used by default. To build the Inference
Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by
the CMake-based script. If you want to use the automatically downloaded
packages but you already have installed TBB or OpenCV packages configured in
your environment, you may need to unset the `TBBROOT` and `OpenCV_DIR`
environment variables before running the `cmake` command; otherwise the packages
won't be downloaded, and the build may fail if incompatible versions are installed.
- If the CMake-based build script cannot find and download the OpenCV package
that is supported on your platform, or if you want to use a custom build of
the OpenCV library, refer to the
[Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine)
section for details.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To
specify an exact Python version, use the following options:
- If you installed Python through Homebrew*, set the following flags:
```sh
-DPYTHON_EXECUTABLE=/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/bin/python3.7m \
-DPYTHON_LIBRARY=/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7m.dylib \
-DPYTHON_INCLUDE_DIR=/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/include/python3.7m
```
- If you installed Python in another way, you can use the following commands to find where the `dylib` and `include_dir` are located, respectively:
```sh
find /usr/ -name 'libpython*m.dylib'
find /usr/ -type d -name python3.7m
```
- nGraph-specific compilation options:
`-DNGRAPH_ONNX_IMPORT_ENABLE=ON` enables the building of the nGraph ONNX importer.
`-DNGRAPH_DEBUG_ENABLE=ON` enables additional debug prints.
## Build on Android* Systems
This section describes how to build the Inference Engine for the Android x86 (64-bit) operating system.
### Software Requirements
- [CMake]\* 3.13 or higher
- Android NDK (this guide has been validated with r20 release)
> **NOTE**: Building samples and demos from the Intel® Distribution of OpenVINO™ toolkit package requires CMake\* 3.10 or higher.
### Build Steps
1. Download and unpack Android NDK: https://developer.android.com/ndk/downloads. Let's assume that `~/Downloads` is used as a working folder.
```sh
cd ~/Downloads
wget https://dl.google.com/android/repository/android-ndk-r20-linux-x86_64.zip
unzip android-ndk-r20-linux-x86_64.zip
mv android-ndk-r20 android-ndk
```
2. Clone submodules:
```sh
cd openvino
git submodule update --init --recursive
```
3. Create a build folder:
```sh
mkdir build
```
4. Change the working directory to `build` and run `cmake` to create makefiles, then run `make`:
```sh
cd build
cmake .. \
-DCMAKE_TOOLCHAIN_FILE=~/Downloads/android-ndk/build/cmake/android.toolchain.cmake \
-DANDROID_ABI=x86_64 \
-DANDROID_PLATFORM=21 \
-DANDROID_STL=c++_shared \
-DENABLE_OPENCV=OFF
make --jobs=$(nproc --all)
```
* `ANDROID_ABI` specifies the target architecture (`x86_64`)
* `ANDROID_PLATFORM` specifies the Android API version (21)
* `ANDROID_STL` specifies that the shared C++ runtime is used. Copy `~/Downloads/android-ndk/sources/cxx-stl/llvm-libc++/libs/x86_64/libc++_shared.so` from the Android NDK along with the built binaries, as shown below
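For example, a minimal deployment sketch (the target directory is hypothetical):
```sh
# Bundle the shared C++ runtime next to the built binaries
cp ~/Downloads/android-ndk/sources/cxx-stl/llvm-libc++/libs/x86_64/libc++_shared.so \
   <path_to_deployed_binaries>/
```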
## Use Custom OpenCV Builds for Inference Engine
> **NOTE**: The recommended and tested version of OpenCV is 4.4.0.
Required versions of OpenCV packages are downloaded automatically while the
Inference Engine library is being built. If the build script cannot find and download
an OpenCV package that is supported on your platform, you can use one of the
following options:
* Download the most suitable version from the list of available pre-built
packages at [https://download.01.org/opencv/2020/openvinotoolkit] in the
`<release_version>/inference_engine` directory.
* Use a system-provided OpenCV package (e.g., by running the
`apt install libopencv-dev` command). The following modules must be enabled:
`imgcodecs`, `videoio`, `highgui`.
* Get the OpenCV package using a package manager such as pip, Conda, or Conan. The
package must include the development components (header files and CMake
scripts).
* Build OpenCV from source using the [build instructions](https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html) on the OpenCV site.
After you have the built OpenCV library, perform the following preparation steps
before running the Inference Engine build:
1. Set the `OpenCV_DIR` environment variable to the directory where the
`OpenCVConfig.cmake` file of your custom OpenCV build is located.
2. Disable automatic downloading of the package by passing the `-DENABLE_OPENCV=OFF`
option to the CMake-based build script for the Inference Engine.
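For example, a minimal sketch assuming a custom OpenCV build installed under an illustrative prefix:
```sh
# Path is illustrative: point OpenCV_DIR at the folder containing OpenCVConfig.cmake
export OpenCV_DIR=/opt/custom-opencv/lib/cmake/opencv4
cmake -DENABLE_OPENCV=OFF -DCMAKE_BUILD_TYPE=Release ..
```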
## Add Inference Engine to Your Project
For CMake projects, set the `InferenceEngine_DIR` environment variable:
```sh
export InferenceEngine_DIR=/path/to/openvino/build/
```
Then you can find the Inference Engine with `find_package`:
```cmake
find_package(InferenceEngine)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```
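Putting this together, a minimal `CMakeLists.txt` sketch for a hypothetical `sample_app` project might look like this:
```cmake
cmake_minimum_required(VERSION 3.13)
project(sample_app)

# Relies on the InferenceEngine_DIR environment variable set as shown above
find_package(InferenceEngine REQUIRED)

add_executable(sample_app main.cpp)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(sample_app ${InferenceEngine_LIBRARIES} dl)
```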
## (Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2
> **NOTE**: These steps are only required if you want to perform inference on the
Intel® Movidius™ Neural Compute Stick or the Intel® Neural Compute Stick 2 using
the Inference Engine MYRIAD Plugin. See also [Intel® Neural Compute Stick 2 Get Started].
### For Linux, Raspbian\* Stretch OS
1. Add the current Linux user to the `users` group; you will need to log out and
log in for it to take effect:
```sh
sudo usermod -a -G users "$(whoami)"
```
2. To perform inference on Intel® Movidius™ Neural Compute Stick and Intel®
Neural Compute Stick 2, install the USB rules as follows:
```sh
cat <<EOF > 97-myriad-usbboot.rules
SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
EOF
```
```sh
sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
```
```sh
sudo udevadm control --reload-rules
```
```sh
sudo udevadm trigger
```
```sh
sudo ldconfig
```
```sh
rm 97-myriad-usbboot.rules
```
## Next Steps
Congratulations, you have built the Inference Engine. To get started with the
OpenVINO™ toolkit, proceed to the Get Started guides:
* [Get Started with Deep Learning Deployment Toolkit on Linux*](get-started-linux.md)
## Notice
To enable some additional nGraph features and use your custom nGraph library with
the OpenVINO™ binary package, make sure of the following:
- The nGraph library was built from the same version as the one used in the Inference Engine.
- The nGraph library and the Inference Engine were built with the same compilers;
otherwise you might face application binary interface (ABI) problems.
To prepare your custom nGraph library for distribution, which includes collecting
all headers, copying binaries, and so on, use the `install` CMake target, as sketched below.
This target collects all dependencies, prepares the nGraph package, and copies it
to a separate directory.
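For example, a minimal sketch of staging the package, run from your nGraph build directory (the install prefix is illustrative):
```sh
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/tmp/ngraph_dist ..
make --jobs=$(nproc --all) install
```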
## Additional Resources
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
---
\* Other names and brands may be claimed as the property of others.
[Intel® Distribution of OpenVINO™]:https://software.intel.com/en-us/openvino-toolkit
[CMake]:https://cmake.org/download/
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]:https://github.com/intel/compute-runtime/releases/tag/19.41.14441
[MKL-DNN repository]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz
[MKL-DNN repository for Windows]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip
[OpenBLAS]:https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download
[mingw64\* runtime dependencies]:https://sourceforge.net/projects/openblas/files/v0.2.14/mingw64_dll.zip/download
[https://download.01.org/opencv/2020/openvinotoolkit]:https://download.01.org/opencv/2020/openvinotoolkit
[build instructions]:https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html
[driver package]:https://downloadcenter.intel.com/download/29335/Intel-Graphics-Windows-10-DCH-Drivers
[Intel® Neural Compute Stick 2 Get Started]:https://software.intel.com/en-us/neural-compute-stick/get-started


@@ -0,0 +1,39 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if (VERBOSE_BUILD)
set(CMAKE_VERBOSE_MAKEFILE ON CACHE BOOL "" FORCE)
endif()
# 64-bit platform
if (CMAKE_SIZEOF_VOID_P EQUAL 8)
message(STATUS "Detected 64 bit architecture")
SET(ARCH_64 ON)
else()
message(STATUS "Detected 32 bit architecture")
SET(ARCH_64 OFF)
endif()
if (NOT ENABLE_MKL_DNN)
set(ENABLE_MKL OFF)
endif()
if(ENABLE_AVX512F)
if ((CMAKE_CXX_COMPILER_ID STREQUAL "MSVC") AND (MSVC_VERSION VERSION_LESS 1920))
# MSVC_VERSION 1920 corresponds to MSVC 2019; AVX512F does not work in MSVC 2017
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
if ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") AND (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 6))
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
if ((CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang") AND (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 10))
# TBD: clarify which AppleClang version supports avx512
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
if ((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") AND (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9))
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
endif()
print_enabled_features()


@@ -0,0 +1,211 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT TARGET ie_coverage_clean)
add_custom_target(ie_coverage_clean)
set_target_properties(ie_coverage_clean PROPERTIES FOLDER coverage)
endif()
if(NOT TARGET ie_coverage_init)
add_custom_target(ie_coverage_init)
set_target_properties(ie_coverage_init PROPERTIES FOLDER coverage)
endif()
if(NOT TARGET ie_coverage)
add_custom_target(ie_coverage)
set_target_properties(ie_coverage PROPERTIES FOLDER coverage)
endif()
set(IE_COVERAGE_REPORTS "${CMAKE_BINARY_DIR}/coverage")
set(IE_COVERAGE_SCRIPT_DIR "${CMAKE_CURRENT_SOURCE_DIR}/cmake/coverage")
include(CMakeParseArguments)
#
# ie_coverage_clean(REPOSITORY <repo> DIRECTORY <dir>)
#
function(ie_coverage_clean)
cmake_parse_arguments(IE_COVERAGE "" "REPOSITORY;DIRECTORY" "" ${ARGN})
add_custom_target(ie_coverage_zerocounters_${IE_COVERAGE_REPOSITORY}
COMMAND lcov --zerocounters --quiet
--directory "${IE_COVERAGE_DIRECTORY}"
COMMENT "Add zero counters for coverage for ${IE_COVERAGE_REPOSITORY}"
VERBATIM)
add_custom_target(ie_coverage_clean_${IE_COVERAGE_REPOSITORY}
COMMAND ${CMAKE_COMMAND}
-D "IE_COVERAGE_REPORTS=${IE_COVERAGE_REPORTS}"
-D "IE_COVERAGE_DIRECTORY=${IE_COVERAGE_DIRECTORY}"
-D "CMAKE_BINARY_DIRECTORY=${CMAKE_BINARY_DIR}"
-D "CMAKE_SOURCE_DIRECTORY=${CMAKE_SOURCE_DIR}"
-P "${IE_COVERAGE_SCRIPT_DIR}/coverage_clean.cmake"
COMMENT "Clean previously created HTML report files for ${IE_COVERAGE_REPOSITORY}"
DEPENDS "${IE_COVERAGE_SCRIPT_DIR}/coverage_clean.cmake"
VERBATIM)
set_target_properties(ie_coverage_zerocounters_${IE_COVERAGE_REPOSITORY}
ie_coverage_clean_${IE_COVERAGE_REPOSITORY}
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_clean ie_coverage_zerocounters_${IE_COVERAGE_REPOSITORY}
ie_coverage_clean_${IE_COVERAGE_REPOSITORY})
endfunction()
#
# ie_coverage_capture(INFO_FILE <info_file>
# BASE_DIRECTORY <base dir>
# DIRECTORY <gcda dir>)
#
function(ie_coverage_capture)
cmake_parse_arguments(IE_COVERAGE "" "INFO_FILE;BASE_DIRECTORY;DIRECTORY" "" ${ARGN})
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}.info")
set(output_base_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}_base.info")
set(output_tests_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}_tests.info")
add_custom_command(OUTPUT ${output_base_file}
COMMAND ${CMAKE_COMMAND} -E make_directory "${IE_COVERAGE_REPORTS}"
COMMAND lcov --no-external --capture --initial --quiet
--directory "${IE_COVERAGE_DIRECTORY}"
--base-directory "${IE_COVERAGE_BASE_DIRECTORY}"
--output-file ${output_base_file}
COMMENT "Capture initial coverage data ${IE_COVERAGE_INFO_FILE}"
VERBATIM)
add_custom_command(OUTPUT ${output_tests_file}
COMMAND ${CMAKE_COMMAND} -E make_directory "${IE_COVERAGE_REPORTS}"
COMMAND lcov --no-external --capture --quiet
--directory "${IE_COVERAGE_DIRECTORY}"
--base-directory "${IE_COVERAGE_BASE_DIRECTORY}"
--output-file ${output_tests_file}
COMMENT "Capture test coverage data ${IE_COVERAGE_INFO_FILE}"
VERBATIM)
add_custom_command(OUTPUT ${output_file}
COMMAND ${CMAKE_COMMAND}
-D "IE_COVERAGE_OUTPUT_FILE=${output_file}"
-D "IE_COVERAGE_INPUT_FILES=${output_base_file};${output_tests_file}"
-P "${IE_COVERAGE_SCRIPT_DIR}/coverage_merge.cmake"
COMMENT "Generate total coverage data ${IE_COVERAGE_INFO_FILE}"
DEPENDS ${output_base_file} ${output_tests_file}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_INFO_FILE}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_INFO_FILE}_info
PROPERTIES FOLDER coverage)
endfunction()
#
# ie_coverage_extract(INPUT <info_file> OUTPUT <output_file> PATTERNS <patterns ...>)
#
function(ie_coverage_extract)
cmake_parse_arguments(IE_COVERAGE "" "INPUT;OUTPUT" "PATTERNS" ${ARGN})
set(input_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INPUT}.info")
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_OUTPUT}.info")
set(commands lcov --quiet)
foreach(pattern IN LISTS IE_COVERAGE_PATTERNS)
list(APPEND commands --extract ${input_file} ${pattern})
endforeach()
list(APPEND commands --output-file ${output_file})
add_custom_command(OUTPUT ${output_file}
COMMAND ${commands}
COMMENT "Generate coverage data ${IE_COVERAGE_OUTPUT}"
DEPENDS ${input_file}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_OUTPUT}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_OUTPUT}_info
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_OUTPUT}_info ie_coverage_${IE_COVERAGE_INPUT}_info)
endfunction()
#
# ie_coverage_remove(INPUT <info_file> OUTPUT <output_file> PATTERNS <patterns ...>)
#
function(ie_coverage_remove)
cmake_parse_arguments(IE_COVERAGE "" "INPUT;OUTPUT" "PATTERNS" ${ARGN})
set(input_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INPUT}.info")
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_OUTPUT}.info")
set(commands lcov --quiet)
foreach(pattern IN LISTS IE_COVERAGE_PATTERNS)
list(APPEND commands --remove ${input_file} ${pattern})
endforeach()
list(APPEND commands --output-file ${output_file})
add_custom_command(OUTPUT ${output_file}
COMMAND ${commands}
COMMENT "Generate coverage data ${IE_COVERAGE_OUTPUT}"
DEPENDS ${input_file}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_OUTPUT}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_OUTPUT}_info
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_OUTPUT}_info ie_coverage_${IE_COVERAGE_INPUT}_info)
endfunction()
#
# ie_coverage_merge(OUTPUT <output file> INPUTS <input files ...>)
#
function(ie_coverage_merge)
cmake_parse_arguments(IE_COVERAGE "" "OUTPUT" "INPUTS" ${ARGN})
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_OUTPUT}.info")
foreach(input_info_file IN LISTS IE_COVERAGE_INPUTS)
set(input_file ${IE_COVERAGE_REPORTS}/${input_info_file}.info)
list(APPEND dependencies ie_coverage_${input_info_file}_info)
list(APPEND input_files ${input_file})
endforeach()
add_custom_command(OUTPUT ${output_file}
COMMAND ${CMAKE_COMMAND}
-D "IE_COVERAGE_OUTPUT_FILE=${output_file}"
-D "IE_COVERAGE_INPUT_FILES=${input_files}"
-P "${IE_COVERAGE_SCRIPT_DIR}/coverage_merge.cmake"
COMMENT "Generate coverage data ${IE_COVERAGE_OUTPUT}"
DEPENDS ${input_files}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_OUTPUT}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_OUTPUT}_info
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_OUTPUT}_info ${dependencies})
endfunction()
#
# ie_coverage_genhtml(INFO_FILE <info_file> PREFIX <prefix>)
#
function(ie_coverage_genhtml)
cmake_parse_arguments(IE_COVERAGE "" "INFO_FILE;PREFIX" "" ${ARGN})
set(input_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}.info")
set(output_directory "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}")
add_custom_command(OUTPUT "${output_directory}/index.html"
COMMAND genhtml ${input_file} --title "${IE_COVERAGE_INFO_FILE}" --legend
--no-branch-coverage --demangle-cpp
--output-directory "${output_directory}"
--num-spaces 4 --quiet
--prefix "${IE_COVERAGE_PREFIX}"
DEPENDS ${input_file}
COMMENT "Generate HTML report for ${IE_COVERAGE_INFO_FILE}"
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml
DEPENDS "${output_directory}/index.html")
set_target_properties(ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml ie_coverage_${IE_COVERAGE_INFO_FILE}_info)
add_dependencies(ie_coverage ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml)
endfunction()
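#
# Example (hypothetical): end-to-end wiring of the helpers above for a single
# repository; the names and paths are illustrative only.
#
# ie_coverage_clean(REPOSITORY my_repo DIRECTORY "${CMAKE_BINARY_DIR}/my_repo")
# ie_coverage_capture(INFO_FILE my_repo
#                     BASE_DIRECTORY "${CMAKE_SOURCE_DIR}"
#                     DIRECTORY "${CMAKE_BINARY_DIR}/my_repo")
# ie_coverage_extract(INPUT my_repo OUTPUT my_repo_src
#                     PATTERNS "${CMAKE_SOURCE_DIR}/src/*")
# ie_coverage_genhtml(INFO_FILE my_repo_src PREFIX "${CMAKE_SOURCE_DIR}")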


@@ -0,0 +1,30 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT DEFINED IE_COVERAGE_REPORTS)
message(FATAL_ERROR "IE_COVERAGE_REPORTS variable is not defined")
return()
endif()
file(REMOVE_RECURSE "${IE_COVERAGE_REPORTS}")
if(NOT DEFINED IE_COVERAGE_DIRECTORY)
message(FATAL_ERROR "IE_COVERAGE_DIRECTORY variable is not defined")
return()
endif()
# remove .gcno files which are kept from the previous build
file(GLOB_RECURSE gcno_files "${IE_COVERAGE_DIRECTORY}/*.gcno")
foreach(file IN LISTS gcno_files)
string(REPLACE ".gcno" "" temp_file "${file}")
string(REGEX REPLACE "CMakeFiles/.+dir/" "" temp_file "${temp_file}")
string(REPLACE "${CMAKE_BINARY_DIRECTORY}" "${CMAKE_SOURCE_DIRECTORY}" source_file "${temp_file}")
if(NOT EXISTS "${source_file}")
file(REMOVE "${file}")
string(REPLACE "${CMAKE_BINARY_DIRECTORY}/" "" file "${file}")
message("Removing ${file}")
endif()
endforeach()
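# Example (hypothetical): this script is meant to be driven via `cmake -P` with
# the required variables passed as -D options, as the ie_coverage_clean() helper does:
#
#   cmake -D "IE_COVERAGE_REPORTS=<build_dir>/coverage"
#         -D "IE_COVERAGE_DIRECTORY=<build_dir>"
#         -D "CMAKE_BINARY_DIRECTORY=<build_dir>"
#         -D "CMAKE_SOURCE_DIRECTORY=<source_dir>"
#         -P coverage_clean.cmake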


@@ -4,18 +4,26 @@
set_temp_directory(TEMP "${IE_MAIN_SOURCE_DIR}")
if(CMAKE_CROSSCOMPILING AND CMAKE_HOST_SYSTEM_NAME MATCHES Linux AND CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(protoc_version "3.7.1")
include(dependency_solver)
RESOLVE_DEPENDENCY(SYSTEM_PROTOC_ROOT
ARCHIVE_LIN "protoc-${protoc_version}-linux-x86_64.tar.gz"
TARGET_PATH "${TEMP}/protoc-${protoc_version}-linux-x86_64"
SHA256 "a1bedd5c05ca51e49f8f254faa3d7331e05b3a806c151fb111d582f154d0fee8"
)
debug_message(STATUS "host protoc-${protoc_version} root path = " ${SYSTEM_PROTOC_ROOT})
if(CMAKE_CROSSCOMPILING)
if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(HOST_X86_64 ON)
endif()
set(protoc_version "3.7.1")
if(CMAKE_HOST_SYSTEM_NAME MATCHES Linux)
RESOLVE_DEPENDENCY(SYSTEM_PROTOC_ROOT
ARCHIVE_LIN "protoc-${protoc_version}-linux-x86_64.tar.gz"
TARGET_PATH "${TEMP}/protoc-${protoc_version}-linux-x86_64")
debug_message(STATUS "host protoc-${protoc_version} root path = " ${SYSTEM_PROTOC_ROOT})
else()
message(FATAL_ERROR "Unsupported host system (${CMAKE_HOST_SYSTEM_NAME}) and arch (${CMAKE_HOST_SYSTEM_PROCESSOR}) for cross-compilation")
endif()
reset_deps_cache(SYSTEM_PROTOC)
message("${SYSTEM_PROTOC_ROOT}/bin")
find_program(
SYSTEM_PROTOC
NAMES protoc


@@ -0,0 +1,226 @@
# Copyright (C) 2018 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
list(APPEND CMAKE_MODULE_PATH
"${OpenVINO_MAIN_SOURCE_DIR}/cmake/download"
"${OpenVINO_MAIN_SOURCE_DIR}/cmake/cross_compile"
)
include(CPackComponent)
unset(IE_CPACK_COMPONENTS_ALL CACHE)
set(IE_CPACK_IE_DIR deployment_tools/inference_engine)
# Search packages for the host system instead of packages for the target system
# in case of cross compilation these macros should be defined by the toolchain file
if(NOT COMMAND find_host_package)
macro(find_host_package)
find_package(${ARGN})
endmacro()
endif()
if(NOT COMMAND find_host_program)
macro(find_host_program)
find_program(${ARGN})
endmacro()
endif()
#
# ie_cpack_set_library_dir()
#
# Set library directory for cpack
#
function(ie_cpack_set_library_dir)
string(TOLOWER ${CMAKE_SYSTEM_PROCESSOR} ARCH)
if(ARCH STREQUAL "x86_64" OR ARCH STREQUAL "amd64") # Windows detects Intel's 64-bit CPU as AMD64
set(ARCH intel64)
elseif(ARCH STREQUAL "i386")
set(ARCH ia32)
endif()
if(WIN32)
set(IE_CPACK_LIBRARY_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH}/${CMAKE_BUILD_TYPE} PARENT_SCOPE)
set(IE_CPACK_RUNTIME_PATH ${IE_CPACK_IE_DIR}/bin/${ARCH}/${CMAKE_BUILD_TYPE} PARENT_SCOPE)
set(IE_CPACK_ARCHIVE_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH}/${CMAKE_BUILD_TYPE} PARENT_SCOPE)
else()
set(IE_CPACK_LIBRARY_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH} PARENT_SCOPE)
set(IE_CPACK_RUNTIME_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH} PARENT_SCOPE)
set(IE_CPACK_ARCHIVE_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH} PARENT_SCOPE)
endif()
endfunction()
ie_cpack_set_library_dir()
#
# ie_cpack_add_component(NAME ...)
#
# Wraps original `cpack_add_component` and adds component to internal IE list
#
macro(ie_cpack_add_component NAME)
list(APPEND IE_CPACK_COMPONENTS_ALL ${NAME})
set(IE_CPACK_COMPONENTS_ALL "${IE_CPACK_COMPONENTS_ALL}" CACHE STRING "" FORCE)
cpack_add_component(${NAME} ${ARGN})
endmacro()
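# Example (hypothetical) usage:
#   ie_cpack_add_component(core REQUIRED)
#   ie_cpack(${IE_CPACK_COMPONENTS_ALL})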
macro(ie_cpack)
set(CPACK_GENERATOR "TGZ")
string(REPLACE "/" "_" CPACK_PACKAGE_VERSION "${CI_BUILD_NUMBER}")
if(WIN32)
set(CPACK_PACKAGE_NAME inference-engine_${CMAKE_BUILD_TYPE})
else()
set(CPACK_PACKAGE_NAME inference-engine)
endif()
set(CPACK_INCLUDE_TOPLEVEL_DIRECTORY OFF)
set(CPACK_ARCHIVE_COMPONENT_INSTALL ON)
set(CPACK_PACKAGE_VENDOR "Intel")
set(CPACK_COMPONENTS_ALL ${ARGN})
set(CPACK_STRIP_FILES ON)
if(OS_FOLDER)
set(CPACK_SYSTEM_NAME "${OS_FOLDER}")
endif()
include(CPack)
endmacro()
# prepare temporary folder
function(set_temp_directory temp_variable source_tree_dir)
if (DEFINED ENV{DL_SDK_TEMP} AND NOT $ENV{DL_SDK_TEMP} STREQUAL "")
message(STATUS "DL_SDK_TEMP environment is set : $ENV{DL_SDK_TEMP}")
if (WIN32)
string(REPLACE "\\" "\\\\" temp $ENV{DL_SDK_TEMP})
else()
set(temp $ENV{DL_SDK_TEMP})
endif()
if (ENABLE_ALTERNATIVE_TEMP)
set(ALTERNATIVE_PATH ${source_tree_dir}/temp)
endif()
else ()
set(temp ${source_tree_dir}/temp)
endif()
set("${temp_variable}" "${temp}" CACHE PATH "Path to temp directory")
if(ALTERNATIVE_PATH)
set(ALTERNATIVE_PATH "${ALTERNATIVE_PATH}" PARENT_SCOPE)
endif()
endfunction()
include(coverage/coverage)
# External dependencies
find_package(Threads)
# Detect target
include(target_flags)
# printing debug messages
include(debug)
# linking libraries without discarding symbols
include(whole_archive)
string(TOLOWER ${CMAKE_SYSTEM_PROCESSOR} ARCH_FOLDER)
if(X86_64)
set(ARCH_FOLDER intel64)
elseif(X86)
set(ARCH_FOLDER ia32)
endif()
if(OS_FOLDER)
message ("**** OS FOLDER IS: [${OS_FOLDER}]")
if("${OS_FOLDER}" STREQUAL "ON")
message ("**** USING OS FOLDER: [${CMAKE_SYSTEM_NAME}]")
set(BIN_FOLDER "bin/${CMAKE_SYSTEM_NAME}/${ARCH_FOLDER}")
else()
set(BIN_FOLDER "bin/${OS_FOLDER}/${ARCH_FOLDER}")
endif()
else()
set(BIN_FOLDER "bin/${ARCH_FOLDER}")
endif()
if("${CMAKE_BUILD_TYPE}" STREQUAL "")
debug_message(STATUS "CMAKE_BUILD_TYPE not defined, 'Release' will be used")
set(CMAKE_BUILD_TYPE "Release")
endif()
# allow overriding the default OUTPUT_ROOT
if(NOT DEFINED OUTPUT_ROOT)
set(OUTPUT_ROOT ${OpenVINO_MAIN_SOURCE_DIR})
endif()
# Enable postfixes for Debug/Release builds
set(IE_DEBUG_POSTFIX_WIN "d")
set(IE_RELEASE_POSTFIX_WIN "")
set(IE_DEBUG_POSTFIX_LIN "")
set(IE_RELEASE_POSTFIX_LIN "")
set(IE_DEBUG_POSTFIX_MAC "d")
set(IE_RELEASE_POSTFIX_MAC "")
if(WIN32)
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_WIN})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_WIN})
elseif(APPLE)
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_MAC})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_MAC})
else()
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_LIN})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_LIN})
endif()
set(CMAKE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX})
set(CMAKE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX})
if (WIN32 OR CMAKE_GENERATOR STREQUAL "Xcode")
# Support CMake multiconfiguration for Visual Studio or Xcode build
set(IE_BUILD_POSTFIX $<$<CONFIG:Debug>:${IE_DEBUG_POSTFIX}>$<$<CONFIG:Release>:${IE_RELEASE_POSTFIX}>)
else ()
if (${CMAKE_BUILD_TYPE} STREQUAL "Debug")
set(IE_BUILD_POSTFIX ${IE_DEBUG_POSTFIX})
else()
set(IE_BUILD_POSTFIX ${IE_RELEASE_POSTFIX})
endif()
endif()
message(STATUS "CMAKE_BUILD_TYPE: ${CMAKE_BUILD_TYPE}")
add_definitions(-DIE_BUILD_POSTFIX=\"${IE_BUILD_POSTFIX}\")
if(NOT UNIX)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
else()
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE}/lib)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE}/lib)
set(CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE})
set(CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE})
endif()
if(APPLE)
# WA for Xcode generator + object libraries issue:
# https://gitlab.kitware.com/cmake/cmake/issues/20260
# http://cmake.3232098.n2.nabble.com/XCODE-DEPEND-HELPER-make-Deletes-Targets-Before-and-While-They-re-Built-td7598277.html
set(CMAKE_XCODE_GENERATE_TOP_LEVEL_PROJECT_ONLY ON)
set(CMAKE_MACOSX_RPATH ON)
endif()
# Use solution folders
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
set(CMAKE_POLICY_DEFAULT_CMP0054 NEW)
include(sdl)
include(os_flags)
include(sanitizer)
include(cross_compiled_func)
function(set_ci_build_number)
set(OpenVINO_MAIN_SOURCE_DIR "${CMAKE_SOURCE_DIR}")
include(version)
set(CI_BUILD_NUMBER "${CI_BUILD_NUMBER}" PARENT_SCOPE)
endfunction()
set_ci_build_number()


@@ -1,246 +0,0 @@
# Copyright (C) 2018 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required(VERSION 3.13)
if(NOT DEFINED IEDevScripts_DIR)
message(FATAL_ERROR "IEDevScripts_DIR is not defined")
endif()
set(OLD_CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH})
set(CMAKE_MODULE_PATH "${IEDevScripts_DIR}")
function(set_ci_build_number)
set(repo_root "${CMAKE_SOURCE_DIR}")
include(version)
set(CI_BUILD_NUMBER "${CI_BUILD_NUMBER}" PARENT_SCOPE)
endfunction()
set_ci_build_number()
include(features)
include(message)
#
# Detect target
#
include(target_flags)
string(TOLOWER ${CMAKE_SYSTEM_PROCESSOR} ARCH_FOLDER)
if(X86_64)
set(ARCH_FOLDER intel64)
elseif(X86)
set(ARCH_FOLDER ia32)
elseif(MSVC AND ARM)
set(ARCH_FOLDER arm)
elseif(MSVC AND AARCH64)
set(ARCH_FOLDER arm64)
endif()
#
# Prepare temporary folder
#
function(set_temp_directory temp_variable source_tree_dir)
if (DEFINED ENV{DL_SDK_TEMP} AND NOT $ENV{DL_SDK_TEMP} STREQUAL "")
message(STATUS "DL_SDK_TEMP environment is set : $ENV{DL_SDK_TEMP}")
file(TO_CMAKE_PATH $ENV{DL_SDK_TEMP} temp)
if (ENABLE_ALTERNATIVE_TEMP)
set(ALTERNATIVE_PATH ${source_tree_dir}/temp)
endif()
else ()
set(temp ${source_tree_dir}/temp)
endif()
set("${temp_variable}" "${temp}" CACHE PATH "Path to temp directory")
if(ALTERNATIVE_PATH)
set(ALTERNATIVE_PATH "${ALTERNATIVE_PATH}" PARENT_SCOPE)
endif()
endfunction()
#
# For cross-compilation
#
# Search packages for the host system instead of packages for the target system
# in case of cross compilation these macros should be defined by the toolchain file
if(NOT COMMAND find_host_package)
macro(find_host_package)
find_package(${ARGN})
endmacro()
endif()
if(NOT COMMAND find_host_program)
macro(find_host_program)
find_program(${ARGN})
endmacro()
endif()
#
# Common scripts
#
include(packaging)
include(coverage/coverage)
include(shellcheck/shellcheck)
# printing debug messages
include(debug)
if(OS_FOLDER)
message ("**** OS FOLDER IS: [${OS_FOLDER}]")
if(OS_FOLDER STREQUAL "ON")
message ("**** USING OS FOLDER: [${CMAKE_SYSTEM_NAME}]")
set(BIN_FOLDER "bin/${CMAKE_SYSTEM_NAME}/${ARCH_FOLDER}")
else()
set(BIN_FOLDER "bin/${OS_FOLDER}/${ARCH_FOLDER}")
endif()
else()
set(BIN_FOLDER "bin/${ARCH_FOLDER}")
endif()
if(NOT DEFINED CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "")
message(STATUS "CMAKE_BUILD_TYPE not defined, 'Release' will be used")
set(CMAKE_BUILD_TYPE "Release")
else()
set(RELEASE_TYPES "Debug" "Release" "RelWithDebInfo" "MinSizeRel")
list(FIND RELEASE_TYPES ${CMAKE_BUILD_TYPE} INDEX_FOUND)
if (INDEX_FOUND EQUAL -1)
message(FATAL_ERROR "CMAKE_BUILD_TYPE must be one of Debug, Release, RelWithDebInfo, or MinSizeRel")
endif()
endif()
message(STATUS "CMAKE_BUILD_TYPE: ${CMAKE_BUILD_TYPE}")
if(USE_BUILD_TYPE_SUBFOLDER)
set(BIN_FOLDER "${BIN_FOLDER}/${CMAKE_BUILD_TYPE}")
endif()
# allow overriding the default OUTPUT_ROOT
if(NOT DEFINED OUTPUT_ROOT)
if(NOT DEFINED OpenVINO_MAIN_SOURCE_DIR)
message(FATAL_ERROR "OpenVINO_MAIN_SOURCE_DIR is not defined")
endif()
set(OUTPUT_ROOT ${OpenVINO_MAIN_SOURCE_DIR})
endif()
# Enable postfixes for Debug/Release builds
set(IE_DEBUG_POSTFIX_WIN "d")
set(IE_RELEASE_POSTFIX_WIN "")
set(IE_DEBUG_POSTFIX_LIN "")
set(IE_RELEASE_POSTFIX_LIN "")
set(IE_DEBUG_POSTFIX_MAC "d")
set(IE_RELEASE_POSTFIX_MAC "")
if(WIN32)
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_WIN})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_WIN})
elseif(APPLE)
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_MAC})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_MAC})
else()
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_LIN})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_LIN})
endif()
set(CMAKE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX})
set(CMAKE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX})
if (MSVC OR CMAKE_GENERATOR STREQUAL "Xcode")
# Support CMake multiconfiguration for Visual Studio or Xcode build
set(IE_BUILD_POSTFIX $<$<CONFIG:Debug>:${IE_DEBUG_POSTFIX}>$<$<CONFIG:Release>:${IE_RELEASE_POSTFIX}>)
else ()
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(IE_BUILD_POSTFIX ${IE_DEBUG_POSTFIX})
else()
set(IE_BUILD_POSTFIX ${IE_RELEASE_POSTFIX})
endif()
endif()
add_definitions(-DIE_BUILD_POSTFIX=\"${IE_BUILD_POSTFIX}\")
if(NOT UNIX)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
else()
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/lib)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/lib)
endif()
set(CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
if(APPLE)
set(CMAKE_MACOSX_RPATH ON)
# WA for Xcode generator + object libraries issue:
# https://gitlab.kitware.com/cmake/cmake/issues/20260
# http://cmake.3232098.n2.nabble.com/XCODE-DEPEND-HELPER-make-Deletes-Targets-Before-and-While-They-re-Built-td7598277.html
set(CMAKE_XCODE_GENERATE_TOP_LEVEL_PROJECT_ONLY ON)
endif()
# Use solution folders
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
# Enable CMAKE_<LANG>_COMPILER_ID AppleClang
set(CMAKE_POLICY_DEFAULT_CMP0025 NEW)
# LTO
if(ENABLE_LTO)
set(CMAKE_POLICY_DEFAULT_CMP0069 NEW)
include(CheckIPOSupported)
check_ipo_supported(RESULT IPO_SUPPORTED
OUTPUT OUTPUT_MESSAGE
LANGUAGES C CXX)
if(NOT IPO_SUPPORTED)
set(ENABLE_LTO "OFF" CACHE STRING "Enable Link Time Optimization" FORCE)
message(WARNING "IPO / LTO is not supported: ${OUTPUT_MESSAGE}")
endif()
endif()
# General flags
include(compile_flags/sdl)
include(compile_flags/os_flags)
include(compile_flags/sanitizer)
include(compile_flags/fuzzing)
include(download/dependency_solver)
include(cross_compile/cross_compiled_func)
include(faster_build)
include(whole_archive)
include(linux_name)
include(models)
include(api_validator/api_validator)
include(vs_version/vs_version)
include(plugins/plugins)
include(add_ie_target)
if(ENABLE_FUZZING)
enable_fuzzing()
endif()
# macro to mark target as conditionally compiled
function(ie_mark_target_as_cc TARGET_NAME)
if(NOT (SELECTIVE_BUILD STREQUAL "ON"))
return()
endif()
if(NOT TARGET ${TARGET_NAME})
message(FATAL_ERROR "${TARGET_NAME} does not represent target")
endif()
get_target_property(sources ${TARGET_NAME} SOURCES)
set_source_files_properties(${sources} PROPERTIES OBJECT_DEPENDS ${GENERATED_HEADER})
endfunction()
# Code style utils
include(cpplint/cpplint)
include(clang_format/clang_format)
# Restore state
set(CMAKE_MODULE_PATH ${OLD_CMAKE_MODULE_PATH})


@@ -1,128 +0,0 @@
# Copyright (C) 2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(WIN32)
set(PROGRAMFILES_ENV "ProgramFiles(X86)")
file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)
set(UWP_SDK_PATH "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64")
message(STATUS "Trying to find apivalidator in: ${UWP_SDK_PATH}")
find_host_program(UWP_API_VALIDATOR
NAMES apivalidator
PATHS "${UWP_SDK_PATH}"
DOC "ApiValidator for UWP compliance")
if(UWP_API_VALIDATOR)
message(STATUS "Found apivalidator: ${UWP_API_VALIDATOR}")
endif()
endif()
function(_ie_add_api_validator_post_build_step_recursive)
cmake_parse_arguments(API_VALIDATOR "" "TARGET" "" ${ARGN})
list(APPEND API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGET})
set(API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGETS} PARENT_SCOPE)
get_target_property(IS_IMPORTED ${API_VALIDATOR_TARGET} IMPORTED)
if(IS_IMPORTED)
return()
endif()
get_target_property(LIBRARY_TYPE ${API_VALIDATOR_TARGET} TYPE)
if(LIBRARY_TYPE STREQUAL "EXECUTABLE" OR LIBRARY_TYPE STREQUAL "SHARED_LIBRARY")
get_target_property(LINKED_LIBRARIES ${API_VALIDATOR_TARGET} LINK_LIBRARIES)
if(LINKED_LIBRARIES)
foreach(ITEM IN LISTS LINKED_LIBRARIES)
if(NOT TARGET ${ITEM})
continue()
endif()
get_target_property(LIBRARY_TYPE_DEPENDENCY ${ITEM} TYPE)
if(LIBRARY_TYPE_DEPENDENCY STREQUAL "SHARED_LIBRARY")
_ie_add_api_validator_post_build_step_recursive(TARGET ${ITEM})
endif()
endforeach()
endif()
endif()
set(API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGETS} PARENT_SCOPE)
endfunction()
set(VALIDATED_LIBRARIES "" CACHE INTERNAL "")
function(_ie_add_api_validator_post_build_step)
set(UWP_API_VALIDATOR_APIS "${PROGRAMFILES}/Windows Kits/10/build/universalDDIs/x64/UniversalDDIs.xml")
set(UWP_API_VALIDATOR_EXCLUSION "${UWP_SDK_PATH}/BinaryExclusionlist.xml")
if((NOT UWP_API_VALIDATOR) OR (WINDOWS_STORE OR WINDOWS_PHONE))
return()
endif()
cmake_parse_arguments(API_VALIDATOR "" "TARGET" "" ${ARGN})
if(NOT API_VALIDATOR_TARGET)
message(FATAL_ERROR "RunApiValidator requires TARGET to validate!")
endif()
if(NOT TARGET ${API_VALIDATOR_TARGET})
message(FATAL_ERROR "${API_VALIDATOR_TARGET} is not a TARGET in the project tree.")
endif()
# collect targets
_ie_add_api_validator_post_build_step_recursive(TARGET ${API_VALIDATOR_TARGET})
# remove targets which were tested before
foreach(item IN LISTS VALIDATED_LIBRARIES)
list(REMOVE_ITEM API_VALIDATOR_TARGETS ${item})
endforeach()
list(REMOVE_DUPLICATES API_VALIDATOR_TARGETS)
if(NOT API_VALIDATOR_TARGETS)
return()
endif()
# apply check
macro(api_validator_get_target_name)
get_target_property(IS_IMPORTED ${target} IMPORTED)
if(IS_IMPORTED)
get_target_property(target_location ${target} LOCATION)
get_filename_component(target_name "${target_location}" NAME_WE)
else()
set(target_name ${target})
endif()
endmacro()
foreach(target IN LISTS API_VALIDATOR_TARGETS)
api_validator_get_target_name()
set(output_file "${CMAKE_BINARY_DIR}/api_validator/${target_name}.txt")
add_custom_command(TARGET ${API_VALIDATOR_TARGET} POST_BUILD
COMMAND ${CMAKE_COMMAND}
-D UWP_API_VALIDATOR=${UWP_API_VALIDATOR}
-D UWP_API_VALIDATOR_TARGET=$<TARGET_FILE:${target}>
-D UWP_API_VALIDATOR_APIS=${UWP_API_VALIDATOR_APIS}
-D UWP_API_VALIDATOR_EXCLUSION=${UWP_API_VALIDATOR_EXCLUSION}
-D UWP_API_VALIDATOR_OUTPUT=${output_file}
-D CMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}
-P "${IEDevScripts_DIR}/api_validator/api_validator_run.cmake"
BYPRODUCTS ${output_file}
COMMENT "[apiValidator] Check ${target_name} for OneCore compliance"
VERBATIM)
endforeach()
# update list of validated libraries
list(APPEND VALIDATED_LIBRARIES ${API_VALIDATOR_TARGETS})
set(VALIDATED_LIBRARIES "${VALIDATED_LIBRARIES}" CACHE INTERNAL "" FORCE)
endfunction()
#
# ie_add_api_validator_post_build_step(TARGET <name>)
#
macro(ie_add_api_validator_post_build_step)
_ie_add_api_validator_post_build_step(${ARGV})
endmacro()


@@ -1,73 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_policy(SET CMP0012 NEW)
foreach(var UWP_API_VALIDATOR UWP_API_VALIDATOR_TARGET
UWP_API_VALIDATOR_APIS UWP_API_VALIDATOR_EXCLUSION
UWP_API_VALIDATOR_OUTPUT CMAKE_TOOLCHAIN_FILE)
if(NOT DEFINED ${var})
message(FATAL_ERROR "Variable ${var} is not defined")
endif()
endforeach()
# create command
if(NOT EXISTS "${UWP_API_VALIDATOR_APIS}")
message(FATAL_ERROR "${UWP_API_VALIDATOR_APIS} does not exist")
endif()
set(command "${UWP_API_VALIDATOR}"
-SupportedApiXmlFiles:${UWP_API_VALIDATOR_APIS}
-DriverPackagePath:${UWP_API_VALIDATOR_TARGET})
if(EXISTS "${UWP_API_VALIDATOR_EXCLUSION}")
list(APPEND command
-BinaryExclusionListXmlFile:${UWP_API_VALIDATOR_EXCLUSION}
-StrictCompliance:TRUE)
set(UWP_HAS_BINARY_EXCLUSION ON)
endif()
# execute
execute_process(COMMAND ${command}
OUTPUT_VARIABLE output_message
ERROR_VARIABLE error_message
RESULT_VARIABLE exit_code
OUTPUT_STRIP_TRAILING_WHITESPACE)
file(WRITE "${UWP_API_VALIDATOR_OUTPUT}" "${output_message}\n\n\n${error_message}")
# post-process output
get_filename_component(name "${UWP_API_VALIDATOR_TARGET}" NAME)
if(NOT UWP_HAS_BINARY_EXCLUSION)
if(CMAKE_TOOLCHAIN_FILE MATCHES "onecoreuap.toolchain.cmake$")
# empty since we compile with static MSVC runtime
else()
set(exclusion_dlls "msvcp140.dll" "vcruntime140.dll")
endif()
# remove exclusions from error_message
foreach(dll IN LISTS exclusion_dlls)
string(REGEX REPLACE
"ApiValidation: Error: ${name} has unsupported API call to \"${dll}![^\"]+\"\n"
"" error_message "${error_message}")
endforeach()
# throw error if error_message still contains any errors
if(error_message)
message(FATAL_ERROR "${error_message}")
endif()
endif()
# write output
if(UWP_HAS_BINARY_EXCLUSION AND NOT exit_code EQUAL 0)
message(FATAL_ERROR "${error_message}")
endif()
message("ApiValidator: ${name} has passed the OneCore compliance")


@@ -1,25 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
macro(enable_fuzzing)
# Enable (libFuzzer)[https://llvm.org/docs/LibFuzzer.html] if supported.
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${FUZZING_COMPILER_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${FUZZING_COMPILER_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${FUZZING_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} ${FUZZING_LINKER_FLAGS}")
unset(FUZZING_COMPILER_FLAGS)
unset(FUZZING_LINKER_FLAGS)
endmacro()
function(add_fuzzer FUZZER_EXE_NAME FUZZER_SOURCES)
add_executable(${FUZZER_EXE_NAME} ${FUZZER_SOURCES})
target_link_libraries(${FUZZER_EXE_NAME} PRIVATE fuzz-testhelper)
if(ENABLE_FUZZING)
set_target_properties(${FUZZER_EXE_NAME} PROPERTIES LINK_FLAGS "-fsanitize=fuzzer")
endif()
endfunction(add_fuzzer)


@@ -1,211 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT TARGET ie_coverage_clean)
add_custom_target(ie_coverage_clean)
set_target_properties(ie_coverage_clean PROPERTIES FOLDER coverage)
endif()
if(NOT TARGET ie_coverage_init)
add_custom_target(ie_coverage_init)
set_target_properties(ie_coverage_init PROPERTIES FOLDER coverage)
endif()
if(NOT TARGET ie_coverage)
add_custom_target(ie_coverage)
set_target_properties(ie_coverage PROPERTIES FOLDER coverage)
endif()
set(IE_COVERAGE_REPORTS "${CMAKE_BINARY_DIR}/coverage")
set(IE_COVERAGE_SCRIPT_DIR "${IEDevScripts_DIR}/coverage")
include(CMakeParseArguments)
#
# ie_coverage_clean(REPOSITORY <repo> DIRECTORY <dir>)
#
function(ie_coverage_clean)
cmake_parse_arguments(IE_COVERAGE "" "REPOSITORY;DIRECTORY" "" ${ARGN})
add_custom_target(ie_coverage_zerocounters_${IE_COVERAGE_REPOSITORY}
COMMAND lcov --zerocounters --quiet
--directory "${IE_COVERAGE_DIRECTORY}"
COMMENT "Add zero counters for coverage for ${IE_COVERAGE_REPOSITORY}"
VERBATIM)
add_custom_target(ie_coverage_clean_${IE_COVERAGE_REPOSITORY}
COMMAND ${CMAKE_COMMAND}
-D "IE_COVERAGE_REPORTS=${IE_COVERAGE_REPORTS}"
-D "IE_COVERAGE_DIRECTORY=${IE_COVERAGE_DIRECTORY}"
-D "CMAKE_BINARY_DIRECTORY=${CMAKE_BINARY_DIR}"
-D "CMAKE_SOURCE_DIRECTORY=${CMAKE_SOURCE_DIR}"
-P "${IE_COVERAGE_SCRIPT_DIR}/coverage_clean.cmake"
COMMENT "Clean previously created HTML report files for ${IE_COVERAGE_REPOSITORY}"
DEPENDS "${IE_COVERAGE_SCRIPT_DIR}/coverage_clean.cmake"
VERBATIM)
set_target_properties(ie_coverage_zerocounters_${IE_COVERAGE_REPOSITORY}
ie_coverage_clean_${IE_COVERAGE_REPOSITORY}
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_clean ie_coverage_zerocounters_${IE_COVERAGE_REPOSITORY}
ie_coverage_clean_${IE_COVERAGE_REPOSITORY})
endfunction()
#
# ie_coverage_capture(INFO_FILE <info_file>
# BASE_DIRECTORY <base dir>
# DIRECTORY <gcda dir>)
#
function(ie_coverage_capture)
cmake_parse_arguments(IE_COVERAGE "" "INFO_FILE;BASE_DIRECTORY;DIRECTORY" "" ${ARGN})
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}.info")
set(output_base_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}_base.info")
set(output_tests_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}_tests.info")
add_custom_command(OUTPUT ${output_base_file}
COMMAND ${CMAKE_COMMAND} -E make_directory "${IE_COVERAGE_REPORTS}"
COMMAND lcov --no-external --capture --initial --quiet
--directory "${IE_COVERAGE_DIRECTORY}"
--base-directory "${IE_COVERAGE_BASE_DIRECTORY}"
--output-file ${output_base_file}
COMMENT "Capture initial coverage data ${IE_COVERAGE_INFO_FILE}"
VERBATIM)
add_custom_command(OUTPUT ${output_tests_file}
COMMAND ${CMAKE_COMMAND} -E make_directory "${IE_COVERAGE_REPORTS}"
COMMAND lcov --no-external --capture --quiet
--directory "${IE_COVERAGE_DIRECTORY}"
--base-directory "${IE_COVERAGE_BASE_DIRECTORY}"
--output-file ${output_tests_file}
COMMENT "Capture test coverage data ${IE_COVERAGE_INFO_FILE}"
VERBATIM)
add_custom_command(OUTPUT ${output_file}
COMMAND ${CMAKE_COMMAND}
-D "IE_COVERAGE_OUTPUT_FILE=${output_file}"
-D "IE_COVERAGE_INPUT_FILES=${output_base_file};${output_tests_file}"
-P "${IE_COVERAGE_SCRIPT_DIR}/coverage_merge.cmake"
COMMENT "Generate total coverage data ${IE_COVERAGE_INFO_FILE}"
DEPENDS ${output_base_file} ${output_tests_file}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_INFO_FILE}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_INFO_FILE}_info
PROPERTIES FOLDER coverage)
endfunction()
#
# ie_coverage_extract(INPUT <info_file> OUTPUT <output_file> PATTERNS <patterns ...>)
#
function(ie_coverage_extract)
cmake_parse_arguments(IE_COVERAGE "" "INPUT;OUTPUT" "PATTERNS" ${ARGN})
set(input_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INPUT}.info")
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_OUTPUT}.info")
set(commands lcov --quiet)
foreach(pattern IN LISTS IE_COVERAGE_PATTERNS)
list(APPEND commands --extract ${input_file} ${pattern})
endforeach()
list(APPEND commands --output-file ${output_file})
add_custom_command(OUTPUT ${output_file}
COMMAND ${commands}
COMMENT "Generate coverage data ${IE_COVERAGE_OUTPUT}"
DEPENDS ${input_file}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_OUTPUT}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_OUTPUT}_info
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_OUTPUT}_info ie_coverage_${IE_COVERAGE_INPUT}_info)
endfunction()
#
# ie_coverage_remove(INPUT <info_file> OUTPUT <output_file> PATTERNS <patterns ...>)
#
function(ie_coverage_remove)
cmake_parse_arguments(IE_COVERAGE "" "INPUT;OUTPUT" "PATTERNS" ${ARGN})
set(input_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INPUT}.info")
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_OUTPUT}.info")
set(commands lcov --quiet)
foreach(pattern IN LISTS IE_COVERAGE_PATTERNS)
list(APPEND commands --remove ${input_file} ${pattern})
endforeach()
list(APPEND commands --output-file ${output_file})
add_custom_command(OUTPUT ${output_file}
COMMAND ${commands}
COMMENT "Generate coverage data ${IE_COVERAGE_OUTPUT}"
DEPENDS ${input_file}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_OUTPUT}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_OUTPUT}_info
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_OUTPUT}_info ie_coverage_${IE_COVERAGE_INPUT}_info)
endfunction()
#
# ie_coverage_merge(OUTPUT <output file> INPUTS <input files ...>)
#
function(ie_coverage_merge)
cmake_parse_arguments(IE_COVERAGE "" "OUTPUT" "INPUTS" ${ARGN})
set(output_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_OUTPUT}.info")
foreach(input_info_file IN LISTS IE_COVERAGE_INPUTS)
set(input_file ${IE_COVERAGE_REPORTS}/${input_info_file}.info)
list(APPEND dependencies ie_coverage_${input_info_file}_info)
list(APPEND input_files ${input_file})
endforeach()
add_custom_command(OUTPUT ${output_file}
COMMAND ${CMAKE_COMMAND}
-D "IE_COVERAGE_OUTPUT_FILE=${output_file}"
-D "IE_COVERAGE_INPUT_FILES=${input_files}"
-P "${IE_COVERAGE_SCRIPT_DIR}/coverage_merge.cmake"
COMMENT "Generate coverage data ${IE_COVERAGE_OUTPUT}"
DEPENDS ${input_files}
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_OUTPUT}_info
DEPENDS ${output_file})
set_target_properties(ie_coverage_${IE_COVERAGE_OUTPUT}_info
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_OUTPUT}_info ${dependencies})
endfunction()
#
# ie_coverage_genhtml(INFO_FILE <info_file> PREFIX <prefix>)
#
function(ie_coverage_genhtml)
cmake_parse_arguments(IE_COVERAGE "" "INFO_FILE;PREFIX" "" ${ARGN})
set(input_file "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}.info")
set(output_directory "${IE_COVERAGE_REPORTS}/${IE_COVERAGE_INFO_FILE}")
add_custom_command(OUTPUT "${output_directory}/index.html"
COMMAND genhtml ${input_file} --title "${IE_COVERAGE_INFO_FILE}" --legend
--no-branch-coverage --demangle-cpp
--output-directory "${output_directory}"
--num-spaces 4 --quiet
--prefix "${IE_COVERAGE_PREFIX}"
DEPENDS ${input_file}
COMMENT "Generate HTML report for ${IE_COVERAGE_INFO_FILE}"
VERBATIM)
add_custom_target(ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml
DEPENDS "${output_directory}/index.html")
set_target_properties(ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml
PROPERTIES FOLDER coverage)
add_dependencies(ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml ie_coverage_${IE_COVERAGE_INFO_FILE}_info)
add_dependencies(ie_coverage ie_coverage_${IE_COVERAGE_INFO_FILE}_genhtml)
endfunction()
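As an illustration of how these functions chain together, a repository could reset counters, capture data, filter it, and render an HTML report roughly as follows (the repository name and patterns are placeholders, not taken from the build):
# illustrative sketch; names and patterns are hypothetical
ie_coverage_clean(REPOSITORY "inference_engine" DIRECTORY "${CMAKE_BINARY_DIR}/src")
ie_coverage_capture(INFO_FILE "inference_engine"
                    BASE_DIRECTORY "${CMAKE_SOURCE_DIR}/src"
                    DIRECTORY "${CMAKE_BINARY_DIR}/src")
ie_coverage_extract(INPUT "inference_engine" OUTPUT "inference_engine_core"
                    PATTERNS "*/inference_engine/src/*")
ie_coverage_genhtml(INFO_FILE "inference_engine_core" PREFIX "${CMAKE_SOURCE_DIR}")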


@@ -1,30 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT DEFINED IE_COVERAGE_REPORTS)
message(FATAL_ERROR "IE_COVERAGE_REPORTS variable is not defined")
return()
endif()
file(REMOVE_RECURSE "${IE_COVERAGE_REPORTS}")
if(NOT DEFINED IE_COVERAGE_DIRECTORY)
message(FATAL_ERROR "IE_COVERAGE_DIRECTORY variable is not defined")
return()
endif()
# remove .gcno files which are kept from the previous build
file(GLOB_RECURSE gcno_files "${IE_COVERAGE_DIRECTORY}/*.gcno")
foreach(file IN LISTS gcno_files)
string(REPLACE ".gcno" "" temp_file "${file}")
string(REGEX REPLACE "CMakeFiles/.+dir/" "" temp_file "${temp_file}")
string(REPLACE "${CMAKE_BINARY_DIRECTORY}" "${CMAKE_SOURCE_DIRECTORY}" source_file "${temp_file}")
if(NOT EXISTS "${source_file}")
file(REMOVE "${file}")
string(REPLACE "${CMAKE_BINARY_DIRECTORY}/" "" file "${file}")
message("Removing ${file}")
endif()
endforeach()


@@ -1,106 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(ENABLE_CPPLINT)
find_package(Python3 COMPONENTS Interpreter)
if(NOT Python3_Interpreter_FOUND)
message(WARNING "Python3 interpreter was not found (required for cpplint check)")
set(ENABLE_CPPLINT OFF)
endif()
endif()
if(ENABLE_CPPLINT)
add_custom_target(cpplint_all ALL)
set_target_properties(cpplint_all PROPERTIES FOLDER cpplint)
set(CPPLINT_ALL_OUTPUT_FILES "" CACHE INTERNAL "All cpplint output files")
endif()
function(add_cpplint_target TARGET_NAME)
if(NOT ENABLE_CPPLINT)
return()
endif()
set(options "")
set(oneValueArgs "")
set(multiValueArgs FOR_TARGETS FOR_SOURCES EXCLUDE_PATTERNS CUSTOM_FILTERS)
cmake_parse_arguments(CPPLINT "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
foreach(target IN LISTS CPPLINT_FOR_TARGETS)
get_target_property(target_sources "${target}" SOURCES)
list(APPEND CPPLINT_FOR_SOURCES ${target_sources})
endforeach()
list(REMOVE_DUPLICATES CPPLINT_FOR_SOURCES)
set(custom_filter "")
foreach(filter IN LISTS CPPLINT_CUSTOM_FILTERS)
string(CONCAT custom_filter "${custom_filter}" "," "${filter}")
endforeach()
set(all_output_files "")
foreach(source_file IN LISTS CPPLINT_FOR_SOURCES)
set(exclude FALSE)
foreach(pattern IN LISTS CPPLINT_EXCLUDE_PATTERNS)
if(source_file MATCHES "${pattern}")
set(exclude ON)
break()
endif()
endforeach()
if(exclude)
continue()
endif()
# ignore object libraries
if(NOT EXISTS "${source_file}")
continue()
endif()
file(RELATIVE_PATH source_file_relative "${CMAKE_CURRENT_SOURCE_DIR}" "${source_file}")
set(output_file "${CMAKE_CURRENT_BINARY_DIR}/cpplint/${source_file_relative}.cpplint")
string(REPLACE ".." "__" output_file "${output_file}")
get_filename_component(output_dir "${output_file}" DIRECTORY)
file(MAKE_DIRECTORY "${output_dir}")
add_custom_command(
OUTPUT
"${output_file}"
COMMAND
"${CMAKE_COMMAND}"
-D "CPPLINT_SCRIPT=${IEDevScripts_DIR}/cpplint/cpplint.py"
-D "INPUT_FILE=${source_file}"
-D "OUTPUT_FILE=${output_file}"
-D "WORKING_DIRECTORY=${CMAKE_CURRENT_SOURCE_DIR}"
-D "SKIP_RETURN_CODE=${ENABLE_CPPLINT_REPORT}"
-D "CUSTOM_FILTER=${custom_filter}"
-P "${IEDevScripts_DIR}/cpplint/cpplint_run.cmake"
DEPENDS
"${source_file}"
"${IEDevScripts_DIR}/cpplint/cpplint.py"
"${IEDevScripts_DIR}/cpplint/cpplint_run.cmake"
COMMENT
"[cpplint] ${source_file}"
VERBATIM)
list(APPEND all_output_files "${output_file}")
endforeach()
set(CPPLINT_ALL_OUTPUT_FILES
${CPPLINT_ALL_OUTPUT_FILES} ${all_output_files}
CACHE INTERNAL
"All cpplint output files")
add_custom_target(${TARGET_NAME} ALL
DEPENDS ${all_output_files}
COMMENT "[cpplint] ${TARGET_NAME}")
set_target_properties(${TARGET_NAME} PROPERTIES FOLDER cpplint)
if(CPPLINT_FOR_TARGETS)
foreach(target IN LISTS CPPLINT_FOR_TARGETS)
add_dependencies(${target} ${TARGET_NAME})
endforeach()
endif()
add_dependencies(cpplint_all ${TARGET_NAME})
endfunction()
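For illustration, a module could register its sources for linting as below; the target name, exclude pattern, and filter are hypothetical:
# illustrative sketch, not from the repository
add_cpplint_target(my_plugin_cpplint
    FOR_TARGETS my_plugin
    EXCLUDE_PATTERNS ".*thirdparty.*"
    CUSTOM_FILTERS "-build/include_order")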


@@ -1,25 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
function (Download from to fatal result output sha256)
if((NOT EXISTS "${to}"))
message(STATUS "Downloading from ${from} to ${to} ...")
file(DOWNLOAD ${from} ${to}
TIMEOUT 3600
LOG log
STATUS status
SHOW_PROGRESS
EXPECTED_HASH SHA256=${sha256})
set (${output} ${status} PARENT_SCOPE)
else()
set (${output} 0 PARENT_SCOPE)
endif()
set(${result} "ON" PARENT_SCOPE)
endfunction(Download)
include(download/download_and_apply)
include(download/download_and_extract)


@@ -1,72 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
find_package(Wget QUIET)
function (DownloadAndCheck from to fatal result sha256)
set(status_res "ON")
set(output 1)
get_filename_component(download_dir ${to} DIRECTORY)
if (NOT EXISTS ${download_dir})
file(MAKE_DIRECTORY ${download_dir})
endif()
if(NOT EXISTS "${to}")
if (${from} MATCHES "(http:)|(https:)|(ftp:)")
message(STATUS "Downloading from ${from} to ${to} ...")
find_program(aria2c "aria2c")
if (${aria2c} STREQUAL "aria2c-NOTFOUND")
if (NOT WGET_FOUND)
Download(${from} ${to} ${fatal} ${result} output ${sha256})
list(GET output 0 status_code)
else()
foreach(index RANGE 5)
message(STATUS "${WGET_EXECUTABLE} --no-cache --no-check-certificate
--retry-connrefused --waitretry=1 --read-timeout=20 --timeout=15 --tries=5 ${from}")
execute_process(COMMAND ${WGET_EXECUTABLE} "--no-cache" "--no-check-certificate"
"--retry-connrefused" "--waitretry=1" "--read-timeout=20" "--timeout=15" "--tries=5"
"${from}" "-O" "${to}"
TIMEOUT 2000
RESULT_VARIABLE status_code)
file(SHA256 ${to} CHECKSUM)
if (${CHECKSUM} STREQUAL ${sha256})
break()
endif()
endforeach()
if (NOT ${CHECKSUM} STREQUAL ${sha256})
message(FATAL_ERROR "Hash mismatch:\n"
"expected: ${sha256}\n"
"got: ${CHECKSUM}")
endif()
endif()
else()
message(STATUS "${aria2c} ,*.*.*.* -d ${download_dir} ${from}")
execute_process(COMMAND "${aria2c}" "-s10" "-x10" "--dir=${download_dir}" "${from}"
TIMEOUT 2000
RESULT_VARIABLE status_code)
endif()
if(NOT status_code EQUAL 0)
if (fatal)
message(FATAL_ERROR "fatal error: downloading '${from}' failed
status_code: ${status_code}
status_string: ${status_string}
log: ${log}")
else()
set(status_res "ARCHIVE_DOWNLOAD_FAIL")
message("error: downloading '${from}' failed
status_code: ${status_code}")
endif()
endif()
else()
message(STATUS "Copying from local folder ${from} to ${to} ... ")
file(COPY ${from} DESTINATION ${download_dir})
endif()
endif()
file(REMOVE ${to}.md5)
set(${result} "${status_res}" PARENT_SCOPE)
endfunction(DownloadAndCheck)
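An illustrative call of the checked download; the URL, destination, and checksum are placeholders, and TEMP is assumed to be defined by the including project:
# illustrative sketch with placeholder values
DownloadAndCheck("https://example.com/archive.tar.gz"
                 "${TEMP}/download/archive.tar.gz"
                 TRUE result
                 "<expected-sha256>")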


@@ -1,26 +0,0 @@
# Copyright (C) 2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include(CMakeParseArguments)
function(ie_faster_build TARGET_NAME)
if(NOT ENABLE_FASTER_BUILD)
return()
endif()
cmake_parse_arguments(IE_FASTER_BUILD "UNITY" "" "PCH" ${ARGN})
if(IE_FASTER_BUILD_UNITY)
set_target_properties(${TARGET_NAME}
PROPERTIES
UNITY_BUILD ON
)
endif()
if(IE_FASTER_BUILD_PCH)
target_precompile_headers(${TARGET_NAME}
${IE_FASTER_BUILD_PCH}
)
endif()
endfunction()
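A short hypothetical example: a library opting in to unity builds and a private precompiled header (target and header names are placeholders):
# illustrative sketch; target and header are hypothetical
ie_faster_build(my_library
    UNITY
    PCH PRIVATE "src/precomp.hpp")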


@@ -1,86 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include(options)
include(target_flags)
# FIXME: there are compiler failures with LTO and Cross-Compile toolchains. Disabling for now, but
# this must be addressed in a proper way
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF "LINUX;NOT CMAKE_CROSSCOMPILING; CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
ie_option (OS_FOLDER "create OS dedicated folder in output" OFF)
if(UNIX)
ie_option(USE_BUILD_TYPE_SUBFOLDER "Create dedicated sub-folder per build type for output binaries" ON)
else()
ie_option(USE_BUILD_TYPE_SUBFOLDER "Create dedicated sub-folder per build type for output binaries" OFF)
endif()
# FIXME: ARM cross-compiler generates several "false positive" warnings regarding __builtin_memcpy buffer overflow
ie_dependent_option (TREAT_WARNING_AS_ERROR "Treat build warnings as errors" ON "X86 OR X86_64" OFF)
ie_option (ENABLE_INTEGRITYCHECK "build DLLs with /INTEGRITYCHECK flag" OFF)
ie_option (ENABLE_SANITIZER "enable checking memory errors via AddressSanitizer" OFF)
ie_option (ENABLE_THREAD_SANITIZER "enable checking data races via ThreadSanitizer" OFF)
ie_dependent_option (ENABLE_COVERAGE "enable code coverage" OFF "CMAKE_CXX_COMPILER_ID STREQUAL GNU" OFF)
# Defines CPU capabilities
ie_dependent_option (ENABLE_SSE42 "Enable SSE4.2 optimizations" ON "X86_64 OR X86" OFF)
ie_dependent_option (ENABLE_AVX2 "Enable AVX2 optimizations" ON "X86_64 OR X86" OFF)
ie_dependent_option (ENABLE_AVX512F "Enable AVX512 optimizations" ON "X86_64 OR X86" OFF)
# Type of build, we add this as an explicit option to default it to ON
# FIXME: At this moment, setting this to OFF only builds nGraph as a static library
ie_option (BUILD_SHARED_LIBS "Build as a shared library" ON)
ie_dependent_option (ENABLE_FASTER_BUILD "Enable build features (PCH, UNITY) to speed up build time" OFF "CMAKE_VERSION VERSION_GREATER_EQUAL 3.16" OFF)
if(NOT DEFINED ENABLE_CPPLINT)
ie_dependent_option (ENABLE_CPPLINT "Enable cpplint checks during the build" ON "UNIX;NOT ANDROID" OFF)
endif()
if(NOT DEFINED ENABLE_CPPLINT_REPORT)
ie_dependent_option (ENABLE_CPPLINT_REPORT "Build cpplint report instead of failing the build" OFF "ENABLE_CPPLINT" OFF)
endif()
ie_option (ENABLE_CLANG_FORMAT "Enable clang-format checks during the build" ON)
ie_option (VERBOSE_BUILD "show extra information about the build" OFF)
ie_option (ENABLE_UNSAFE_LOCATIONS "skip MD5 checks for dependencies" OFF)
ie_option (ENABLE_ALTERNATIVE_TEMP "in case of a dependency conflict, use a local copy of the dependency to avoid modifying master" ON)
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "CMAKE_CXX_COMPILER_ID MATCHES ^(Apple)?Clang$; NOT WIN32" OFF)
#
# Check features
#
if(ENABLE_AVX512F)
if ((CMAKE_CXX_COMPILER_ID STREQUAL "MSVC") AND (MSVC_VERSION VERSION_LESS 1920))
# MSVC_VERSION 1920 corresponds to MSVC 2019; AVX512F does not work in MSVC 2017
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
if ((CMAKE_CXX_COMPILER_ID STREQUAL "Clang") AND (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 6))
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
if ((CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang") AND (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 10))
# TBD: clarify which AppleClang version supports avx512
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
if ((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") AND (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9))
set(ENABLE_AVX512F OFF CACHE BOOL "" FORCE)
endif()
endif()
if (VERBOSE_BUILD)
set(CMAKE_VERBOSE_MAKEFILE ON CACHE BOOL "" FORCE)
endif()


@@ -1,27 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(UNIX AND ENABLE_ERROR_HIGHLIGHT)
function(message)
string(ASCII 27 ESC)
set(RESET "${ESC}[m")
set(RED "${ESC}[31;1m")
set(YELLOW "${ESC}[33;1m")
list(GET ARGV 0 MessageType)
list(REMOVE_AT ARGV 0)
foreach(arg IN LISTS ARGV)
set(_msg "${_msg}${arg}")
endforeach()
if(MessageType STREQUAL FATAL_ERROR OR MessageType STREQUAL SEND_ERROR)
_message(${MessageType} "${RED}${_msg}${RESET}")
elseif(MessageType STREQUAL WARNING)
_message(${MessageType} "${YELLOW}${_msg}${RESET}")
else()
_message(${MessageType} "${_msg}")
endif()
endfunction()
endif()
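With ENABLE_ERROR_HIGHLIGHT on, callers keep the usual message() form and only the rendering changes; for example (hypothetical message text):
message(WARNING "my_plugin: falling back to reference implementation")  # rendered in yellow
message(FATAL_ERROR "my_plugin: required dependency not found")         # rendered in red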


@@ -1,45 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Usage: ie_option(<option_variable> "description" <initial value or boolean expression> [IF <condition>])
include (CMakeDependentOption)
macro (ie_option variable description value)
option(${variable} "${description}" ${value})
list(APPEND IE_OPTIONS ${variable})
endmacro()
macro (ie_dependent_option variable description def_value condition fallback_value)
cmake_dependent_option(${variable} "${description}" ${def_value} "${condition}" ${fallback_value})
list(APPEND IE_OPTIONS ${variable})
endmacro()
macro (ie_option_enum variable description value)
set(OPTIONS)
set(ONE_VALUE_ARGS)
set(MULTI_VALUE_ARGS ALLOWED_VALUES)
cmake_parse_arguments(IE_OPTION_ENUM "${OPTIONS}" "${ONE_VALUE_ARGS}" "${MULTI_VALUE_ARGS}" ${ARGN})
if(NOT ${value} IN_LIST IE_OPTION_ENUM_ALLOWED_VALUES)
message(FATAL_ERROR "variable must be one of ${IE_OPTION_ENUM_ALLOWED_VALUES}")
endif()
list(APPEND IE_OPTIONS ${variable})
set(${variable} ${value} CACHE STRING "${description}")
endmacro()
function (print_enabled_features)
if(NOT COMMAND set_ci_build_number)
message(FATAL_ERROR "CI_BUILD_NUMBER is not set yet")
endif()
message(STATUS "Inference Engine enabled features: ")
message(STATUS "")
message(STATUS " CI_BUILD_NUMBER: ${CI_BUILD_NUMBER}")
foreach(_var ${IE_OPTIONS})
message(STATUS " ${_var} = ${${_var}}")
endforeach()
message(STATUS "")
endfunction()
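A brief sketch of declaring options with these macros; the option names are hypothetical:
# illustrative sketch
ie_option(ENABLE_MY_FEATURE "Enable my hypothetical feature" ON)
ie_option_enum(MY_MODE "Mode of my hypothetical feature" OFF
               ALLOWED_VALUES ON OFF COLLECT)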


@@ -1,58 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
include(CPackComponent)
unset(IE_CPACK_COMPONENTS_ALL CACHE)
set(IE_CPACK_IE_DIR deployment_tools/inference_engine)
#
# ie_cpack_set_library_dir()
#
# Set library directory for cpack
#
function(ie_cpack_set_library_dir)
if(WIN32)
set(IE_CPACK_LIBRARY_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH_FOLDER}/${CMAKE_BUILD_TYPE} PARENT_SCOPE)
set(IE_CPACK_RUNTIME_PATH ${IE_CPACK_IE_DIR}/bin/${ARCH_FOLDER}/${CMAKE_BUILD_TYPE} PARENT_SCOPE)
set(IE_CPACK_ARCHIVE_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH_FOLDER}/${CMAKE_BUILD_TYPE} PARENT_SCOPE)
else()
set(IE_CPACK_LIBRARY_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH_FOLDER} PARENT_SCOPE)
set(IE_CPACK_RUNTIME_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH_FOLDER} PARENT_SCOPE)
set(IE_CPACK_ARCHIVE_PATH ${IE_CPACK_IE_DIR}/lib/${ARCH_FOLDER} PARENT_SCOPE)
endif()
endfunction()
ie_cpack_set_library_dir()
#
# ie_cpack_add_component(NAME ...)
#
# Wraps original `cpack_add_component` and adds component to internal IE list
#
macro(ie_cpack_add_component NAME)
list(APPEND IE_CPACK_COMPONENTS_ALL ${NAME})
set(IE_CPACK_COMPONENTS_ALL "${IE_CPACK_COMPONENTS_ALL}" CACHE STRING "" FORCE)
cpack_add_component(${NAME} ${ARGN})
endmacro()
macro(ie_cpack)
set(CPACK_GENERATOR "TGZ")
string(REPLACE "/" "_" CPACK_PACKAGE_VERSION "${CI_BUILD_NUMBER}")
if(WIN32)
set(CPACK_PACKAGE_NAME inference-engine_${CMAKE_BUILD_TYPE})
else()
set(CPACK_PACKAGE_NAME inference-engine)
endif()
set(CPACK_INCLUDE_TOPLEVEL_DIRECTORY OFF)
set(CPACK_ARCHIVE_COMPONENT_INSTALL ON)
set(CPACK_PACKAGE_VENDOR "Intel")
set(CPACK_COMPONENTS_ALL ${ARGN})
set(CPACK_STRIP_FILES ON)
if(OS_FOLDER)
set(CPACK_SYSTEM_NAME "${OS_FOLDER}")
endif()
include(CPack)
endmacro()
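As a sketch, a component could be declared, populated, and packaged as follows; the component and target names are placeholders:
# illustrative sketch; names are hypothetical
ie_cpack_add_component(core DISPLAY_NAME "Core libraries")
install(TARGETS my_library
        RUNTIME DESTINATION ${IE_CPACK_RUNTIME_PATH} COMPONENT core
        LIBRARY DESTINATION ${IE_CPACK_LIBRARY_PATH} COMPONENT core
        ARCHIVE DESTINATION ${IE_CPACK_ARCHIVE_PATH} COMPONENT core)
ie_cpack(${IE_CPACK_COMPONENTS_ALL})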


@@ -1,65 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(file_content
"<ie>
<plugins>
</plugins>
</ie>")
if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${file_content}")
endif()
# get list of plugin files
file(GLOB plugin_files "${IE_CONFIGS_DIR}/*.xml")
function(check_plugin_exists plugin_name outvar)
set(${outvar} OFF PARENT_SCOPE)
# check if config file already has this plugin
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content REGEX "plugin .*=\"")
foreach(line IN LISTS content)
string(REGEX MATCH "location=\"([^\"]*)\"" location "${line}")
get_filename_component(location "${CMAKE_MATCH_1}" NAME_WE)
if("${CMAKE_SHARED_MODULE_PREFIX}${plugin_name}" MATCHES "${location}")
# plugin has already been registered
set(${outvar} ON PARENT_SCOPE)
endif()
endforeach()
endfunction()
set(plugin_files_to_add)
foreach(plugin_file IN LISTS plugin_files)
get_filename_component(plugin_name "${plugin_file}" NAME_WE)
check_plugin_exists("${plugin_name}" exists)
if(NOT exists)
list(APPEND plugin_files_to_add "${plugin_file}")
endif()
endforeach()
# add plugin
set(newContent "")
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
foreach(line IN LISTS content)
if("${line}" MATCHES "</plugins>")
foreach(plugin_file IN LISTS plugin_files_to_add)
file(READ "${plugin_file}" content)
set(newContent "${newContent}
${content}")
endforeach()
endif()
if(newContent)
set(newContent "${newContent}\n${line}")
else()
set(newContent "${line}")
endif()
endforeach()
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -1,35 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
return()
endif()
# remove plugin file
file(REMOVE "${IE_CONFIGS_DIR}/${IE_PLUGIN_NAME}.xml")
# remove plugin
set(newContent "")
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
set(skip_plugin OFF)
foreach(line IN LISTS content)
if("${line}" MATCHES "${IE_PLUGIN_NAME}")
set(skip_plugin ON)
endif()
if(NOT skip_plugin)
if(newContent)
set(newContent "${newContent}\n${line}")
else()
set(newContent "${line}")
endif()
endif()
if("${line}" MATCHES "</plugin>")
set(skip_plugin OFF)
endif()
endforeach()
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -1,49 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include(CMakeParseArguments)
find_host_program(shellcheck_PROGRAM NAMES shellcheck DOC "Path to shellcheck tool")
function(ie_shellcheck_process)
if(NOT shellcheck_PROGRAM)
message(WARNING "shellcheck tool is not found")
return()
endif()
cmake_parse_arguments(IE_SHELLCHECK "" "DIRECTORY" "SKIP" ${ARGN})
set(IE_SHELLCHECK_SCRIPT "${IEDevScripts_DIR}/shellcheck/shellcheck_process.cmake")
file(GLOB_RECURSE scripts "${IE_SHELLCHECK_DIRECTORY}/*.sh")
foreach(script IN LISTS scripts)
# check if we need to skip scripts
unset(skip_script)
foreach(skip_directory IN LISTS IE_SHELLCHECK_SKIP)
if(script MATCHES "${skip_directory}/*")
set(skip_script ON)
endif()
endforeach()
if(skip_script)
continue()
endif()
get_filename_component(dir_name "${script}" DIRECTORY)
string(REPLACE "${IE_SHELLCHECK_DIRECTORY}" "${CMAKE_BINARY_DIR}/shellcheck" output_file ${script})
set(output_file "${output_file}.txt")
get_filename_component(script_name "${script}" NAME)
add_custom_command(OUTPUT ${output_file}
COMMAND ${CMAKE_COMMAND}
-D IE_SHELLCHECK_PROGRAM=${shellcheck_PROGRAM}
-D IE_SHELL_SCRIPT=${script}
-D IE_SHELLCHECK_OUTPUT=${output_file}
-P ${IE_SHELLCHECK_SCRIPT}
DEPENDS ${script} ${IE_SHELLCHECK_SCRIPT}
COMMENT "Check script ${script_name}"
VERBATIM)
list(APPEND outputs ${output_file})
endforeach()
add_custom_target(ie_shellcheck DEPENDS ${outputs})
endfunction()
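For illustration, the check could be registered from the repository root; the skipped directories are placeholders:
# illustrative sketch
ie_shellcheck_process(DIRECTORY "${CMAKE_SOURCE_DIR}"
                      SKIP "${CMAKE_SOURCE_DIR}/thirdparty"
                           "${CMAKE_SOURCE_DIR}/temp")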


@@ -1,27 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT DEFINED IE_SHELLCHECK_PROGRAM)
message(FATAL_ERROR "IE_SHELLCHECK_PROGRAM is not defined")
endif()
if(NOT DEFINED IE_SHELL_SCRIPT)
message(FATAL_ERROR "IE_SHELL_SCRIPT is not defined")
endif()
if(NOT DEFINED IE_SHELLCHECK_OUTPUT)
message(FATAL_ERROR "IE_SHELLCHECK_OUTPUT is not defined")
endif()
set(rules "SC1091,SC2164,SC2162,SC1090")
execute_process(COMMAND ${IE_SHELLCHECK_PROGRAM} --exclude=${rules} ${IE_SHELL_SCRIPT}
OUTPUT_VARIABLE error_message
RESULT_VARIABLE exit_code
OUTPUT_STRIP_TRAILING_WHITESPACE)
file(WRITE "${IE_SHELLCHECK_OUTPUT}" "${error_message}")
if(NOT exit_code EQUAL 0)
message(FATAL_ERROR "${error_message}")
endif()


@@ -1,57 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Target system specific flags
if(CMAKE_CL_64)
set(MSVC64 ON)
endif()
if(WIN32 AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -dumpmachine
OUTPUT_VARIABLE OPENVINO_GCC_TARGET_MACHINE
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(OPENVINO_GCC_TARGET_MACHINE MATCHES "amd64|x86_64|AMD64")
set(MINGW64 ON)
endif()
endif()
macro(_ie_process_msvc_generator_platform flag_name)
# if cmake -A <ARM|ARM64> is passed
if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64")
set(AARCH64 ON)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM")
set(ARM ON)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "x64")
set(X86_64 ON)
elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
set(X86 ON)
else()
set(${flag_name} ON)
endif()
endmacro()
if(MSVC64 OR MINGW64)
_ie_process_msvc_generator_platform(X86_64)
elseif(MINGW OR (MSVC AND NOT CMAKE_CROSSCOMPILING))
_ie_process_msvc_generator_platform(X86)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*)")
set(AARCH64 ON)
endif()
# in case of cross-compilation (or -m32) CMAKE_SYSTEM_PROCESSOR is equal to
# CMAKE_HOST_SYSTEM_PROCESSOR, which is X86_64; patch this until a better solution is found
if(CMAKE_SIZEOF_VOID_P EQUAL 4 AND X86_64)
unset(X86_64)
set(X86 ON)
endif()
if(UNIX AND NOT APPLE)
set(LINUX ON)
endif()


@@ -1,90 +0,0 @@
# Copyright (C) 2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
macro(ie_parse_ci_build_number)
if(CI_BUILD_NUMBER MATCHES "^([0-9]+)\.([0-9]+)\.([0-9]+)\-.*")
set(IE_VERSION_MAJOR ${CMAKE_MATCH_1})
set(IE_VERSION_MINOR ${CMAKE_MATCH_2})
set(IE_VERSION_PATCH ${CMAKE_MATCH_3})
set(IE_VS_VER_HAS_VERSION 1)
else()
set(IE_VS_VER_HAS_VERSION 0)
endif()
endmacro()
ie_parse_ci_build_number()
if(IE_VS_VER_HAS_VERSION)
set(IE_VS_VER_FILEVERSION_QUAD "${IE_VERSION_MAJOR},${IE_VERSION_MINOR},${IE_VERSION_PATCH},0")
set(IE_VS_VER_PRODUCTVERSION_QUAD "${IE_VERSION_MAJOR},${IE_VERSION_MINOR},${IE_VERSION_PATCH},0")
set(IE_VS_VER_FILEVERSION_STR "${IE_VERSION_MAJOR}.${IE_VERSION_MINOR}.${IE_VERSION_PATCH}.0")
endif()
set(IE_VS_VER_COMPANY_NAME_STR "Intel Corporation")
set(IE_VS_VER_PRODUCTVERSION_STR "${CI_BUILD_NUMBER}")
set(IE_VS_VER_PRODUCTNAME_STR "OpenVINO toolkit")
set(IE_VS_VER_COPYRIGHT_STR "Copyright (C) 2018-2020, Intel Corporation")
set(IE_VS_VER_COMMENTS_STR "https://docs.openvinotoolkit.org/")
#
# ie_add_vs_version_file(NAME <name>
# FILEDESCRIPTION <file description>
# [COMPANY_NAME <company name>]
# [FILEVERSION <file version>]
# [INTERNALNAME <internal name>]
# [COPYRIGHT <name>]
# [PRODUCTNAME <name>]
# [PRODUCTVERSION <name>]
# [COMMENTS <name>]
# [FILEVERSION_QUAD <name>]
# [PRODUCTVERSION_QUAD <name>])
#
function(ie_add_vs_version_file)
if(NOT WIN32)
return()
endif()
cmake_parse_arguments(VS_VER "" "COMPANY_NAME;NAME;FILEDESCRIPTION;FILEVERSION;INTERNALNAME;COPYRIGHT;PRODUCTNAME;PRODUCTVERSION;COMMENTS;FILEVERSION_QUAD;PRODUCTVERSION_QUAD" "" ${ARGN})
if(NOT TARGET ${VS_VER_NAME})
message(FATAL_ERROR "${VS_VER_NAME} must define a target")
endif()
macro(_vs_ver_update_variable name)
if(VS_VER_NAME AND DEFINED IE_${VS_VER_NAME}_VS_VER_${name})
set(IE_VS_VER_${name} "${IE_${VS_VER_NAME}_VS_VER_${name}}")
elseif(VS_VER_${name})
set(IE_VS_VER_${name} "${VS_VER_${name}}")
endif()
endmacro()
_vs_ver_update_variable(FILEVERSION_QUAD)
_vs_ver_update_variable(PRODUCTVERSION_QUAD)
macro(_vs_ver_update_str_variable name)
if(VS_VER_NAME AND DEFINED IE_${VS_VER_NAME}_VS_VER_${name})
set(IE_VS_VER_${name}_STR "${IE_${VS_VER_NAME}_VS_VER_${name}}")
elseif(VS_VER_${name})
set(IE_VS_VER_${name}_STR "${VS_VER_${name}}")
endif()
endmacro()
_vs_ver_update_str_variable(COMPANY_NAME)
_vs_ver_update_str_variable(FILEDESCRIPTION)
_vs_ver_update_str_variable(FILEVERSION)
_vs_ver_update_str_variable(INTERNALNAME)
_vs_ver_update_str_variable(COPYRIGHT)
_vs_ver_update_str_variable(PRODUCTNAME)
_vs_ver_update_str_variable(PRODUCTVERSION)
_vs_ver_update_str_variable(COMMENTS)
set(IE_VS_VER_ORIGINALFILENAME_STR "${CMAKE_SHARED_LIBRARY_PREFIX}${VS_VER_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(IE_VS_VER_INTERNALNAME_STR ${VS_VER_NAME})
set(vs_version_output "${CMAKE_CURRENT_BINARY_DIR}/vs_version.rc")
configure_file("${IEDevScripts_DIR}/vs_version/vs_version.rc.in" "${vs_version_output}" @ONLY)
source_group("src" FILES ${vs_version_output})
target_sources(${VS_VER_NAME} PRIVATE ${vs_version_output})
endfunction()
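A minimal hypothetical call attaching version resources to a DLL target (NAME must refer to an existing target; the description is a placeholder):
# illustrative sketch
ie_add_vs_version_file(NAME my_plugin
                       FILEDESCRIPTION "Sample Inference Engine plugin")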


@@ -1,39 +0,0 @@
#include <winver.h>
VS_VERSION_INFO VERSIONINFO
#if @IE_VS_VER_HAS_VERSION@
FILEVERSION @IE_VS_VER_FILEVERSION_QUAD@
PRODUCTVERSION @IE_VS_VER_PRODUCTVERSION_QUAD@
#endif
FILEFLAGSMASK VS_FFI_FILEFLAGSMASK
#ifdef _DEBUG
FILEFLAGS 1
#else
FILEFLAGS 0
#endif
FILEOS VOS__WINDOWS32
FILETYPE VFT_DLL
FILESUBTYPE 0
BEGIN
BLOCK "StringFileInfo"
BEGIN
BLOCK "040904E4"
BEGIN
VALUE "CompanyName", "@IE_VS_VER_COMPANY_NAME_STR@\0"
VALUE "FileDescription", "@IE_VS_VER_FILEDESCRIPTION_STR@\0"
#if @IE_VS_VER_HAS_VERSION@
VALUE "FileVersion", "@IE_VS_VER_FILEVERSION_STR@\0"
#endif
VALUE "InternalName", "@IE_VS_VER_INTERNALNAME_STR@\0"
VALUE "LegalCopyright", "@IE_VS_VER_COPYRIGHT_STR@\0"
VALUE "OriginalFilename", "@IE_VS_VER_ORIGINALFILENAME_STR@\0"
VALUE "ProductName", "@IE_VS_VER_PRODUCTNAME_STR@\0"
VALUE "ProductVersion", "@IE_VS_VER_PRODUCTVERSION_STR@\0"
VALUE "Comments", "@IE_VS_VER_COMMENTS_STR@\0"
END
END
BLOCK "VarFileInfo"
BEGIN
VALUE "Translation", 0x0409, 1252
END
END


@@ -2,9 +2,10 @@
# SPDX-License-Identifier: Apache-2.0
#
include (download/download)
include ("download")
function (resolve_archive_dependency VAR COMPONENT ARCHIVE ARCHIVE_UNIFIED ARCHIVE_WIN ARCHIVE_LIN ARCHIVE_MAC ARCHIVE_ANDROID TARGET_PATH FOLDER ENVIRONMENT)
function (resolve_archive_dependency VAR COMPONENT ARCHIVE ARCHIVE_UNIFIED ARCHIVE_WIN ARCHIVE_LIN ARCHIVE_MAC ARCHIVE_ANDROID TARGET_PATH FOLDER ENVIRONMENT SHA256)
if (ENVIRONMENT AND (DEFINED ${ENVIRONMENT} OR DEFINED ENV{${ENVIRONMENT}}))
set(HAS_ENV "TRUE")
endif()
@@ -12,9 +13,9 @@ function (resolve_archive_dependency VAR COMPONENT ARCHIVE ARCHIVE_UNIFIED ARCHI
if (NOT DEFINED HAS_ENV)
if (ARCHIVE)
#TODO: check whether this is a platform-specific binary with the same name per platform, or whether it resides in a common folder
DownloadAndExtract(${COMPONENT} ${ARCHIVE} ${TARGET_PATH} result_path ${FOLDER} ${SHA256})
DownloadAndExtract(${COMPONENT} ${ARCHIVE} ${TARGET_PATH} result_path ${FOLDER})
else()
DownloadAndExtractPlatformSpecific(${COMPONENT} ${ARCHIVE_UNIFIED} ${ARCHIVE_WIN} ${ARCHIVE_LIN} ${ARCHIVE_MAC} ${ARCHIVE_ANDROID} ${TARGET_PATH} result_path ${FOLDER} ${SHA256})
DownloadAndExtractPlatformSpecific(${COMPONENT} ${ARCHIVE_UNIFIED} ${ARCHIVE_WIN} ${ARCHIVE_LIN} ${ARCHIVE_MAC} ${ARCHIVE_ANDROID} ${TARGET_PATH} result_path ${FOLDER})
endif()
set (${VAR} ${result_path} PARENT_SCOPE)
@@ -53,7 +54,7 @@ endfunction(read_version)
function (RESOLVE_DEPENDENCY NAME_OF_CMAKE_VAR)
list(REMOVE_AT ARGV 0)
set(SUPPORTED_ARGS FOLDER ARCHIVE ARCHIVE_UNIFIED ARCHIVE_WIN ARCHIVE_LIN ARCHIVE_MAC ARCHIVE_ANDROID TARGET_PATH ENVIRONMENT GITHUB_PULL_REQUEST VERSION_REGEX SHA256)
set(SUPPORTED_ARGS FOLDER ARCHIVE ARCHIVE_UNIFIED ARCHIVE_WIN ARCHIVE_LIN ARCHIVE_MAC ARCHIVE_ANDROID TARGET_PATH ENVIRONMENT GITHUB_PULL_REQUEST VERSION_REGEX)
#unnecessary vars
@@ -112,9 +113,6 @@ function (RESOLVE_DEPENDENCY NAME_OF_CMAKE_VAR)
set (FOLDER FALSE)
endif()
if (NOT DEFINED SHA256)
message(FATAL_ERROR "SHA is not specified for: " ${NAME_OF_CMAKE_VAR})
endif()
#for each dependency type have to do separate things
@@ -123,7 +121,7 @@ function (RESOLVE_DEPENDENCY NAME_OF_CMAKE_VAR)
message(FATAL_ERROR "TARGET_PATH should be defined for every dependency")
endif()
resolve_archive_dependency(RESULT ${NAME_OF_CMAKE_VAR} ${ARCHIVE} ${ARCHIVE_UNIFIED} ${ARCHIVE_WIN} ${ARCHIVE_LIN} ${ARCHIVE_MAC} ${ARCHIVE_ANDROID} ${TARGET_PATH} ${FOLDER} ${ENVIRONMENT} ${SHA256})
resolve_archive_dependency(RESULT ${NAME_OF_CMAKE_VAR} ${ARCHIVE} ${ARCHIVE_UNIFIED} ${ARCHIVE_WIN} ${ARCHIVE_LIN} ${ARCHIVE_MAC} ${ARCHIVE_ANDROID} ${TARGET_PATH} ${FOLDER} ${ENVIRONMENT})
set(${NAME_OF_CMAKE_VAR} ${RESULT} PARENT_SCOPE)
if (VERSION_REGEX)
GetNameAndUrlToDownload(archive RELATIVE_URL ${ARCHIVE_UNIFIED} ${ARCHIVE_WIN} ${ARCHIVE_LIN} ${ARCHIVE_MAC} ${ARCHIVE_ANDROID})


@@ -0,0 +1,24 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
function (Download from to fatal result output)
if((NOT EXISTS "${to}"))
message(STATUS "Downloading from ${from} to ${to} ...")
file(DOWNLOAD ${from} ${to}
TIMEOUT 3600
LOG log
STATUS status
SHOW_PROGRESS)
set (${output} ${status} PARENT_SCOPE)
else()
set (${output} 0 PARENT_SCOPE)
endif()
set(${result} "ON" PARENT_SCOPE)
endfunction(Download)
include ("download_and_apply")
include ("download_and_extract")


@@ -2,7 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
#
function (DownloadAndApply URL apply_to sha256)
function (DownloadAndApply URL apply_to)
if (EXISTS ${apply_to})
file(READ ${apply_to} patchFile4Bytes LIMIT 4)
@@ -16,7 +16,7 @@ function (DownloadAndApply URL apply_to sha256)
file(REMOVE ${apply_to})
endif()
DownloadAndCheck(${URL} ${apply_to} TRUE result ${sha256})
DownloadAndCheck(${URL} ${apply_to} TRUE result)
else ()
set (MIGHT_BE_APPLIED 1)
endif()


@@ -0,0 +1,58 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include (FindWget)
function (DownloadAndCheck from to fatal result)
set(status_res "ON")
set(output 1)
get_filename_component(download_dir ${to} DIRECTORY)
if (NOT EXISTS ${download_dir})
file(MAKE_DIRECTORY ${download_dir})
endif()
if(NOT EXISTS "${to}")
if (${from} MATCHES "(http:)|(https:)|(ftp:)")
message(STATUS "Downloading from ${from} to ${to} ...")
find_program(aria2c "aria2c")
if (${aria2c} STREQUAL "aria2c-NOTFOUND")
if (NOT ${WGET_FOUND})
Download(${from} ${to} ${fatal} ${result} output)
list(GET output 0 status_code)
else()
message(STATUS "${WGET_EXECUTABLE} --no-cache ${from}")
execute_process(COMMAND ${WGET_EXECUTABLE} "--no-cache" "--no-check-certificate" "${from}" "-O" "${to}"
TIMEOUT 2000
RESULT_VARIABLE status_code)
endif()
else()
message(STATUS "${aria2c} ,*.*.*.* -d ${download_dir} ${from}")
execute_process(COMMAND "${aria2c}" "-s10" "-x10" "--dir=${download_dir}" "${from}"
TIMEOUT 2000
RESULT_VARIABLE status_code)
endif()
if(NOT status_code EQUAL 0)
if (fatal)
message(FATAL_ERROR "fatal error: downloading '${from}' failed
status_code: ${status_code}
status_string: ${status_string}
log: ${log}")
else()
set(status_res "ARCHIVE_DOWNLOAD_FAIL")
message("error: downloading '${from}' failed
status_code: ${status_code}")
endif()
endif()
else()
message(STATUS "Copying from local folder ${from} to ${to} ... ")
file(COPY ${from} DESTINATION ${download_dir})
endif()
endif()
file(REMOVE ${to}.md5)
set(${result} "${status_res}" PARENT_SCOPE)
endfunction(DownloadAndCheck)


@@ -2,8 +2,8 @@
# SPDX-License-Identifier: Apache-2.0
#
include(download/extract)
include(download/download_and_check)
include ("extract")
include ("download_and_check")
function (GetNameAndUrlToDownload name url archive_name_unified archive_name_win archive_name_lin archive_name_mac archive_name_android)
if (archive_name_unified)
@@ -41,23 +41,22 @@ function (DownloadAndExtractPlatformSpecific
archive_name_android
unpacked_path
result_path
folder
sha256)
folder)
GetNameAndUrlToDownload(archive_name RELATIVE_URL ${archive_name_unified} ${archive_name_win} ${archive_name_lin} ${archive_name_mac} ${archive_name_android} )
if (NOT archive_name OR NOT RELATIVE_URL)
return()
endif()
CheckOrDownloadAndExtract(${component} ${RELATIVE_URL} ${archive_name} ${unpacked_path} result_path2 ${folder} TRUE FALSE TRUE ${sha256})
CheckOrDownloadAndExtract(${component} ${RELATIVE_URL} ${archive_name} ${unpacked_path} result_path2 ${folder} TRUE FALSE TRUE)
set (${result_path} ${result_path2} PARENT_SCOPE)
endfunction(DownloadAndExtractPlatformSpecific)
#download from common folder
function (DownloadAndExtract component archive_name unpacked_path result_path folder sha256)
function (DownloadAndExtract component archive_name unpacked_path result_path folder)
set (RELATIVE_URL "${archive_name}")
set(fattal TRUE)
CheckOrDownloadAndExtract(${component} ${RELATIVE_URL} ${archive_name} ${unpacked_path} result_path2 ${folder} ${fattal} result TRUE ${sha256})
CheckOrDownloadAndExtract(${component} ${RELATIVE_URL} ${archive_name} ${unpacked_path} result_path2 ${folder} ${fattal} result TRUE)
if (NOT ${result})
DownloadAndExtractPlatformSpecific(${component} ${archive_name} ${archive_name} ${archive_name} ${unpacked_path} ${result_path2} ${folder})
@@ -68,9 +67,9 @@ function (DownloadAndExtract component archive_name unpacked_path result_path fo
endfunction(DownloadAndExtract)
function (DownloadAndExtractInternal URL archive_path unpacked_path folder fattal resultExt sha256)
function (DownloadAndExtractInternal URL archive_path unpacked_path folder fattal resultExt)
set (status "ON")
DownloadAndCheck(${URL} ${archive_path} ${fattal} result1 ${sha256})
DownloadAndCheck(${URL} ${archive_path} ${fattal} result1)
if ("${result1}" STREQUAL "ARCHIVE_DOWNLOAD_FAIL")
#check alternative url as well
set (status "OFF")
@@ -106,11 +105,11 @@ function (ExtractWithVersion URL archive_path unpacked_path folder result)
set (${result} ${status} PARENT_SCOPE)
endfunction (ExtractWithVersion)
function (DownloadOrExtractInternal URL archive_path unpacked_path folder fattal resultExt sha256)
function (DownloadOrExtractInternal URL archive_path unpacked_path folder fattal resultExt)
debug_message("checking wether archive downloaded : ${archive_path}")
set (downloadStatus "NOTOK")
if (NOT EXISTS ${archive_path})
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result ${sha256})
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
if (${result})
set (downloadStatus "OK")
endif()
@@ -119,7 +118,7 @@ function (DownloadOrExtractInternal URL archive_path unpacked_path folder fattal
if (ENABLE_UNSAFE_LOCATIONS)
ExtractWithVersion(${URL} ${archive_path} ${unpacked_path} ${folder} result)
if(NOT ${result})
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result ${sha256})
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
if (${result})
set (downloadStatus "OK")
endif()
@@ -127,7 +126,7 @@ function (DownloadOrExtractInternal URL archive_path unpacked_path folder fattal
else()
debug_message("archive found on FS : ${archive_path}, however we cannot check it's checksum and think that it is invalid")
file(REMOVE_RECURSE "${archive_path}")
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result ${sha256})
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
if (${result})
set (downloadStatus "OK")
endif()
@@ -147,9 +146,10 @@ endfunction(DownloadOrExtractInternal)
file(REMOVE ${CMAKE_BINARY_DIR}/dependencies_64.txt)
function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked_path result_path folder fattal resultExt use_alternatives sha256)
function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked_path result_path folder fattal resultExt use_alternatives)
set (archive_path ${TEMP}/download/${archive_name})
set (status "ON")
set (on_master FALSE)
if(DEFINED IE_PATH_TO_DEPS)
set(URL "${IE_PATH_TO_DEPS}/${RELATIVE_URL}")
@@ -169,11 +169,18 @@ function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked
debug_message ("checking that unpacked directory exist: ${unpacked_path}")
if (NOT EXISTS ${unpacked_path})
DownloadOrExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} status ${sha256})
DownloadOrExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} status)
else(NOT EXISTS ${unpacked_path})
#path exists, so check which version was unpacked
set (version_file ${unpacked_path}/ie_dependency.info)
if (DEFINED TEAMCITY_GIT_BRANCH)
if(${TEAMCITY_GIT_BRANCH} STREQUAL "master")
set(on_master TRUE)
debug_message ("On master branch, update data in DL_SDK_TEMP if necessary")
endif()
endif()
if (NOT EXISTS ${version_file} AND NOT ${ENABLE_ALTERNATIVE_TEMP})
clean_message(FATAL_ERROR "error: Dependency doesn't contain version file. Please select actions: \n"
"if you are not sure about your FS dependency - remove it : \n"
@@ -194,24 +201,24 @@ function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked
endif()
if (NOT EXISTS ${version_file} OR NOT ${dependency_url} STREQUAL ${URL})
if (${use_alternatives} AND ALTERNATIVE_PATH)
if (${use_alternatives} AND ALTERNATIVE_PATH AND NOT ${on_master})
#creating alternative_path
string(REPLACE ${TEMP} ${ALTERNATIVE_PATH} unpacked_path ${unpacked_path})
string(REPLACE ${TEMP} ${ALTERNATIVE_PATH} archive_path ${archive_path})
debug_message("dependency different: use local path for fetching updated version: ${alternative_path}")
CheckOrDownloadAndExtract(${component} ${RELATIVE_URL} ${archive_name} ${unpacked_path} ${result_path} ${folder} ${fattal} ${resultExt} FALSE ${sha256})
CheckOrDownloadAndExtract(${component} ${RELATIVE_URL} ${archive_name} ${unpacked_path} ${result_path} ${folder} ${fattal} ${resultExt} FALSE)
else()
debug_message("dependency updated: download it again")
file(REMOVE_RECURSE "${unpacked_path}")
DownloadOrExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} status ${sha256})
DownloadOrExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} status)
endif()
endif ()
endif()
endif()
if (${use_alternatives})
if (${use_alternatives} OR ${on_master})
set (${resultExt} "${status}" PARENT_SCOPE)
set (${result_path} ${unpacked_path} PARENT_SCOPE)
endif()


@@ -2,33 +2,49 @@
# SPDX-License-Identifier: Apache-2.0
#
include (target_flags)
include (options)
# these options are aimed to optimize build time on a development system
if(X86_64)
set(ENABLE_MKL_DNN_DEFAULT ON)
else()
set(ENABLE_MKL_DNN_DEFAULT OFF)
endif()
ie_option (ENABLE_MKL_DNN "MKL-DNN plugin for inference engine" ${ENABLE_MKL_DNN_DEFAULT})
ie_option (ENABLE_TESTS "unit, behavior and functional tests" OFF)
ie_dependent_option (ENABLE_CLDNN "clDnn based plugin for inference engine" ON "X86_64;NOT APPLE;NOT MINGW;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
ie_option (ENABLE_MKL_DNN "MKL-DNN plugin for inference engine" ${ENABLE_MKL_DNN_DEFAULT})
ie_dependent_option (ENABLE_CLDNN "clDnn based plugin for inference engine" ON "WIN32 OR X86_64;NOT APPLE;NOT MINGW" OFF)
# FIXME: there are compiler failures with LTO and Cross-Compile toolchains. Disabling for now, but
# this must be addressed in a proper way
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF "LINUX OR WIN32;NOT CMAKE_CROSSCOMPILING" OFF)
ie_option (OS_FOLDER "create OS dedicated folder in output" OFF)
# FIXME: ARM cross-compiler generates several "false positive" warnings regarding __builtin_memcpy buffer overflow
ie_dependent_option (TREAT_WARNING_AS_ERROR "Treat build warnings as errors" ON "X86 OR X86_64" OFF)
ie_option (ENABLE_INTEGRITYCHECK "build DLLs with /INTEGRITYCHECK flag" OFF)
ie_option (ENABLE_SANITIZER "enable checking memory errors via AddressSanitizer" OFF)
ie_option (ENABLE_THREAD_SANITIZER "enable checking data races via ThreadSanitizer" OFF)
ie_dependent_option (COVERAGE "enable code coverage" OFF "CMAKE_CXX_COMPILER_ID STREQUAL GNU" OFF)
# Define CPU capabilities
ie_dependent_option (ENABLE_SSE42 "Enable SSE4.2 optimizations" ON "X86_64 OR X86" OFF)
ie_dependent_option (ENABLE_AVX2 "Enable AVX2 optimizations" ON "X86_64 OR X86" OFF)
ie_dependent_option (ENABLE_AVX512F "Enable AVX512 optimizations" ON "X86_64 OR X86" OFF)
ie_option (ENABLE_PROFILING_ITT "Build with ITT tracing. Optionally configure pre-built ittnotify library through INTEL_VTUNE_DIR variable." OFF)
ie_option (ENABLE_DOCS "Build docs using Doxygen" OFF)
ie_option(ENABLE_TEMPLATE_PLUGIN "Register template plugin into plugins.xml" OFF)
ie_option_enum(SELECTIVE_BUILD "Enable OpenVINO conditional compilation or statistics collection. \
In case SELECTIVE_BUILD is enabled, the SELECTIVE_BUILD_STAT variable should contain the path to the collected IntelSEAPI statistics. \
Usage: -DSELECTIVE_BUILD=ON -DSELECTIVE_BUILD_STAT=/path/*.csv" OFF
ALLOWED_VALUES ON OFF COLLECT)
ie_option(ENABLE_ERROR_HIGHLIGHT "Highlight errors and warnings during compile time" OFF)
#
# Process options
#
print_enabled_features()
# Documentation build
ie_option (ENABLE_DOCS "build docs using Doxygen" OFF)

cmake/fuzzing.cmake

@@ -0,0 +1,30 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
function(enable_fuzzing)
# Enable [libFuzzer](https://llvm.org/docs/LibFuzzer.html) if supported.
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$" AND NOT WIN32)
# Communicate libfuzzer is enabled
set(WITH_LIBFUZZER ON PARENT_SCOPE)
add_compile_definitions(WITH_LIBFUZZER)
# Enable libfuzzer and code coverage
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${FUZZING_COMPILER_FLAGS}" PARENT_SCOPE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${FUZZING_COMPILER_FLAGS}" PARENT_SCOPE)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${FUZZING_LINKER_FLAGS}" PARENT_SCOPE)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${FUZZING_LINKER_FLAGS}")
endif()
endfunction(enable_fuzzing)
function(add_fuzzer FUZZER_EXE_NAME FUZZER_SOURCES)
add_executable(${FUZZER_EXE_NAME} ${FUZZER_SOURCES})
if(WITH_LIBFUZZER)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=fuzzer" PARENT_SCOPE)
endif()
target_link_libraries(${FUZZER_EXE_NAME} PRIVATE fuzz-testhelper)
endfunction(add_fuzzer)

cmake/options.cmake

@@ -0,0 +1,27 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Usage: ie_option(<option_variable> "description" <initial value or boolean expression> [IF <condition>])
include (CMakeDependentOption)
include (version)
macro (ie_option variable description value)
option(${variable} "${description}" ${value})
list(APPEND IE_OPTIONS ${variable})
endmacro()
macro (ie_dependent_option variable description def_value condition fallback_value)
cmake_dependent_option(${variable} "${description}" ${def_value} "${condition}" ${fallback_value})
list(APPEND IE_OPTIONS ${variable})
endmacro()
function (print_enabled_features)
message(STATUS "Inference Engine enabled features: ")
message(STATUS "")
message(STATUS " CI_BUILD_NUMBER: ${CI_BUILD_NUMBER}")
foreach(_var ${IE_OPTIONS})
message(STATUS " ${_var} = ${${_var}}")
endforeach()
message(STATUS "")
endfunction()


@@ -126,35 +126,44 @@ function(ie_avx512_optimization_flags flags)
endif()
endfunction()
function(ie_arm_neon_optimization_flags flags)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# nothing
elseif(ANDROID)
if(ANDROID_ABI STREQUAL "arm64-v8a")
set(${flags} "-mfpu=neon" PARENT_SCOPE)
elseif(ANDROID_ABI STREQUAL "armeabi-v7a-hard with NEON")
set(${flags} "-march=armv7-a -mfloat-abi=hard -mhard-float -D_NDK_MATH_NO_SOFTFP=1 -mfpu=neon" PARENT_SCOPE)
elseif((ANDROID_ABI STREQUAL "armeabi-v7a with NEON") OR
(ANDROID_ABI STREQUAL "armeabi-v7a" AND
DEFINED CMAKE_ANDROID_ARM_NEON AND CMAKE_ANDROID_ARM_NEON))
set(${flags} "-march=armv7-a -mfloat-abi=softfp -mfpu=neon" PARENT_SCOPE)
endif()
else()
if(AARCH64)
set(${flags} "-O2 -ftree-vectorize" PARENT_SCOPE)
elseif(ARM)
set(${flags} "-mfpu=neon" PARENT_SCOPE)
endif()
endif()
endfunction()
#
# Enables Link Time Optimization compilation
#
macro(ie_enable_lto)
set(CMAKE_INTERPROCEDURAL_OPTIMIZATION_RELEASE ON)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel" AND OFF)
ProcessorCount(N)
if(UNIX)
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -ipo")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -ipo")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -ipo-jobs${N}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} -ipo-jobs${N}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} -ipo-jobs${N}")
else()
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /Qipo")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /Qipo")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} /Qipo-jobs:${N}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /Qipo-jobs:${N}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} /Qipo-jobs:${N}")
endif()
elseif(UNIX)
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -flto")
# LTO causes issues with gcc 4.8.5 during cmake pthread check
if(NOT CMAKE_C_COMPILER_VERSION VERSION_LESS 4.9)
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -flto")
endif()
# modify linker and ar
if(LINUX)
set(CMAKE_AR "gcc-ar")
set(CMAKE_RANLIB "gcc-ranlib")
endif()
elseif(MSVC AND OFF)
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /GL")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} /GL")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} /LTCG:STATUS")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /LTCG:STATUS")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} /LTCG:STATUS")
endif()
endmacro()
#
@@ -167,17 +176,6 @@ macro(ie_add_compiler_flags)
endforeach()
endmacro()
#
# Forced includes certain header file to all target source files
#
function(ov_force_include target scope header_file)
if(MSVC)
target_compile_options(${target} ${scope} /FI"${header_file}")
else()
target_compile_options(${target} ${scope} -include "${header_file}")
endif()
endfunction()
#
# Compilation and linker flags
#
@@ -197,15 +195,15 @@ if(NOT DEFINED CMAKE_CXX_STANDARD)
endif()
if(ENABLE_COVERAGE)
ie_add_compiler_flags(--coverage)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} --coverage")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} --coverage")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --coverage")
endif()
if(NOT CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
ie_add_compiler_flags(-fsigned-char)
if(NOT MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsigned-char")
endif()
# Honor visibility properties for all target types
set(CMAKE_POLICY_DEFAULT_CMP0063 NEW)
set(CMAKE_CXX_VISIBILITY_PRESET hidden)
set(CMAKE_C_VISIBILITY_PRESET hidden)
@@ -229,7 +227,6 @@ if(WIN32)
# Compiler specific flags
ie_add_compiler_flags(/bigobj)
ie_add_compiler_flags(/MP)
# Disable noisy warnings
@@ -254,14 +251,10 @@ if(WIN32)
ie_add_compiler_flags(/Qdiag-disable:161,177,556,1744,1879,2586,2651,3180,11075,15335)
endif()
# Debug information flags, by default CMake adds /Zi option
# but provides no way to specify CMAKE_COMPILE_PDB_NAME on root level
# In order to avoid issues with ninja we are replacing default flag instead of having two of them
# and observing warning D9025 about flag override
string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO}")
# Debug information flags
set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /Z7")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /Z7")
else()
# TODO: enable for C sources as well
# ie_add_compiler_flags(-Werror)
@@ -273,7 +266,6 @@ else()
ie_add_compiler_flags(-fdiagnostics-show-option)
ie_add_compiler_flags(-Wundef)
ie_add_compiler_flags(-Wreturn-type)
ie_add_compiler_flags(-Wunused-variable)
# Disable noisy warnings


@@ -4,14 +4,6 @@
include(CheckCXXCompilerFlag)
if (ENABLE_SANITIZER OR ENABLE_THREAD_SANITIZER)
# This is a workaround for https://gitlab.kitware.com/cmake/cmake/-/issues/16609.
# It ensures pthread is searched for without ASAN linking.
# The line below must come before adding -fsanitize=address or -fsanitize=thread to
# build options for the trick to work.
find_package(Threads REQUIRED)
endif()
if (ENABLE_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "-g -fsanitize=address -fno-omit-frame-pointer")
CHECK_CXX_COMPILER_FLAG("-fsanitize-recover=address" SANITIZE_RECOVER_SUPPORTED)

cmake/target_flags.cmake

@@ -0,0 +1,42 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Target system specific flags
if(CMAKE_CL_64)
set(MSVC64 ON)
endif()
if(WIN32 AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
execute_process(COMMAND ${CMAKE_CXX_COMPILER} -dumpmachine
OUTPUT_VARIABLE OPENVINO_GCC_TARGET_MACHINE
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(OPENVINO_GCC_TARGET_MACHINE MATCHES "amd64|x86_64|AMD64")
set(MINGW64 ON)
endif()
endif()
if(MSVC64 OR MINGW64)
set(X86_64 ON)
elseif(MINGW OR (MSVC AND NOT CMAKE_CROSSCOMPILING))
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
set(X86_64 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
set(X86 ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
set(ARM ON)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*)")
set(AARCH64 ON)
endif()
# in case of cross-compilation (or -m32) CMAKE_SYSTEM_PROCESSOR is equal to
# CMAKE_HOST_SYSTEM_PROCESSOR, which is X86_64; patch this until a better solution is found
if(CMAKE_SIZEOF_VOID_P EQUAL 4 AND X86_64)
unset(X86_64)
set(X86 ON)
endif()
if(UNIX AND NOT APPLE)
set(LINUX ON)
endif()


@@ -1,24 +0,0 @@
# Copyright (C) 2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(CMAKE_CXX_FLAGS_INIT "-m32")
set(CMAKE_C_FLAGS_INIT "-m32")
set(CMAKE_SHARED_LINKER_FLAGS_INIT "-m32")
set(CMAKE_MODULE_LINKER_FLAGS_INIT "-m32")
set(CMAKE_EXE_LINKER_FLAGS_INIT "-m32")
# Hints for OpenVINO
macro(_set_if_not_defined var val)
if(NOT DEFINED ${var})
set(${var} ${val} CACHE BOOL "" FORCE)
endif()
endmacro()
# need the 32-bit version of libusb
_set_if_not_defined(ENABLE_VPU OFF)
# fix conversion from uint64_t / int64_t to size_t
_set_if_not_defined(NGRAPH_ONNX_IMPORT_ENABLE OFF)


@@ -1,39 +0,0 @@
# Copyright (C) 2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
#
# Flags for 3rd party projects
#
set(use_static_runtime ON)
if(use_static_runtime)
foreach(lang C CXX)
foreach(build_type "" "_DEBUG" "_MINSIZEREL" "_RELEASE" "_RELWITHDEBINFO")
set(flag_var "CMAKE_${lang}_FLAGS${build_type}")
string(REPLACE "/MD" "/MT" ${flag_var} "${${flag_var}}")
endforeach()
endforeach()
endif()
function(onecoreuap_set_runtime var)
set(${var} ${use_static_runtime} CACHE BOOL "" FORCE)
endfunction()
# ONNX
onecoreuap_set_runtime(ONNX_USE_MSVC_STATIC_RUNTIME)
# pugixml
onecoreuap_set_runtime(STATIC_CRT)
# protobuf
onecoreuap_set_runtime(protobuf_MSVC_STATIC_RUNTIME)
# clDNN
onecoreuap_set_runtime(CLDNN__COMPILE_LINK_USE_STATIC_RUNTIME)
# google-test
if(use_static_runtime)
set(gtest_force_shared_crt OFF CACHE BOOL "" FORCE)
else()
set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
endif()
unset(use_static_runtime)


@@ -1,74 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
#
# Define CMAKE_SYSTEM_VERSION if not defined
#
if(NOT DEFINED CMAKE_SYSTEM_VERSION)
# Sometimes CMAKE_HOST_SYSTEM_VERSION has the form 10.x.y while we need the
# form 10.x.y.z. Adding .0 at the end fixes the issue
if(CMAKE_HOST_SYSTEM_VERSION MATCHES "^10\.0\.[0-9]+$")
set(CMAKE_SYSTEM_VERSION "${CMAKE_HOST_SYSTEM_VERSION}.0")
else()
set(CMAKE_SYSTEM_VERSION "${CMAKE_HOST_SYSTEM_VERSION}")
endif()
endif()
if(NOT DEFINED CMAKE_SYSTEM_PROCESSOR)
set(CMAKE_SYSTEM_PROCESSOR ${CMAKE_HOST_SYSTEM_PROCESSOR})
endif()
message(STATUS "Building for Windows OneCore compliance (using OneCoreUap.lib, ${CMAKE_SYSTEM_VERSION})")
#
# OneCore flags
#
set(_onecoreuap_arch "x64")
if(CMAKE_GENERATOR_PLATFORM)
set(_onecoreuap_arch ${CMAKE_GENERATOR_PLATFORM})
endif()
if(_onecoreuap_arch STREQUAL "x64")
# Forcefully make VS search for C++ libraries in these folders prior to other C++ standard library locations.
add_link_options("/LIBPATH:\"\$\(VC_LibraryPath_VC_x64_OneCore\)\"")
set(CMAKE_C_STANDARD_LIBRARIES "\$\(UCRTContentRoot\)lib/\$\(TargetUniversalCRTVersion\)/um/\$\(Platform\)/OneCoreUap.lib" CACHE STRING "" FORCE)
set(CMAKE_CXX_STANDARD_LIBRARIES "\$\(UCRTContentRoot\)lib/\$\(TargetUniversalCRTVersion\)/um/\$\(Platform\)/OneCoreUap.lib" CACHE STRING "" FORCE)
elseif(_onecoreuap_arch STREQUAL "X86")
add_link_options("/LIBPATH:\"\$\(VCInstallDir\)lib/onecore\"")
add_link_options("/LIBPATH:\"\$\(VC_LibraryPath_VC_x86_OneCore\)\"")
set(CMAKE_C_STANDARD_LIBRARIES "\$\(UCRTContentRoot\)lib/\$\(TargetUniversalCRTVersion\)/um/x86/OneCoreUap.lib" CACHE STRING "" FORCE)
set(CMAKE_CXX_STANDARD_LIBRARIES "\$\(UCRTContentRoot\)lib/\$\(TargetUniversalCRTVersion\)/um/x86/OneCoreUap.lib" CACHE STRING "" FORCE)
else()
message(FATAL_ERROR "Unsupported architecture ${_onecoreuap_arch}. Only X86 or X86_64 are supported")
endif()
unset(_onecoreuap_arch)
# compile flags
set(includes "/I\"\$\(UniversalCRT_IncludePath\)\"")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${includes}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${includes}")
unset(includes)
# linker flags
foreach(lib kernel32 user32 advapi32 ole32 mscoree combase)
set(linker_flags "/NODEFAULTLIB:${lib}.lib ${linker_flags}")
endforeach()
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${linker_flags}")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} ${linker_flags}")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${linker_flags}")
unset(linker_flags)
#
# Static runtime to overcome apiValidator tool restrictions
#
include("${CMAKE_CURRENT_LIST_DIR}/mt.runtime.win32.toolchain.cmake")


@@ -1,38 +0,0 @@
# Copyright (C) 2018-2020 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(CMAKE_SYSTEM_NAME WindowsStore)
#
# Define CMAKE_SYSTEM_VERSION if not defined
#
if(NOT DEFINED CMAKE_SYSTEM_VERSION)
# Sometimes CMAKE_HOST_SYSTEM_VERSION has the form 10.x.y while we need the
# form 10.x.y.z. Adding .0 at the end fixes the issue
if(CMAKE_HOST_SYSTEM_VERSION MATCHES "^10\.0\.[0-9]+$")
set(CMAKE_SYSTEM_VERSION "${CMAKE_HOST_SYSTEM_VERSION}.0")
else()
set(CMAKE_SYSTEM_VERSION "${CMAKE_HOST_SYSTEM_VERSION}")
endif()
endif()
if(NOT DEFINED CMAKE_SYSTEM_PROCESSOR)
set(CMAKE_SYSTEM_PROCESSOR ${CMAKE_HOST_SYSTEM_PROCESSOR})
endif()
#
# Compilation flags
#
file(WRITE "${CMAKE_CURRENT_BINARY_DIR}/src/uwp.hpp"
"#ifdef WINAPI_FAMILY\n"
"#undef WINAPI_FAMILY\n"
"#define WINAPI_FAMILY WINAPI_FAMILY_DESKTOP_APP\n"
"#endif\n")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /FI\"${CMAKE_CURRENT_BINARY_DIR}/src/uwp.hpp\"")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /FI\"${CMAKE_CURRENT_BINARY_DIR}/src/uwp.hpp\"")
set(CMAKE_VS_GLOBALS "WindowsTargetPlatformMinVersion=${CMAKE_SYSTEM_VERSION}")


@@ -3,24 +3,18 @@
#
function (branchName VAR)
if(NOT DEFINED repo_root)
message(FATAL_ERROR "repo_root is not defined")
endif()
execute_process(
COMMAND git rev-parse --abbrev-ref HEAD
WORKING_DIRECTORY ${repo_root}
WORKING_DIRECTORY ${OpenVINO_MAIN_SOURCE_DIR}
OUTPUT_VARIABLE GIT_BRANCH
OUTPUT_STRIP_TRAILING_WHITESPACE)
set (${VAR} ${GIT_BRANCH} PARENT_SCOPE)
endfunction()
function (commitHash VAR)
if(NOT DEFINED repo_root)
message(FATAL_ERROR "repo_root is not defined")
endif()
execute_process(
COMMAND git rev-parse HEAD
WORKING_DIRECTORY ${repo_root}
WORKING_DIRECTORY ${OpenVINO_MAIN_SOURCE_DIR}
OUTPUT_VARIABLE GIT_COMMIT_HASH
OUTPUT_STRIP_TRAILING_WHITESPACE)
set (${VAR} ${GIT_COMMIT_HASH} PARENT_SCOPE)
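
For illustration, a hypothetical caller of these helpers (the guard above requires `repo_root` to be set first):

```cmake
# Illustration only: embed version-control info into a build banner.
set(repo_root "${CMAKE_SOURCE_DIR}")
branchName(GIT_BRANCH)
commitHash(GIT_COMMIT)
message(STATUS "Building ${GIT_BRANCH} @ ${GIT_COMMIT}")
```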


@@ -3,53 +3,33 @@
#
if(NOT ENABLE_DOCKER)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wall)
endif()
add_subdirectory(snippets)
add_subdirectory(examples)
# Detect nGraph
find_package(ngraph QUIET
PATHS "${CMAKE_BINARY_DIR}/ngraph"
NO_DEFAULT_PATH)
find_package(ngraph QUIET)
if(NOT ngraph_FOUND)
set(ngraph_DIR ${CMAKE_BINARY_DIR}/ngraph)
endif()
# Detect InferenceEngine
find_package(InferenceEngine QUIET
PATHS "${CMAKE_BINARY_DIR}"
NO_DEFAULT_PATH)
find_package(InferenceEngine QUIET)
if(NOT InferenceEngine_FOUND)
set(InferenceEngine_DIR ${CMAKE_BINARY_DIR})
endif()
if (NGRAPH_ONNX_IMPORT_ENABLE)
add_subdirectory(onnx_custom_op)
endif()
add_subdirectory(template_extension)
set(all_docs_targets
ie_docs_snippets
ie_docs_examples
template_extension
templatePlugin TemplateBehaviorTests TemplateFunctionalTests)
foreach(target_name IN LISTS all_docs_targets)
if (TARGET ${target_name})
set_target_properties(${target_name} PROPERTIES FOLDER docs)
if(WIN32)
set_target_properties(${target_name} PROPERTIES COMPILE_PDB_NAME ${target_name})
endif()
endif()
endforeach()
endif()
set(LINKCHECKER_PY "" CACHE FILEPATH "Path to linkchecker.py for documentation check")
set(OMZ_DOCS_DIR "" CACHE PATH "Path to open_model_zoo documentation")
set(WORKBENCH_DOCS_DIR "" CACHE PATH "Path to workbench documentation")
set(POT_DOCS_DIR "" CACHE PATH "Path to post-training-compression-tool documentation")
set(GST_DOCS_DIR "" CACHE PATH "Path to gst-video-analytics documentation")
function(build_docs)
find_package(Doxygen REQUIRED dot)
find_package(Python3 COMPONENTS Interpreter)
@@ -63,155 +43,84 @@ function(build_docs)
message(FATAL_ERROR "Python3 is required to build the documentation")
endif()
execute_process(
COMMAND ${Python3_EXECUTABLE} -m pip show lxml
RESULT_VARIABLE PIP_EXIT_CODE
OUTPUT_QUIET
)
if (NOT ${PIP_EXIT_CODE} EQUAL 0)
message(FATAL_ERROR "lxml package is not installed. Please use \"pip install lxml\".")
endif()
if(NOT LATEX_FOUND)
message(FATAL_ERROR "LATEX is required to build the documentation")
endif()
set(DOCS_BUILD_DIR "${CMAKE_CURRENT_BINARY_DIR}")
set(DOCS_BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}")
set(DOXYGEN_DIR "${OpenVINO_MAIN_SOURCE_DIR}/docs/doxygen")
set(IE_SOURCE_DIR "${OpenVINO_MAIN_SOURCE_DIR}/inference-engine")
set(PYTHON_API_IN "${IE_SOURCE_DIR}/ie_bridges/python/src/openvino/inference_engine/ie_api.pyx")
set(PYTHON_API_OUT "${DOCS_BUILD_DIR}/python_api/ie_api.pyx")
set(PYTHON_API_OUT "${DOCS_BINARY_DIR}/python_api/ie_api.pyx")
set(C_API "${IE_SOURCE_DIR}/ie_bridges/c/include")
set(PLUGIN_API_DIR "${DOCS_BUILD_DIR}/IE_PLUGIN_DG")
set(NGRAPH_DIR "${OpenVINO_MAIN_SOURCE_DIR}/ngraph")
set(NGRAPH_PY_DIR "${NGRAPH_DIR}/python/src/ngraph/")
set(NGRAPH_CPP_DIR "${NGRAPH_DIR}/core/include/" "${NGRAPH_DIR}/frontend/onnx_import/include")
set(PLUGIN_API_DIR "${DOCS_BINARY_DIR}/IE_PLUGIN_DG")
# Preprocessing scripts
set(DOXY_MD_FILTER "${DOXYGEN_DIR}/doxy_md_filter.py")
set(DOXY_LAYOUT_SCRIPT "${DOXYGEN_DIR}/build_main_layout.py")
set(DOXY_LOG_SCRIPT "${DOXYGEN_DIR}/log.py")
set(PYX_FILTER "${DOXYGEN_DIR}/pyx_filter.py")
# assets dir
set(ASSETS_DIR "${DOXYGEN_DIR}/assets")
# header and footer
set(HEADER_SOURCE "${DOXYGEN_DIR}/header.html.in")
set(FOOTER_SOURCE "${DOXYGEN_DIR}/footer.html.in")
set(HEADER_BUILD "${DOCS_BUILD_DIR}/header.html")
set(FOOTER_BUILD "${DOCS_BUILD_DIR}/footer.html")
configure_file(${HEADER_SOURCE} ${HEADER_BUILD} @ONLY)
configure_file(${FOOTER_SOURCE} ${FOOTER_BUILD} @ONLY)
file(GLOB_RECURSE doc_source_files
LIST_DIRECTORIES true RELATIVE ${OpenVINO_MAIN_SOURCE_DIR}
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.svg"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.svg")
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.jpg")
configure_file(${PYTHON_API_IN} ${PYTHON_API_OUT} @ONLY)
set(NGRAPH_CPP_CONFIG_SOURCE "${DOXYGEN_DIR}/ngraph_cpp_api.config")
set(NGRAPH_PY_CONFIG_SOURCE "${DOXYGEN_DIR}/ngraph_py_api.config")
set(IE_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_docs.config")
set(C_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_c_api.config")
set(PY_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_py_api.config")
set(PLUGIN_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_plugin_api.config")
set(NGRAPH_CPP_CONFIG_BUILD "${DOCS_BUILD_DIR}/ngraph_cpp_api.config")
set(NGRAPH_PY_CONFIG_BUILD "${DOCS_BUILD_DIR}/ngraph_py_api.config")
set(IE_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_docs.config")
set(C_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_c_api.config")
set(PY_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_py_api.config")
set(PLUGIN_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_plugin_api.config")
set(IE_CONFIG_BINARY "${DOCS_BINARY_DIR}/ie_docs.config")
set(C_CONFIG_BINARY "${DOCS_BINARY_DIR}/ie_c_api.config")
set(PY_CONFIG_BINARY "${DOCS_BINARY_DIR}/ie_py_api.config")
set(PLUGIN_CONFIG_BINARY "${DOCS_BINARY_DIR}/ie_plugin_api.config")
set(NGRAPH_CPP_LAYOUT_SOURCE "${DOXYGEN_DIR}/ngraph_cpp_api.xml")
set(NGRAPH_PY_LAYOUT_SOURCE "${DOXYGEN_DIR}/ngraph_py_api.xml")
set(IE_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_docs.xml")
set(OPENVINO_LAYOUT_SOURCE "${DOXYGEN_DIR}/openvino_docs.xml")
set(C_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_c_api.xml")
set(PY_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_py_api.xml")
set(PLUGIN_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_plugin_api.xml")
set(NGRAPH_CPP_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ngraph_cpp_api.xml")
set(NGRAPH_PY_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ngraph_py_api.xml")
set(IE_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_docs.xml")
set(OPENVINO_LAYOUT_BUILD "${DOCS_BUILD_DIR}/openvino_docs.xml")
set(C_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_c_api.xml")
set(PY_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_py_api.xml")
set(PLUGIN_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_plugin_api.xml")
# out dirs
set(OUTPUT_DIRECTORY "${DOCS_BUILD_DIR}/html")
set(IE_OUTPUT "${OUTPUT_DIRECTORY}")
set(C_OUTPUT "${OUTPUT_DIRECTORY}/ie_c_api")
set(PY_OUTPUT "${OUTPUT_DIRECTORY}/ie_python_api")
set(PLUGIN_OUTPUT "${OUTPUT_DIRECTORY}/ie_plugin_api")
set(NGRAPH_CPP_OUTPUT "${OUTPUT_DIRECTORY}/ngraph_cpp_api")
set(NGRAPH_PY_OUTPUT "${OUTPUT_DIRECTORY}/ngraph_python_api")
set(IE_LAYOUT_BINARY "${DOCS_BINARY_DIR}/ie_docs.xml")
set(C_LAYOUT_BINARY "${DOCS_BINARY_DIR}/ie_c_api.xml")
set(PY_LAYOUT_BINARY "${DOCS_BINARY_DIR}/ie_py_api.xml")
set(PLUGIN_LAYOUT_BINARY "${DOCS_BINARY_DIR}/ie_plugin_api.xml")
# Tables of contents
configure_file(${NGRAPH_CPP_LAYOUT_SOURCE} ${NGRAPH_CPP_LAYOUT_BUILD} @ONLY)
configure_file(${NGRAPH_PY_LAYOUT_SOURCE} ${NGRAPH_PY_LAYOUT_BUILD} @ONLY)
configure_file(${IE_LAYOUT_SOURCE} ${IE_LAYOUT_BUILD} @ONLY)
configure_file(${OPENVINO_LAYOUT_SOURCE} ${OPENVINO_LAYOUT_BUILD} @ONLY)
configure_file(${C_LAYOUT_SOURCE} ${C_LAYOUT_BUILD} @ONLY)
configure_file(${PY_LAYOUT_SOURCE} ${PY_LAYOUT_BUILD} @ONLY)
configure_file(${PLUGIN_LAYOUT_SOURCE} ${PLUGIN_LAYOUT_BUILD} @ONLY)
configure_file(${IE_LAYOUT_SOURCE} ${IE_LAYOUT_BINARY} @ONLY)
configure_file(${C_LAYOUT_SOURCE} ${C_LAYOUT_BINARY} @ONLY)
configure_file(${PY_LAYOUT_SOURCE} ${PY_LAYOUT_BINARY} @ONLY)
configure_file(${PLUGIN_LAYOUT_SOURCE} ${PLUGIN_LAYOUT_BINARY} @ONLY)
# Doxygen config files
configure_file(${NGRAPH_CPP_CONFIG_SOURCE} ${NGRAPH_CPP_CONFIG_BUILD} @ONLY)
configure_file(${NGRAPH_PY_CONFIG_SOURCE} ${NGRAPH_PY_CONFIG_BUILD} @ONLY)
configure_file(${IE_CONFIG_SOURCE} ${IE_CONFIG_BUILD} @ONLY)
configure_file(${C_CONFIG_SOURCE} ${C_CONFIG_BUILD} @ONLY)
configure_file(${PY_CONFIG_SOURCE} ${PY_CONFIG_BUILD} @ONLY)
configure_file(${PLUGIN_CONFIG_SOURCE} ${PLUGIN_CONFIG_BUILD} @ONLY)
configure_file(${IE_CONFIG_SOURCE} ${IE_CONFIG_BINARY} @ONLY)
configure_file(${C_CONFIG_SOURCE} ${C_CONFIG_BINARY} @ONLY)
configure_file(${PY_CONFIG_SOURCE} ${PY_CONFIG_BINARY} @ONLY)
configure_file(${PLUGIN_CONFIG_SOURCE} ${PLUGIN_CONFIG_BINARY} @ONLY)
# Preprocessing scripts
set(DOXY_MD_FILTER "${DOXYGEN_DIR}/doxy_md_filter.py")
set(PYX_FILTER "${DOXYGEN_DIR}/pyx_filter.py")
# nGraph C++ API
add_custom_target(ngraph_cpp_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${NGRAPH_CPP_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${NGRAPH_CPP_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
VERBATIM)
# nGraph Python API
add_custom_target(ngraph_py_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${NGRAPH_PY_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${NGRAPH_PY_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
VERBATIM)
# C API
add_custom_target(c_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${C_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${C_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMAND ${DOXYGEN_EXECUTABLE} ${C_CONFIG_BINARY}
WORKING_DIRECTORY ${DOCS_BINARY_DIR}
COMMENT "Generating C API Reference"
VERBATIM)
# Python API
add_custom_target(py_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${PY_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${PY_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMAND ${DOXYGEN_EXECUTABLE} ${PY_CONFIG_BINARY}
WORKING_DIRECTORY ${DOCS_BINARY_DIR}
COMMENT "Generating Python API Reference"
VERBATIM)
@@ -220,158 +129,49 @@ function(build_docs)
COMMAND ${Python3_EXECUTABLE} ${PYX_FILTER} ${PYTHON_API_OUT}
COMMENT "Pre-process Python API")
# Plugin API
add_custom_target(plugin_api
COMMAND ${DOXYGEN_EXECUTABLE} ${PLUGIN_CONFIG_BINARY}
WORKING_DIRECTORY ${DOCS_BINARY_DIR}
COMMENT "Generating Plugin API Reference"
VERBATIM)
# Preprocess docs
add_custom_target(preprocess_docs
COMMENT "Pre-process docs"
VERBATIM)
# ovino doc files
file(GLOB_RECURSE ovino_doc_files
LIST_DIRECTORIES true RELATIVE ${OpenVINO_MAIN_SOURCE_DIR}
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.jpg")
foreach(source_file ${ovino_doc_files})
foreach(source_file ${doc_source_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${OpenVINO_MAIN_SOURCE_DIR}/${source_file}" "${DOCS_BUILD_DIR}/openvino/${source_file}")
"${OpenVINO_MAIN_SOURCE_DIR}/${source_file}" "${DOCS_BINARY_DIR}/${source_file}")
endforeach()
# omz doc files
if(EXISTS "${OMZ_DOCS_DIR}")
get_filename_component(OMZ_DOCS_DIR "${OMZ_DOCS_DIR}" ABSOLUTE)
file(GLOB_RECURSE omz_doc_files
LIST_DIRECTORIES true RELATIVE ${OMZ_DOCS_DIR}
"${OMZ_DOCS_DIR}/*.md"
"${OMZ_DOCS_DIR}/*.png"
"${OMZ_DOCS_DIR}/*.gif"
"${OMZ_DOCS_DIR}/*.jpg")
foreach(source_file ${omz_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${OMZ_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/omz/${source_file}")
endforeach()
configure_file("${OMZ_DOCS_DIR}/omz_docs.xml" "${DOCS_BUILD_DIR}/omz_docs.xml" @ONLY)
endif()
# workbench doc files
if(EXISTS "${WORKBENCH_DOCS_DIR}")
get_filename_component(WORKBENCH_DOCS_DIR "${WORKBENCH_DOCS_DIR}" ABSOLUTE)
file(GLOB_RECURSE workbench_doc_files
LIST_DIRECTORIES true RELATIVE ${WORKBENCH_DOCS_DIR}
"${WORKBENCH_DOCS_DIR}/*.md"
"${WORKBENCH_DOCS_DIR}/*.png"
"${WORKBENCH_DOCS_DIR}/*.gif"
"${WORKBENCH_DOCS_DIR}/*.jpg")
foreach(source_file ${workbench_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${WORKBENCH_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/workbench/${source_file}")
endforeach()
configure_file("${WORKBENCH_DOCS_DIR}/docs/Workbench_DG/workbench_docs.xml" "${DOCS_BUILD_DIR}/workbench_docs.xml" @ONLY)
endif()
# pot doc files
if(EXISTS "${POT_DOCS_DIR}")
get_filename_component(POT_DOCS_DIR "${POT_DOCS_DIR}" ABSOLUTE)
file(GLOB_RECURSE pot_doc_files
LIST_DIRECTORIES true RELATIVE ${POT_DOCS_DIR}
"${POT_DOCS_DIR}/*.md"
"${POT_DOCS_DIR}/*.png"
"${POT_DOCS_DIR}/*.gif"
"${POT_DOCS_DIR}/*.jpg")
foreach(source_file ${pot_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${POT_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/pot/${source_file}")
endforeach()
configure_file("${POT_DOCS_DIR}/docs/pot_docs.xml" "${DOCS_BUILD_DIR}/pot_docs.xml" @ONLY)
endif()
# gst doc files
if(EXISTS "${GST_DOCS_DIR}")
get_filename_component(GST_DOCS_DIR "${GST_DOCS_DIR}" ABSOLUTE)
file(GLOB_RECURSE gst_doc_files
LIST_DIRECTORIES true RELATIVE ${GST_DOCS_DIR}
"${GST_DOCS_DIR}/*.md"
"${GST_DOCS_DIR}/*.png"
"${GST_DOCS_DIR}/*.gif"
"${GST_DOCS_DIR}/*.jpg")
foreach(source_file ${gst_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${GST_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/gst/${source_file}")
endforeach()
endif()
add_custom_command(TARGET preprocess_docs
PRE_BUILD
${commands}
COMMAND ${Python3_EXECUTABLE} ${DOXY_LAYOUT_SCRIPT} --openvino ${OPENVINO_LAYOUT_BUILD}
COMMAND ${Python3_EXECUTABLE} ${DOXY_MD_FILTER} ${DOCS_BUILD_DIR}
COMMAND ${Python3_EXECUTABLE} ${DOXY_MD_FILTER} ${DOCS_BINARY_DIR}
COMMENT "Pre-process markdown and image links")
# IE dev guide and C++ API
add_custom_target(ie_docs
DEPENDS ngraph_cpp_api preprocess_docs
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${IE_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${IE_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
VERBATIM)
# Plugin API
add_custom_target(plugin_api
DEPENDS ngraph_cpp_api ie_docs
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${PLUGIN_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${PLUGIN_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Generating Plugin API Reference"
DEPENDS preprocess_docs
COMMAND ${DOXYGEN_EXECUTABLE} ${IE_CONFIG_BINARY}
WORKING_DIRECTORY ${DOCS_BINARY_DIR}
VERBATIM)
# Umbrella OpenVINO target
add_custom_target(openvino_docs
DEPENDS ngraph_cpp_api ngraph_py_api c_api py_api ie_docs plugin_api
DEPENDS c_api py_api ie_docs plugin_api
COMMENT "Generating OpenVINO documentation"
VERBATIM)
set_target_properties(openvino_docs ie_docs c_api py_api preprocess_docs plugin_api
ngraph_py_api ngraph_cpp_api
PROPERTIES FOLDER docs)
add_custom_command(TARGET openvino_docs
POST_BUILD
COMMAND ${Python3_EXECUTABLE} ${DOXY_LOG_SCRIPT} --log "${DOCS_BUILD_DIR}/ie_docs.log"
--include_omz $<BOOL:${OMZ_DOCS_DIR}>
--include_wb $<BOOL:${WORKBENCH_DOCS_DIR}>
--include_pot $<BOOL:${POT_DOCS_DIR}>
--include_gst $<BOOL:${GST_DOCS_DIR}>
COMMENT "Parse doxygen log to find errors."
VERBATIM)
# added linkchecker
if(EXISTS "${LINKCHECKER_PY}")
add_custom_target(docs_check
COMMAND ${Python3_EXECUTABLE} "${LINKCHECKER_PY}" -v "${DOCS_BUILD_DIR}/html/"
COMMENT "Check links in generated documentation"
WORKING_DIRECTORY "${DOCS_BUILD_DIR}"
VERBATIM)
set_target_properties(docs_check PROPERTIES FOLDER docs)
endif()
find_program(browser NAMES xdg-open)
if(browser)
add_custom_target(ie_docs_open


@@ -1,380 +1,212 @@
# Custom Operations Guide {#openvino_docs_HOWTO_Custom_Layers_Guide}
# Custom Layers Guide {#openvino_docs_HOWTO_Custom_Layers_Guide}
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with multiple frameworks including
TensorFlow*, Caffe*, MXNet*, Kaldi* and ONNX* file format. The list of supported operations (layers) is different for
each of the supported frameworks. To see the operations supported by your framework, refer to
[Supported Framework Layers](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
The Intel® Distribution of OpenVINO™ toolkit supports neural network model layers in multiple frameworks including TensorFlow*, Caffe*, MXNet*, Kaldi* and ONNX*. The list of known layers is different for each of the supported frameworks. To see the layers supported by your framework, refer to [supported frameworks](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
Custom operations are operations that are not included in the list of known operations. If your model contains any
operation that is not in the list of known operations, the Model Optimizer is not able to generate an Intermediate
Representation (IR) for this model.
Custom layers are layers that are not included in the list of known layers. If your topology contains any layers that are not in the list of known layers, the Model Optimizer classifies them as custom.
This guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to
plug in your own implementation for existing or completely new operation.
This guide illustrates the workflow for running inference on topologies featuring custom layers, allowing you to plug in your own implementation for existing or completely new layers.
For a step-by-step example of creating and executing a custom layer, see the [Custom Layer Implementation Tutorials for Linux and Windows.](https://github.com/david-drew/OpenVINO-Custom-Layers/tree/master/2019.r2.0)
> **NOTE:** *Layer* — the legacy term for an *operation*, which came from the Caffe\* framework. It is no longer used.
> Refer to the [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../MO_DG/IR_and_opsets.md)
> for more information on the topic.
## Terms used in this guide
## Terms Used in This Guide
- *Layer* — The abstract concept of a math function that is selected for a specific purpose (relu, sigmoid, tanh, convolutional). This is one of a sequential series of building blocks within the neural network.
- *Kernel* — The implementation of a layer function, in this case, the math programmed (in C++ and Python) to perform the layer operation for target hardware (CPU or GPU).
- *Intermediate Representation (IR)* — Neural Network used only by the Inference Engine in OpenVINO abstracting the different frameworks and describing topology, layer parameters and weights.
The original format will be a supported framework such as TensorFlow, Caffe, or MXNet.
- *Intermediate Representation (IR)* — Neural Network used only by the Inference Engine in OpenVINO abstracting the
different frameworks and describing the model topology, operations parameters and weights.
- *Operation* — The abstract concept of a math function that is selected for a specific purpose. Operations supported by
OpenVINO™ are listed in the supported operation set provided in the [Available Operations Sets](../ops/opset.md).
Examples of the operations are: [ReLU](../ops/activation/ReLU_1.md), [Convolution](../ops/convolution/Convolution_1.md),
[Add](../ops/arithmetic/Add_1.md), etc.
- *Kernel* — The implementation of an operation function in the OpenVINO™ plugin, in this case, the math programmed (in
C++ and OpenCL) to perform the operation for a target hardware (CPU or GPU).
- *Inference Engine Extension* — Device-specific module implementing custom operations (a set of kernels).
## Custom Operation Support Overview
There are three steps to support inference of a model with custom operation(s):
1. Add support for a custom operation in the [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) so
the Model Optimizer can generate the IR with the operation.
2. Create an operation set and implement a custom nGraph operation in it as described in the
[Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md).
3. Implement a customer operation in one of the [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
plugins to support inference of this operation using a particular target hardware (CPU, GPU or VPU).
To see the operations that are supported by each device plugin for the Inference Engine, refer to the
[Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md).
> **NOTE:** If a device doesn't support a particular operation, an alternative to creating a new operation is to target
> an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be
> used to run an inference model on multiple devices allowing the unsupported operations on one device to "fallback" to
> run on another device (e.g., CPU) that does support those operations.
### Custom Operation Support for the Model Optimizer
The Model Optimizer model conversion pipeline is described in detail in the "Model Conversion Pipeline" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
It is recommended to read that article first for a better understanding of the following material.
The Model Optimizer provides an extension mechanism to support new operations and implement custom model transformations
to generate an optimized IR. This mechanism is described in the "Model Optimizer Extensions" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
At a minimum, two types of Model Optimizer extensions must be implemented to support a custom operation (a minimal sketch of the first one follows this list):
1. An operation class for the new operation. This class stores information about the operation, its attributes, the shape
inference function, the attributes to be saved to an IR, and some other internally used attributes. Refer to the
"Model Optimizer Operation" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
detailed instructions on how to implement it.
2. An operation attributes extractor. The extractor is responsible for parsing the framework-specific representation of the
operation and uses the corresponding operation class to update graph node attributes with the necessary attributes of the
operation. Refer to the "Operation Extractor" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
detailed instructions on how to implement it.
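For orientation, a minimal operation class might look like the sketch below. This is illustrative only: it follows the Model Optimizer extension API of this release, and the authoritative `FFT` implementation is the snippet referenced later in this guide.

```py
from mo.front.common.partial_infer.elemental import copy_shape_infer
from mo.graph.graph import Graph
from mo.ops.op import Op


class FFT(Op):
    op = 'FFT'

    def __init__(self, graph: Graph, attrs: dict):
        super().__init__(graph, {
            'type': self.op,
            'op': self.op,
            'inverse': None,            # operation attribute saved to the IR
            'in_ports_count': 1,
            'out_ports_count': 1,
            'infer': copy_shape_infer,  # output shape equals input shape
        }, attrs)

    def backend_attrs(self):
        # attributes serialized to the IR XML
        return ['inverse']
```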
> **NOTE:** In some cases you may also need to implement a graph transformation to support the operation. This topic is
> covered in the "Graph Transformation Extensions" section of
> [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
## Custom Operations Extensions for the Inference Engine
The Inference Engine provides an extension mechanism to support new operations. This mechanism is described in the
[Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
Each device plugin includes a library of optimized implementations to execute known operations, which must be extended to
execute a custom operation. The custom operation extension is implemented according to the target device:
- Custom Operation CPU Extension
- A compiled shared library (`.so`, `.dylib` or `.dll`) needed by the CPU Plugin for executing the custom operation
on a CPU. Refer to the [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more
details.
- Custom Operation GPU Extension
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the GPU, along with an
operation description file (.xml) needed by the GPU Plugin for the custom operation kernel. Refer to
[How to Implement Custom GPU Operations](../IE_DG/Extensibility_DG/GPU_Kernel.md) for more details.
- Custom Operation VPU Extension
- OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the VPU, along with an
operation description file (.xml) needed by the VPU Plugin for the custom operation kernel. Refer to
[How to Implement Custom Operations for VPU](../IE_DG/Extensibility_DG/VPU_Kernel.md) for more details.
Also, it is necessary to implement an nGraph custom operation according to
[Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) so the Inference Engine can read an IR with this
operation and correctly infer the shape and type of its output tensors.
## Enabling Magnetic Resonance Image Reconstruction Model
This chapter provides step-by-step instructions on how to enable the magnetic resonance image reconstruction model
implemented in the [repository](https://github.com/rmsouza01/Hybrid-CS-Model-MRI/) using a custom operation on CPU. The
example is prepared for a model generated from the repository with hash `2ede2f96161ce70dcdc922371fe6b6b254aafcc8`.
### Download and Convert the Model to a Frozen TensorFlow\* Model Format
The original pre-trained model is provided in the HDF5 format, which is not supported by OpenVINO directly, so it needs to
be converted to the TensorFlow\* frozen model format first.
1. Download the repository `https://github.com/rmsouza01/Hybrid-CS-Model-MRI`:<br>
```bash
git clone https://github.com/rmsouza01/Hybrid-CS-Model-MRI
cd Hybrid-CS-Model-MRI
git checkout 2ede2f96161ce70dcdc922371fe6b6b254aafcc8
```
2. Convert pre-trained `.hdf5` to a frozen `.pb` graph using the following script (tested with TensorFlow==1.15.0 and
Keras==2.2.4) which should be executed from the root of the cloned repository:<br>
```py
import keras as K
import numpy as np
import Modules.frequency_spatial_network as fsnet
import tensorflow as tf
under_rate = '20'
stats = np.load("Data/stats_fs_unet_norm_" + under_rate + ".npy")
var_sampling_mask = np.load("Data/sampling_mask_" + under_rate + "perc.npy")
model = fsnet.wnet(stats[0], stats[1], stats[2], stats[3], kshape = (5,5), kshape2=(3,3))
model_name = "Models/wnet_" + under_rate + ".hdf5"
model.load_weights(model_name)
inp = np.random.standard_normal([1, 256, 256, 2]).astype(np.float32)
np.save('inp', inp)
sess = K.backend.get_session()
sess.as_default()
graph_def = sess.graph.as_graph_def()
graph_def = tf.graph_util.convert_variables_to_constants(sess, graph_def, ['conv2d_44/BiasAdd'])
with tf.gfile.FastGFile('wnet_20.pb', 'wb') as f:
f.write(graph_def.SerializeToString())
```
- *Model Extension Generator* — Generates template source code files for each of the extensions needed by the Model Optimizer and the Inference Engine.
As a result the TensorFlow\* frozen model file "wnet_20.pb" is generated.
- *Inference Engine Extension* — Device-specific module implementing custom layers (a set of kernels).
### Convert the Frozen TensorFlow\* Model to Intermediate Representation
First, open the model in TensorBoard or another TensorFlow\* model visualization tool. The model supports a dynamic
batch dimension because the value for the batch dimension is not hardcoded in the model. The Model Optimizer needs to set
all dynamic dimensions to specific values to create the IR, therefore specify the command line parameter `-b 1` to set
the batch dimension equal to 1. The actual batch size can be changed at runtime using the Inference Engine API
described in [Using Shape Inference](../IE_DG/ShapeInference.md). Also refer to
[Converting a Model Using General Conversion Parameters](../MO_DG/prepare_model/convert_model/Converting_Model_General.md)
and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
for more details and command line parameters used for the model conversion.
## Custom Layer Overview
The [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) searches the list of known layers for each layer contained in the input model topology before building the model's internal representation, optimizing the model, and producing the Intermediate Representation files.
The [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) loads the layers from the input model IR files into the specified device plugin, which will search a list of known layer implementations for the device. If your topology contains layers that are not in the list of known layers for the device, the Inference Engine considers the layer to be unsupported and reports an error. To see the layers that are supported by each device plugin for the Inference Engine, refer to the [Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md) documentation.
<br>
> **NOTE:** If a device doesn't support a particular layer, an alternative to creating a new custom layer is to target an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be used to run an inference model on multiple devices allowing the unsupported layers on one device to "fallback" to run on another device (e.g., CPU) that does support those layers.
## Custom Layer Implementation Workflow
When implementing a custom layer for your pre-trained model in the Intel® Distribution of OpenVINO™ toolkit, you will need to add extensions to both the Model Optimizer and the Inference Engine.
## Custom Layer Extensions for the Model Optimizer
The following figure shows the basic processing steps for the Model Optimizer highlighting the two necessary custom layer extensions, the Custom Layer Extractor and the Custom Layer Operation.
![](img/MO_extensions_flow.png)
The Model Optimizer first extracts information from the input model which includes the topology of the model layers along with parameters, input and output format, etc., for each layer. The model is then optimized from the various known characteristics of the layers, interconnects, and data flow which partly comes from the layer operation providing details including the shape of the output for each layer. Finally, the optimized model is output to the model IR files needed by the Inference Engine to run the model.
The Model Optimizer starts with a library of known extractors and operations for each [supported model framework](../MO_DG/prepare_model/Supported_Frameworks_Layers.md) which must be extended to use each unknown custom layer. The custom layer extensions needed by the Model Optimizer are:
- Custom Layer Extractor
- Responsible for identifying the custom layer operation and extracting the parameters for each instance of the custom layer. The layer parameters are stored per instance and used by the layer operation before finally appearing in the output IR. Typically the input layer parameters are unchanged, which is the case covered by this tutorial.
- Custom Layer Operation
- Responsible for specifying the attributes that are supported by the custom layer and computing the output shape for each instance of the custom layer from its parameters. <br> The `--mo-op` command-line argument shown in the examples below generates a custom layer operation for the Model Optimizer.
## Custom Layer Extensions for the Inference Engine
The following figure shows the basic flow for the Inference Engine highlighting two custom layer extensions for the CPU and GPU Plugins, the Custom Layer CPU extension and the Custom Layer GPU Extension.
![](img/IE_extensions_flow.png)
Each device plugin includes a library of optimized implementations to execute known layer operations which must be extended to execute a custom layer. The custom layer extension is implemented according to the target device:
- Custom Layer CPU Extension
- A compiled shared library (.so or .dll binary) needed by the CPU Plugin for executing the custom layer on the CPU.
- Custom Layer GPU Extension
- OpenCL source code (.cl) for the custom layer kernel that will be compiled to execute on the GPU along with a layer description file (.xml) needed by the GPU Plugin for the custom layer kernel.
## Model Extension Generator
Using answers to interactive questions or a *.json* configuration file, the Model Extension Generator tool generates template source code files for each of the extensions needed by the Model Optimizer and the Inference Engine. To complete the implementation of each extension, the template functions may need to be edited to fill-in details specific to the custom layer or the actual custom layer functionality itself.
### Command-line
The Model Extension Generator is included in the Intel® Distribution of OpenVINO™ toolkit installation and is run using the command (here with the "--help" option):
```bash
./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1
python3 /opt/intel/openvino/deployment_tools/tools/extension_generator/extgen.py new --help
```
Model Optimizer produces the following error:
```bash
[ ERROR ] List of operations that cannot be converted to Inference Engine IR:
[ ERROR ] Complex (1)
[ ERROR ] lambda_2/Complex
[ ERROR ] IFFT2D (1)
[ ERROR ] lambda_2/IFFT2D
[ ERROR ] ComplexAbs (1)
[ ERROR ] lambda_2/Abs
[ ERROR ] Part of the nodes was not converted to IR. Stopped.
```
where the output will appear similar to:
```
usage: You can use any combination of the following arguments:
Arguments to configure extension generation in the interactive mode:
optional arguments:
-h, --help show this help message and exit
--mo-caffe-ext generate a Model Optimizer Caffe* extractor
--mo-mxnet-ext generate a Model Optimizer MXNet* extractor
--mo-tf-ext generate a Model Optimizer TensorFlow* extractor
--mo-op generate a Model Optimizer operation
--ie-cpu-ext generate an Inference Engine CPU extension
--ie-gpu-ext generate an Inference Engine GPU extension
--output_dir OUTPUT_DIR
set an output directory. If not specified, the current
directory is used by default.
```
The error means that the Model Optimizer doesn't know how to handle 3 types of TensorFlow\* operations: "Complex",
"IFFT2D" and "ComplexAbs". To see more details about the conversion process, run the model conversion with the
additional parameter `--log_level DEBUG`. It is worth mentioning the following lines from the detailed output:
The available command-line arguments are used to specify which extension(s) to generate templates for the Model Optimizer or Inference Engine. The generated extension files for each argument will appear starting from the top of the output directory as follows:
```bash
[ INFO ] Called "tf_native_tf_node_infer" for node "lambda_2/Complex"
[ <TIMESTAMP> ] [ DEBUG ] [ tf:228 ] Added placeholder with name 'lambda_2/lambda_3/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:228 ] Added placeholder with name 'lambda_2/lambda_4/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:241 ] update_input_in_pbs: replace input 'lambda_2/lambda_3/strided_slice' with input 'lambda_2/lambda_3/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:249 ] Replacing input '0' of the node 'lambda_2/Complex' with placeholder 'lambda_2/lambda_3/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:241 ] update_input_in_pbs: replace input 'lambda_2/lambda_4/strided_slice' with input 'lambda_2/lambda_4/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:249 ] Replacing input '1' of the node 'lambda_2/Complex' with placeholder 'lambda_2/lambda_4/strided_slice_port_0_ie_placeholder'
[ <TIMESTAMP> ] [ DEBUG ] [ tf:148 ] Inferred shape of the output tensor with index '0' of the node 'lambda_2/Complex': '[ 1 256 256]'
[ <TIMESTAMP> ] [ DEBUG ] [ infer:145 ] Outputs:
[ <TIMESTAMP> ] [ DEBUG ] [ infer:32 ] output[0]: shape = [ 1 256 256], value = <UNKNOWN>
[ <TIMESTAMP> ] [ DEBUG ] [ infer:129 ] --------------------
[ <TIMESTAMP> ] [ DEBUG ] [ infer:130 ] Partial infer for lambda_2/IFFT2D
[ <TIMESTAMP> ] [ DEBUG ] [ infer:131 ] Op: IFFT2D
[ <TIMESTAMP> ] [ DEBUG ] [ infer:132 ] Inputs:
[ <TIMESTAMP> ] [ DEBUG ] [ infer:32 ] input[0]: shape = [ 1 256 256], value = <UNKNOWN>
```
Command-line Argument | Output Directory Location |
--------------------- | ------------------------------ |
`--mo-caffe-ext` | user_mo_extensions/front/caffe |
`--mo-mxnet-ext` | user_mo_extensions/front/mxnet |
`--mo-tf-ext` | user_mo_extensions/front/tf |
`--mo-op` | user_mo_extensions/ops |
`--ie-cpu-ext` | user_ie_extensions/cpu |
`--ie-gpu-ext` | user_ie_extensions/gpu |
This is part of the log of the partial inference phase of the model conversion. See the "Partial Inference" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
more information about this phase. The Model Optimizer inferred the output shape for the unknown operation of type
"Complex" using a "fallback" to TensorFlow\*. However, this is not enough to generate the IR because the Model Optimizer
doesn't know which attributes of the operation should be saved to the IR. So it is necessary to implement Model Optimizer
extensions to support these operations.
### Extension Workflow
Before going into extension development, it is necessary to understand what these unsupported operations do according
to the TensorFlow\* framework specification.
The workflow for each generated extension follows the same basic steps:
* "Complex" - returns a tensor of complex type constructed from two real input tensors specifying real and imaginary
part of a complex number.
* "IFFT2D" - returns a tensor with inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of
an input.
* "ComplexAbs" - returns a tensor with absolute values of input tensor with complex numbers.
![](img/MEG_generic_flow.png)
The part of the model with all three unsupported operations is depicted below:
**Step 1: Generate:** Use the Model Extension Generator to generate the Custom Layer Template Files.
![Unsupported sub-graph](img/unsupported_subgraph.png)
**Step 2: Edit:** Edit the Custom Layer Template Files as necessary to create the specialized Custom Layer Extension Source Code.
This model uses complex numbers during inference, but the Inference Engine does not support tensors of this data type, so
it is necessary to avoid using such tensors in the model. Fortunately, the complex tensor appears only as the result of the
"Complex" operation, is used as input to the "IFFT2D" operation, and is then passed to "ComplexAbs", which produces a
real-valued tensor as output. So there are just 3 operations consuming or producing complex tensors in the
model.
**Step 3: Specify:** Specify the custom layer extension locations to be used by the Model Optimizer or Inference Engine.
Let's design an OpenVINO operation "FFT" which gets a single real-valued tensor describing the complex input and
produces a single real-valued tensor describing the complex output. This way, the fact that the model uses complex
numbers is hidden inside the "FFT" operation implementation. The operation gets a tensor of shape `[N, H, W, 2]` and
produces an output tensor of the same shape, where the innermost dimension contains pairs of real numbers describing
a complex number (its real and imaginary parts). As we will see further, this operation allows us to support the
model. The implementation of the Model Optimizer operation should be saved to the `mo_extensions/ops/FFT.py` file:
## Caffe\* Models with Custom Layers <a name="caffe-models-with-custom-layers"></a>
@snippet FFT.py fft:operation
If your Caffe\* model has custom layers:
The attribute `inverse` is a flag specifying the type of FFT to apply: forward or inverse.
**Register the custom layers as extensions to the Model Optimizer**. For instructions, see [Extending Model Optimizer with New Primitives](../MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md). When your custom layers are registered as extensions, the Model Optimizer generates a valid and optimized Intermediate Representation. You will need a bit of Python\* code that lets the Model Optimizer;
See the "Model Optimizer Operation" section on the
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for the
detailed instruction on how to implement the operation.
- Generate a valid Intermediate Representation according to the rules you specified.
- Be independent from the availability of Caffe on your computer.
If your model contains Custom Layers, it is important to understand the internal workflow of the Model Optimizer. Consider the following example.
Now it is necessary to implement an extractor for the "IFFT2D" operation according to the
"Operation Extractor" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md). The
following snippet provides two extractors, one for "IFFT2D" and another for "FFT2D", though only one of them is used
in this example. The implementation should be saved to the file `mo_extensions/front/tf/FFT_ext.py`.
**Example**:
@snippet FFT_ext.py fft_ext:extractor
The network has:
> **NOTE:** The graph is in an inconsistent state after extracting node attributes because, according to the semantics of
> the original "IFFT2D" operation, it should have an input consuming a tensor of complex numbers, while the extractor
> instantiated the "FFT" operation, which expects a real tensor with a specific layout. The inconsistency will be resolved
> during the front phase transformations discussed below.
* One input layer (#1)
* One output Layer (#5)
* Three internal layers (#2, 3, 4)
The output shape of the "AddV2" operation in the picture above is `[N, H, W, 2]`, where the innermost dimension
contains pairs of real numbers describing a complex number (its real and imaginary parts). The following "StridedSlice"
operations split the input tensor into 2 parts to get a tensor of real parts and a tensor of imaginary parts, which are then
consumed by the "Complex" operation to produce a tensor of complex numbers. These "StridedSlice" and "Complex"
operations can be removed so that the "FFT" operation gets a real-valued tensor encoding complex numbers. To achieve this,
we implement a front phase transformation which searches for a pattern of two "StridedSlice" operations with specific
attributes feeding a "Complex" operation and removes it from the graph. Refer to the
"Pattern-Defined Front Phase Transformations" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for more
information on how this type of transformation works. The code snippet should be saved to the file
`mo_extensions/front/tf/Complex.py`.
The custom and standard layer types are:
@snippet Complex.py complex:transformation
* Layers #2 and #5 are implemented as Model Optimizer extensions.
* Layers #1 and #4 are supported in Model Optimizer out-of-the box.
* Layer #3 is neither in the list of supported layers nor in extensions, but is specified in CustomLayersMapping.xml.
> **NOTE:** The graph is in an inconsistent state because the "ComplexAbs" operation consumes a complex value tensor while
> "FFT" produces a real value tensor.
> **NOTE**: If any of the layers are not in one of three categories described above, the Model Optimizer fails with an appropriate message and a link to the corresponding question in [Model Optimizer FAQ](../MO_DG/prepare_model/Model_Optimizer_FAQ.md).
Now let's implement a transformation which replaces the "ComplexAbs" operation with a sub-graph of primitive operations
calculating the result using the following formula: \f$module(z) = \sqrt{real(z) \cdot real(z) + imag(z) \cdot imag(z)}\f$.
The original "IFFT2D" operation produces a tensor of complex values, but the "FFT" operation produces a real-valued tensor
with the same format and shape as its input. So the input shape for "ComplexAbs" will be `[N, H, W, 2]`,
with the innermost dimension containing tuples of the real and imaginary parts of complex numbers. To calculate
absolute values for the complex tensor, we do the following (see the NumPy sketch after the list):
1. Raise all elements to the power of 2.
2. Calculate a reduced sum over the innermost dimension.
3. Calculate a square root.
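A minimal NumPy sketch of these three steps, for illustration only (the authoritative version is the `ComplexAbs.py` transformation referenced below):

```py
import numpy as np

def complex_abs(t: np.ndarray) -> np.ndarray:
    """t has shape [N, H, W, 2]; the last axis holds (real, imag) pairs."""
    squared = np.power(t, 2)           # step 1: raise all elements to the power of 2
    summed = np.sum(squared, axis=-1)  # step 2: reduced sum over the innermost dimension
    return np.sqrt(summed)             # step 3: square root
```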
The general process is as shown:
The implementation, provided below, should be saved to the file `mo_extensions/front/tf/ComplexAbs.py`:
![Example custom layer network](img/mo_caffe_priorities.png)
<br>
@snippet ComplexAbs.py complex_abs:transformation
**Step 1:** The example model is fed to the Model Optimizer that **loads the model** with the special parser built on top of the `caffe.proto` file. In case of failure, the Model Optimizer asks you to prepare the parser that can read the model. For more information, refer to the Model Optimizer, <a href="MO_FAQ.html#FAQ1">FAQ #1</a>.
Now it is possible to convert the model using the following command line:
```bash
./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/
```
**Step 2:** The Model Optimizer **extracts the attributes of all layers** by going through the list of layers and attempting to find the appropriate extractor. In order of priority, the Model Optimizer checks if the layer is:
* A. Registered as a Model Optimizer extension
* B. Registered as a standard Model Optimizer layer
When the Model Optimizer finds a satisfying condition from the list above, it extracts the attributes according to the following rules:
* For A. - takes only the parameters specified in the extension
* For B. - takes only the parameters specified in the standard extractor
<br>
The sub-graph corresponding to the originally unsupported one is depicted in the image below:
**Step 3:** The Model Optimizer **calculates the output shape of all layers**. The logic is the same as it is for the priorities. **Important:** the Model Optimizer always takes the first available option.
![Converted sub-graph](img/converted_subgraph.png)
**Step 4:** The Model Optimizer **optimizes the original model and produces the two Intermediate Representation (IR) files in .xml and .bin**.
<br>
> **NOTE:** The Model Optimizer converted the model from NHWC to NCHW layout, which is why the dimension with
> the value 2 moved to another position.
## TensorFlow\* Models with Custom Layers <a name="Tensorflow-models-with-custom-layers"></a>
### Inference Engine Extension Implementation
Now it is necessary to implement the CPU plugin extension for the "FFT" operation introduced previously. The code
below is based on the template extension described in the
[Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
You have two options for TensorFlow\* models with custom layers:
<br>
#### CMake Build File
The first step is to create a CMake configuration file which builds the extension. The content of the "CMakeLists.txt"
file is the following:
* **Register those layers as extensions to the Model Optimizer.** In this case, the Model Optimizer generates a valid and optimized Intermediate Representation.
* **If you have sub-graphs that should not be expressed with the analogous sub-graph in the Intermediate Representation, but another sub-graph should appear in the model, the Model Optimizer provides such an option.** This feature is helpful for many TensorFlow models. To read more, see [Sub-graph Replacement in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
## MXNet\* Models with Custom Layers <a name="mxnet-models-with-custom-layers"></a>
@snippet ../template_extension/CMakeLists.txt cmake:extension
There are two options to convert your MXNet* model that contains custom layers:
The CPU FFT kernel implementation uses OpenCV to perform the FFT, which is why the extension library is linked with
"opencv_core", which comes with OpenVINO.
1. Register the custom layers as extensions to the Model Optimizer. For instructions, see [Extending MXNet Model Optimizer with New Primitives](../MO_DG/prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md). When your custom layers are registered as extensions, the Model Optimizer generates a valid and optimized Intermediate Representation. You can create Model Optimizer extensions for both MXNet layers with op `Custom` and layers which are not standard MXNet layers.
#### Custom nGraph Operation "FFT" Implementation
The next step is to create the nGraph operation FFT. The header file "fft_op.hpp" has the following content:
2. If you have sub-graphs that should not be expressed with the analogous sub-graph in the Intermediate Representation, but another sub-graph should appear in the model, the Model Optimizer provides such an option. In MXNet the function is actively used for ssd models provides an opportunity to for the necessary subgraph sequences and replace them. To read more, see [Sub-graph Replacement in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
@snippet ../template_extension/fft_op.hpp fft_op:header
## Kaldi\* Models with Custom Layers <a name="Kaldi-models-with-custom-layers"></a>
For information on converting your Kaldi* model containing custom layers see [Converting a Kaldi Model in the Model Optimizer Developer Guide](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md).
The operation has just one boolean attribute, `inverse`. The implementation of the necessary nGraph operation functions is
in the "fft_op.cpp" file with the following content:
## ONNX\* Models with Custom Layers <a name="ONNX-models-with-custom-layers"></a>
For information on converting your ONNX* model containing custom layers see [Converting an ONNX Model in the Model Optimizer Developer Guide](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md).
@snippet ../template_extension/fft_op.cpp fft_op:implementation
Refer to the [Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) for more details.
#### CPU FFT Kernel Implementation
The operation implementation for the CPU plugin uses OpenCV to perform the FFT. The header file "fft_kernel.hpp" has the
following content:
@snippet ../template_extension/fft_kernel.hpp fft_kernel:header
The "fft_kernel.cpp" with the implementation of the CPU has the following content:
@snippet ../template_extension/fft_kernel.cpp fft_kernel:implementation
Refer to the [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more details.
#### Extension Library Implementation
The last step is to create an extension library from "extension.cpp" and "extension.hpp" which includes the FFT
operation for the CPU plugin. The code of the library is described in [Extension Library](../IE_DG/Extensibility_DG/Extension.md).
### Building and Running the Custom Extension
In order to build the extension run the following:<br>
```bash
mkdir build && cd build
source /opt/intel/openvino_2021/bin/setupvars.sh
cmake .. -DCMAKE_BUILD_TYPE=Release
make --jobs=$(nproc)
```
The result of this command is a compiled shared library (`.so`, `.dylib` or `.dll`). It should be loaded in the
application using the `Core` class instance method `AddExtension`, like this:
`core.AddExtension(make_so_pointer<IExtension>(compiled_library_file_name), "CPU");`.
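A minimal sketch of such loading code, assuming the library name produced by the build above and the IR generated earlier (illustrative only, not one of the guide's snippets):

```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    Core core;
    // The library file name is illustrative; use the one your build produced.
    core.AddExtension(make_so_pointer<IExtension>("libtemplate_extension.so"), "CPU");

    CNNNetwork network = core.ReadNetwork("wnet_20.xml");
    ExecutableNetwork executable = core.LoadNetwork(network, "CPU");
    return 0;
}
```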
To test that the extension is implemented correctly, we can run "mri_reconstruction_demo.py" with the following content:
@snippet mri_reconstruction_demo.py mri_demo:demo
The script can be executed using the following command line:
```bash
python3 mri_reconstruction_demo.py \
-m <PATH_TO_IR>/wnet_20.xml \
-i <PATH_TO_SAMPLE_MRI_IMAGE>.npy \
-p <Hybrid-CS-Model-MRI_repo>/Data/sampling_mask_20perc.npy \
-l <PATH_TO_BUILD_DIR>/libtemplate_extension.so \
-d CPU
```
## Step-by-Step Custom Layers Tutorial
For a step-by-step walk-through creating and executing a custom layer, see [Custom Layer Implementation Tutorial for Linux and Windows.](https://github.com/david-drew/OpenVINO-Custom-Layers/tree/master/2019.r2.0)
## Additional Resources
- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
- [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
- [Inference Engine Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
## Converting Models:
- [Convert Your Caffe* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md)
- [Convert Your Kaldi* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
- [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
- [Convert Your MXNet* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md)
- [Convert Your ONNX* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md)

View File

@@ -0,0 +1,83 @@
# Regression tests howto {#openvino_docs_HOWTO_add_regression_test_vpu}
## Purpose
This document contains instructions for correctly modifying a set of regression tests.
## Common
Regression tests for the Myriad and HDDL plugins are located at:
`inference-engine/tests/functional/vpu/regression_tests/`
The tests are divided into the following groups:
* Classification
* Detection
* Raw-results
* Compilation
* VPU hetero
The testing framework is [Google Test](https://github.com/google/googletest/).
Each group contains [parameterized](https://github.com/google/googletest/blob/master/googletest/docs/advanced.md) tests. The main idea is that to add a new test, you only need to add a new parameter, except for scenarios that differ from the generalized case.
## Classification and Detection tests
These groups contain two cases:
* For the generalized scenario (`VpuNoClassificationRegression`, `VpuNoDetectionRegression`)
* For specific scenarios (`VpuNoClassificationRegressionSpecific`, `VpuNoDetectionRegressionSpecific`)
### Generalized scenario
If you want to test a new parameter (batch, precision, model, etc.), you need to edit the existing initialization of parameterized tests or create a new one.
Example of initialization of parameterized tests:
``` c++
INSTANTIATE_TEST_CASE_P(
VPURegTestWithResources_nightly,
VpuNoClassificationRegression,
Combine(ValuesIn(VpuTestParamsContainer::testingPlugin()),
Values(Precision::FP16),
Values(1), // batches
Values(true), //IsHwAdaptiveMode
Values(false), //DoReshape
Values(3, 5, 7), //Resources
Values(false), //IsIgnoreStatistic
Values(ClassificationSrcParam{ModelName::GoogleNetV1, SourceImages::kCat3, 0.01, Regression::EMean::eValues})),
VpuNoClassificationRegression::getTestCaseName);
```
### Specific scenario
If you need a test to perform some actions that are not provided in the generalized scenario, then add a specific test case. As with the generalized scenario, you can change parameters for these tests.
Example of specific test case:
``` c++
TEST_P(VpuNoClassificationRegressionSpecific, onAlexNetWithNetworkConfig) {
DISABLE_ON_WINDOWS_IF(HDDL_PLUGIN);
DISABLE_IF(do_reshape_);
if (!hw_adaptive_mode_) {
config_[VPU_CONFIG_KEY(NETWORK_CONFIG)] = "data=data,scale=1";
}
assertThat().classificationResultsForInferRequestAPI()
.on(SourceImages::kDog2)
.withInputPrecision(in_precision_)
.times(batch_)
.withBatch(batch_)
.onModel(ModelName::AlexNet)
.setMean(Regression::EMean::eImage)
.onFP16()
.withTopK(1)
.withPluginConfig(config_)
.equalToReferenceWithDelta(0.04);
}
```
## Raw-results tests
There is no generalized scenario, and the recommendations are the same as for the specific test cases of the Classification/Detection groups.
## Compilation tests
The tests are in the `vpu_classification_regression.cpp` file and contain only one scenario, `VpuNoRegressionWithCompilation`. To add a new test, just update the parameters as in the generalized scenario of the Classification/Detection test groups.

View File

@@ -0,0 +1,94 @@
# Fuzzing howto {#openvino_docs_HOWTO_fuzzing_HOWTO}
## Intended Audience
This document is for a developer who wants to contribute fuzz tests.
## Purpose
This document walks you through creating your first fuzzer, running it and evaluating its quality.
## Prerequisites
- Linux OS or Mac OS.
- [American Fuzzy Lop](http://lcamtuf.coredump.cx/afl/) if building with GCC.
## Steps
1. Create a fuzz test in the existing project at `./tests/fuzz`. A fuzz test must
follow the `<test name>-fuzzer.cc` naming scheme and implement a
`LLVMFuzzerTestOneInput` entry point.
``` bash
cat << EOF > ./tests/fuzz/test_name-fuzzer.cc
#include <stdint.h>
#include <cstdlib>
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
// put your fuzzing code here and use data+size as input.
return 0; // always return 0
}
EOF
```
2. Implement test logic under `LLVMFuzzerTestOneInput`.
See example fuzz test at `tests/fuzz/read_network-fuzzer.cc`.
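For reference, a fuzz test over network reading might look roughly like the sketch below. It is loosely modeled on `read_network-fuzzer.cc`, not a copy of it, and the exact `ReadNetwork` usage there may differ:
```cpp
#include <stdint.h>
#include <string>
#include <inference_engine.hpp>

// Feed fuzzed bytes to the IR reader; only crashes and hangs count as findings.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    try {
        InferenceEngine::Core ie;
        std::string model(reinterpret_cast<const char*>(data), size);
        ie.ReadNetwork(model, InferenceEngine::Blob::CPtr());  // empty weights blob
    } catch (const std::exception&) {
        // Malformed inputs are expected to throw; swallow them.
    }
    return 0;  // always return 0
}
```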
3. Build fuzz tests with `-DENABLE_FUZZING=ON` flag for cmake.
``` bash
mkdir -p build && \
(cd build && \
CXX=afl-g++ CC=afl-gcc cmake -DCMAKE_BUILD_TYPE=Debug -DENABLE_FUZZING=ON -DENABLE_TESTS=ON .. && \
make fuzz --jobs=$(getconf _NPROCESSORS_ONLN))
```
4. Prepare sample inputs for your fuzz test to teach the fuzzing engine about the input
structure
``` bash
(cd bin/intel64/Debug && \
mkdir test_name-corpus && \
echo sample input > test_name-corpus/in1.txt)
```
5. Evaluate fuzz test with `afl-fuzz` fuzzing engine
Run fuzz test:
``` bash
(cd bin/intel64/Debug && \
afl-fuzz -i test_name-corpus -o test_name-out -- ./test_name-fuzzer @@)
```
While the fuzz test is running, it prints out statistics. Besides just crashes (`uniq
crashes`) and hangs (`uniq hangs`), you should care about fuzz test quality:
- A fuzz test should be fast: the speed of execution (`exec speed`) should be at least
100 exec/s. A speed of less than 20 exec/s is not acceptable.
- A fuzz test should be able to explore new code paths (`map coverage` and
`findings in depth`). Confirm these are increasing while the fuzz test is running.
6. Reproduce fuzz test findings
All issues found by the fuzz test are stored as files in the output folder specified
earlier via the `-o` afl-fuzz option. To reproduce an issue, run the fuzz test executable
with an issue file as an argument, as shown below.
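A sketch of such a run; the crash file name here is hypothetical, as afl-fuzz assigns its own names under `crashes/`:
``` bash
(cd bin/intel64/Debug && \
 ./test_name-fuzzer test_name-out/crashes/id:000000,sig:06,src:000000,op:havoc,rep:2)
```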
## Summary
We have created a simple fuzz test, run it, and assessed its results.
## Extension
Try running parallel fuzzing with the help of
[afl-utils](https://gitlab.com/rc0r/afl-utils).
## Tips or FAQs
GCC 7 in Ubuntu 18.04 LTS has a
[defect](https://bugs.launchpad.net/ubuntu/+source/afl/+bug/1774816). Upgrade
GCC 7 for AFL to work. GCC version `Ubuntu 7.3.0-27ubuntu1~18.04` works OK.

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2f362a39ae6c2af080e4f055b6fdba4954f918f85731545d1df3d687d9213d5
size 421056

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb5c700d003936779455353bfa4ed9432410c0975c46e2dfd30c6a1abccd1727
size 23320

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99d6b5146be85fa408dc5432883c3e2745cffe890133854a97dcf22f5c5962d4
size 47564

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7c8ab4f15874d235968471bcf876c89c795d601e69891208107b8b72aa58eb1
size 70014

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a4de6e502cae7542f1f311bcdbea6bb145f960f0d27d86a03160d1a60133778
size 301310

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d5ccf51fe1babb93d96d042494695a6a6e055d1f8ebf7eef5083d54d8987a23
size 58789

View File

@@ -1,57 +0,0 @@
"""
Copyright (C) 2018-2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
#! [complex:transformation]
import logging as log
import numpy as np
from mo.front.common.replacement import FrontReplacementSubgraph
from mo.graph.graph import Graph
class Complex(FrontReplacementSubgraph):
enabled = True
def pattern(self):
return dict(
nodes=[
('strided_slice_real', dict(op='StridedSlice')),
('strided_slice_imag', dict(op='StridedSlice')),
('complex', dict(op='Complex')),
],
edges=[
('strided_slice_real', 'complex', {'in': 0}),
('strided_slice_imag', 'complex', {'in': 1}),
])
@staticmethod
def replace_sub_graph(graph: Graph, match: dict):
strided_slice_real = match['strided_slice_real']
strided_slice_imag = match['strided_slice_imag']
complex_node = match['complex']
# make sure that both strided slice operations get the same data as input
assert strided_slice_real.in_port(0).get_source() == strided_slice_imag.in_port(0).get_source()
# identify the output port of the operation producing data for strided slice nodes
input_node_output_port = strided_slice_real.in_port(0).get_source()
input_node_output_port.disconnect()
# change the connection so now all consumers of "complex_node" get data from input node of strided slice nodes
complex_node.out_port(0).get_connection().set_source(input_node_output_port)
#! [complex:transformation]

View File

@@ -1,40 +0,0 @@
"""
Copyright (C) 2018-2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
#! [complex_abs:transformation]
import numpy as np
from extensions.ops.elementwise import Pow
from extensions.ops.ReduceOps import ReduceSum
from mo.front.common.replacement import FrontReplacementOp
from mo.graph.graph import Graph, Node
from mo.ops.const import Const
class ComplexAbs(FrontReplacementOp):
op = "ComplexAbs"
enabled = True
def replace_op(self, graph: Graph, node: Node):
pow_2 = Const(graph, {'value': np.float32(2.0)}).create_node()
reduce_axis = Const(graph, {'value': np.int32(-1)}).create_node()
pow_0_5 = Const(graph, {'value': np.float32(0.5)}).create_node()
sq = Pow(graph, dict(name=node.in_node(0).name + '/sq', power=2.0)).create_node([node.in_node(0), pow_2])
sum = ReduceSum(graph, dict(name=sq.name + '/sum')).create_node([sq, reduce_axis])
sqrt = Pow(graph, dict(name=sum.name + '/sqrt', power=0.5)).create_node([sum, pow_0_5])
return [sqrt.id]
#! [complex_abs:transformation]

View File

@@ -1,47 +0,0 @@
"""
Copyright (C) 2018-2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# ! [fft_ext:extractor]
from ...ops.FFT import FFT
from mo.front.extractor import FrontExtractorOp
from mo.utils.error import Error
class FFT2DFrontExtractor(FrontExtractorOp):
op = 'FFT2D'
enabled = True
@classmethod
def extract(cls, node):
attrs = {
'inverse': 0
}
FFT.update_node_stat(node, attrs)
return cls.enabled
class IFFT2DFrontExtractor(FrontExtractorOp):
op = 'IFFT2D'
enabled = True
@classmethod
def extract(cls, node):
attrs = {
'inverse': 1
}
FFT.update_node_stat(node, attrs)
return cls.enabled
# ! [fft_ext:extractor]

View File

@@ -1,40 +0,0 @@
"""
Copyright (C) 2018-2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
#! [fft:operation]
from mo.front.common.partial_infer.elemental import copy_shape_infer
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
class FFT(Op):
op = 'FFT'
enabled = False
def __init__(self, graph: Graph, attrs: dict):
super().__init__(graph, {
'type': self.op,
'op': self.op,
'version': 'custom_opset',
'inverse': None,
'in_ports_count': 1,
'out_ports_count': 1,
'infer': copy_shape_infer
}, attrs)
def backend_attrs(self):
return ['inverse']
#! [fft:operation]

View File

@@ -1,119 +0,0 @@
"""
Copyright (C) 2018-2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
#! [mri_demo:demo]
import numpy as np
import cv2 as cv
import argparse
import time
from openvino.inference_engine import IECore
def kspace_to_image(kspace):
assert(len(kspace.shape) == 3 and kspace.shape[-1] == 2)
fft = cv.idft(kspace, flags=cv.DFT_SCALE)
img = cv.magnitude(fft[:,:,0], fft[:,:,1])
return cv.normalize(img, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='MRI reconstruction demo for network from https://github.com/rmsouza01/Hybrid-CS-Model-MRI (https://arxiv.org/abs/1810.12473)')
parser.add_argument('-i', '--input', dest='input', help='Path to input .npy file with MRI scan data.')
parser.add_argument('-p', '--pattern', dest='pattern', help='Path to sampling mask in .npy format.')
parser.add_argument('-m', '--model', dest='model', help='Path to .xml file of OpenVINO IR.')
parser.add_argument('-l', '--cpu_extension', dest='cpu_extension', help='Path to extensions library with FFT implementation.')
parser.add_argument('-d', '--device', dest='device', default='CPU',
help='Optional. Specify the target device to infer on; CPU, '
'GPU, HDDL or MYRIAD is acceptable. For non-CPU targets, '
'HETERO plugin is used with CPU fallbacks to FFT implementation. '
'Default value is CPU')
args = parser.parse_args()
xml_path = args.model
assert(xml_path.endswith('.xml'))
bin_path = xml_path[:xml_path.rfind('.xml')] + '.bin'
ie = IECore()
ie.add_extension(args.cpu_extension, "CPU")
net = ie.read_network(xml_path, bin_path)
device = 'CPU' if args.device == 'CPU' else ('HETERO:' + args.device + ',CPU')
exec_net = ie.load_network(net, device)
# Hybrid-CS-Model-MRI/Data/stats_fs_unet_norm_20.npy
stats = np.array([2.20295299e-01, 1.11048916e+03, 4.16997984e+00, 4.71741395e+00], dtype=np.float32)
# Hybrid-CS-Model-MRI/Data/sampling_mask_20perc.npy
var_sampling_mask = np.load(args.pattern) # TODO: can we generate it in runtime?
print('Sampling ratio:', 1.0 - var_sampling_mask.sum() / var_sampling_mask.size)
data = np.load(args.input)
num_slices, height, width = data.shape[0], data.shape[1], data.shape[2]
pred = np.zeros((num_slices, height, width), dtype=np.uint8)
data /= np.sqrt(height * width)
print('Compute...')
start = time.time()
for slice_id, kspace in enumerate(data):
kspace = kspace.copy()
# Apply sampling
kspace[var_sampling_mask] = 0
kspace = (kspace - stats[0]) / stats[1]
# Forward through network
input = np.expand_dims(kspace.transpose(2, 0, 1), axis=0)
outputs = exec_net.infer(inputs={'input_1': input})
output = next(iter(outputs.values()))
output = output.reshape(height, width)
# Save predictions
pred[slice_id] = cv.normalize(output, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)
print('Elapsed time: %.1f seconds' % (time.time() - start))
WIN_NAME = 'MRI reconstruction with OpenVINO'
slice_id = 0
def callback(pos):
global slice_id
slice_id = pos
kspace = data[slice_id]
img = kspace_to_image(kspace)
kspace[var_sampling_mask] = 0
masked = kspace_to_image(kspace)
rec = pred[slice_id]
# Add a header
border_size = 20
render = cv.hconcat((img, masked, rec))
render = cv.copyMakeBorder(render, border_size, 0, 0, 0, cv.BORDER_CONSTANT, value=255)
cv.putText(render, 'Original', (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
cv.putText(render, 'Sampled (PSNR %.1f)' % cv.PSNR(img, masked), (width, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
cv.putText(render, 'Reconstructed (PSNR %.1f)' % cv.PSNR(img, rec), (width*2, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
cv.imshow(WIN_NAME, render)
cv.waitKey(1)
cv.namedWindow(WIN_NAME, cv.WINDOW_NORMAL)
print(num_slices)
cv.createTrackbar('Slice', WIN_NAME, num_slices // 2, num_slices - 1, callback)
callback(num_slices // 2) # Trigger initial visualization
cv.waitKey()
#! [mri_demo:demo]

View File

@@ -2,42 +2,6 @@
The sections below contain a detailed list of changes made to the Inference Engine API in recent releases.
## 2021.3
### New API
* InferenceEngine::InferRequest::Cancel to cancel inference request execution
* InferenceEngine::Layout::HWC to support HWC layout for input or output blobs
* InferenceEngine::Precision::F64 data precision for f64 data type
* InferenceEngine::CNNNetwork::getOVNameForTensor to map frameworks tensor names to OpenVINO internal tensor names
### Deprecated API
* InferenceEngine::IVariableState interface is deprecated, use InferenceEngine::VariableState wrapper
## 2021.2
### New API
**State API**
* InferenceEngine::InferRequest::QueryState query state value of network on current infer request
* InferenceEngine::IVariableState class instead of IMemoryState (rename)
* InferenceEngine::IVariableState::GetState instead of IMemoryState::GetLastState (rename)
**BatchedBlob** - represents an InferenceEngine::BatchedBlob containing other blobs - one per batch.
**Transformations API** - added a new header `ie_transformations.hpp` which contains transformations for the InferenceEngine::CNNNetwork object. Such transformations can be called prior to loading the network for compilation on a particular device, as sketched below:
* InferenceEngine::LowLatency
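A hedged sketch of applying the transformation before loading; the model path is a placeholder:
```cpp
#include <ie_core.hpp>
#include <ie_transformations.hpp>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // hypothetical IR path
    InferenceEngine::LowLatency(network);          // apply before compilation
    auto execNetwork = core.LoadNetwork(network, "CPU");
    return 0;
}
```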
### Deprecated API
**State API**
* InferenceEngine::ExecutableNetwork::QueryState - use InferenceEngine::InferRequest::QueryState
* InferenceEngine::IVariableState::GetLastState - use InferenceEngine::IVariableState::GetState
## 2021.1
### Deprecated API
@@ -169,7 +133,7 @@ The sections below contain detailed list of changes made to the Inference Engine
### Deprecated API
**MYRIAD Plugin API:**
* VPU_CONFIG_KEY(IGNORE_IR_STATISTIC)

View File

@@ -2,8 +2,7 @@
## Disclaimer
Inference Engine with the bfloat16 inference implemented on CPU must support the native `avx512_bf16` instruction and therefore the bfloat16 data format.
It is possible to use bfloat16 inference in simulation mode on platforms with Intel® Advanced Vector Extensions 512 (Intel® AVX-512), but it leads to significant performance degradation in comparison with FP32 or native `avx512_bf16` instruction usage.
## Introduction
@@ -13,7 +12,7 @@ Bfloat16 computations (referred to as BF16) is the Brain Floating-Point format w
Preserving the exponent bits keeps BF16 in the same range as FP32 (~1e-38 to ~3e38). This simplifies conversion between the two data types: you just need to skip or flush to zero the 16 low bits.
Truncated mantissa leads to occasionally lower precision, but according to [investigations](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus), neural networks are more sensitive to the size of the exponent than the mantissa size. Also, in lots of models, precision is needed close to zero but not so much at the maximum range.
Another useful feature of BF16 is the possibility to encode INT8 in BF16 without loss of accuracy, because the INT8 range completely fits in the BF16 mantissa field. It reduces data flow in conversion from INT8 input image data to BF16 directly, without intermediate representation in FP32, or in combination of [INT8 inference](Int8Inference.md) and BF16 layers.
See the [Intel's site](https://software.intel.com/sites/default/files/managed/40/8b/bf16-hardware-numerics-definition-white-paper.pdf) for more bfloat16 format details.
@@ -21,9 +20,19 @@ There are two ways to check if CPU device can support bfloat16 computations for
1. Query the instruction set via system `lscpu | grep avx512_bf16` or `cat /proc/cpuinfo | grep avx512_bf16`.
2. Use [Query API](InferenceEngine_QueryAPI.md) with `METRIC_KEY(OPTIMIZATION_CAPABILITIES)`, which should return `BF16` in the list of CPU optimization options:
```cpp
InferenceEngine::Core core;
auto cpuOptimizationCapabilities = core.GetMetric("CPU", METRIC_KEY(OPTIMIZATION_CAPABILITIES)).as<std::vector<std::string>>();
```
Current Inference Engine solution for bfloat16 inference uses Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of the following layers in BF16 computation mode:
* Convolution
* FullyConnected
* InnerProduct
* LRN
* Pooling
This means that BF16 inference can only be performed with the CPU plugin on the layers listed above. All other layers are executed in FP32.
## Lowering Inference Precision
@@ -37,37 +46,26 @@ Bfloat16 data usage provides the following benefits that increase performance:
4. Reduced size of data in memory, as a result, larger models fit in the same memory bounds.
5. Reduced amount of data that must be transferred, as a result, reduced data transition time.
For default optimization on CPU, source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In this case, `KEY_ENFORCE_BF16` is set to `YES`.
The code below demonstrates how to check if the key is set:
```cpp
InferenceEngine::Core core;
auto exeNetwork = core.LoadNetwork(network, "CPU");
auto enforceBF16 = exeNetwork.GetConfig(PluginConfigParams::KEY_ENFORCE_BF16).as<std::string>();
```
To disable BF16 internal transformations in the C++ API, set `KEY_ENFORCE_BF16` to `NO`. In this case, the model infers as is, without modifications, with the precisions that were set on each layer edge:
```cpp
InferenceEngine::Core core;
core.SetConfig({ { CONFIG_KEY(ENFORCE_BF16), CONFIG_VALUE(NO) } }, "CPU");
```
To disable BF16 in the C API:
```
ie_config_t config = { "ENFORCE_BF16", "NO", NULL};
ie_core_load_network(core, network, device_name, &config, &exe_network);
```
Low-Precision 8-bit integer models cannot be converted to BF16, even if bfloat16 optimization is set by default.
## Bfloat16 Simulation Mode
Bfloat16 simulation mode is available on CPU platforms with Intel® AVX-512 that do not support the native `avx512_bf16` instruction. The simulation mode does not guarantee adequate performance.
To enable the Bfloat16 simulator:
* In the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md), add the `-enforcebf16=true` option
* In the C++ API, set `KEY_ENFORCE_BF16` to `YES`
* In the C API:
```
ie_config_t config = { "ENFORCE_BF16", "YES", NULL};
ie_core_load_network(core, network, device_name, &config, &exe_network);
```
An exception with the message `Platform doesn't support BF16 format` is raised if `KEY_ENFORCE_BF16` is set to `YES` on a CPU without native BF16 support or BF16 simulation mode.
## Performance Counters
Information about layer precision is stored in the performance counters that are
@@ -89,4 +87,4 @@ prob EXECUTED layerType: SoftMax realT
The `execType` column of the table includes inference primitives with specific suffixes.
[bf16_format]: img/bf16_format.png

View File

@@ -1,122 +1,88 @@
# Inference Engine Developer Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide}
> **NOTE:** [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
## Introduction to the OpenVINO™ Toolkit
The OpenVINO™ toolkit is a comprehensive toolkit that you can use to develop and deploy vision-oriented solutions on
Intel® platforms. Vision-oriented means the solutions use images or videos to perform specific tasks.
A few of the solutions use cases include autonomous navigation, digital surveillance cameras, robotics,
and mixed-reality headsets.
The OpenVINO™ toolkit:
* Enables CNN-based deep learning inference on the edge
* Supports heterogeneous execution across an Intel&reg; CPU, Intel&reg; Integrated Graphics, Intel&reg; Movidius&trade; Neural Compute Stick and Intel&reg; Neural Compute Stick 2
* Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
* Includes optimized calls for computer vision standards including OpenCV\*, OpenCL&trade;, and OpenVX\*
The OpenVINO™ toolkit includes the following components:
* Intel® Deep Learning Deployment Toolkit (Intel® DLDT)
- [Deep Learning Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) — A cross-platform command-line tool for importing models and
preparing them for optimal execution with the Deep Learning Inference Engine. The Model Optimizer supports converting Caffe*,
TensorFlow*, MXNet*, Kaldi*, ONNX* models.
- [Deep Learning Inference Engine](inference_engine_intro.md) — A unified API to allow high performance inference on many hardware types
including Intel® CPU, Intel® Processor Graphics, Intel® FPGA, Intel® Neural Compute Stick 2.
- [nGraph](../nGraph_DG/nGraph_dg.md) — graph representation and manipulation engine which is used to represent a model inside Inference Engine and allows the run-time model construction without using Model Optimizer.
* [OpenCV](https://docs.opencv.org/) — OpenCV* community version compiled for Intel® hardware.
Includes PVL libraries for computer vision.
* Drivers and runtimes for OpenCL™ version 2.1
* [Intel® Media SDK](https://software.intel.com/en-us/media-sdk)
* [OpenVX*](https://software.intel.com/en-us/cvsdk-ovx-guide) — Intel's implementation of OpenVX*
optimized for running on Intel® hardware (CPU, GPU, IPU).
* [Demos and samples](Samples_Overview.md).
This Guide provides an overview of the Inference Engine describing the typical workflow for performing
inference of a pre-trained and optimized deep learning model and a set of sample applications.
> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in run-time using nGraph API. To learn about how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_group_intel).
After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for given input data.
Inference Engine is a set of C++ libraries providing a common API to deliver inference solutions on the platform of your choice: CPU, GPU, or VPU. Use the Inference Engine API to read the Intermediate Representation, set the input and output formats, and execute the model on devices. While the C++ libraries are the primary implementation, C libraries and Python bindings are also available.
For Intel® Distribution of OpenVINO™ toolkit, Inference Engine binaries are delivered within release packages.
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">Inference Engine Build Instructions</a>.
To learn about how to use the Inference Engine API for your application, see the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.
For complete API Reference, see the [Inference Engine API References](./api_references.html) section.
Inference Engine uses a plugin architecture. An Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel&reg; hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.
## Table of Contents
* [Inference Engine API Changes History](API_Changes.md)
* [Introduction to Inference Engine](inference_engine_intro.md)
* [Understanding Inference Engine Memory Primitives](Memory_primitives.md)
* [Introduction to Inference Engine Device Query API](InferenceEngine_QueryAPI.md)
* [Adding Your Own Layers to the Inference Engine](Extensibility_DG/Intro.md)
* [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md)
* [[DEPRECATED] Migration from Inference Engine Plugin API to Core API](Migration_CoreAPI.md)
* [Introduction to Performance Topics](Intro_to_Performance.md)
* [Inference Engine Python API Overview](../../inference-engine/ie_bridges/python/docs/api_overview.md)
* [Using Dynamic Batching feature](DynamicBatching.md)
* [Using Static Shape Infer feature](ShapeInference.md)
* [Using Low-Precision 8-bit Integer Inference](Int8Inference.md)
* [Using Bfloat16 Inference](Bfloat16Inference.md)
* Utilities to Validate Your Converted Model
    * [Using Cross Check Tool for Per-Layer Comparison Between Plugins](../../inference-engine/tools/cross_check_tool/README.md)
* [Supported Devices](supported_plugins/Supported_Devices.md)
    * [GPU](supported_plugins/CL_DNN.md)
    * [CPU](supported_plugins/CPU.md)
    * [VPU](supported_plugins/VPU.md)
        * [MYRIAD](supported_plugins/MYRIAD.md)
        * [HDDL](supported_plugins/HDDL.md)
    * [Heterogeneous execution](supported_plugins/HETERO.md)
    * [GNA](supported_plugins/GNA.md)
    * [MULTI](supported_plugins/MULTI.md)
* [Pre-Trained Models](@ref omz_models_intel_index)
* [Known Issues](Known_Issues_Limitations.md)
## Modules in the Inference Engine component
### Core Inference Engine Libraries ###
Your application must link to the core Inference Engine libraries:
* Linux* OS:
    - `libinference_engine.so`, which depends on `libinference_engine_transformations.so`, `libtbb.so`, `libtbbmalloc.so` and `libngraph.so`
* Windows* OS:
    - `inference_engine.dll`, which depends on `inference_engine_transformations.dll`, `tbb.dll`, `tbbmalloc.dll` and `ngraph.dll`
* macOS*:
    - `libinference_engine.dylib`, which depends on `libinference_engine_transformations.dylib`, `libtbb.dylib`, `libtbbmalloc.dylib` and `libngraph.dylib`
The required C++ header files are located in the `include` directory.
This library contains the classes to:
* Create Inference Engine Core object to work with devices and read network (InferenceEngine::Core)
* Manipulate network information (InferenceEngine::CNNNetwork)
* Execute and pass inputs and outputs (InferenceEngine::ExecutableNetwork and InferenceEngine::InferRequest)
### Plugin Libraries to Read a Network Object ###
Starting from the 2020.4 release, Inference Engine introduced a concept of `CNNNetwork` reader plugins. Such plugins can be automatically and dynamically loaded by Inference Engine at runtime, depending on the file format:
* Linux* OS:
    - `libinference_engine_ir_reader.so` to read a network from IR
    - `libinference_engine_onnx_reader.so` to read a network from ONNX model format
* Windows* OS:
    - `inference_engine_ir_reader.dll` to read a network from IR
    - `inference_engine_onnx_reader.dll` to read a network from ONNX model format
### Device-Specific Plugin Libraries ###
For each supported target device, Inference Engine provides a plugin — a DLL/shared library that contains complete implementation for inference on this particular device. The following plugins are available:
| Plugin | Device Type |
| ------- | ----------------------------- |
|CPU | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
|GPU | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
|MYRIAD | Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
|GNA | Intel&reg; Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel&reg; Pentium&reg; Silver J5005 Processor, Intel&reg; Pentium&reg; Silver N5000 Processor, Intel&reg; Celeron&reg; J4005 Processor, Intel&reg; Celeron&reg; J4105 Processor, Intel&reg; Celeron&reg; Processor N4100, Intel&reg; Celeron&reg; Processor N4000, Intel&reg; Core&trade; i3-8121U Processor, Intel&reg; Core&trade; i7-1065G7 Processor, Intel&reg; Core&trade; i7-1060G7 Processor, Intel&reg; Core&trade; i5-1035G4 Processor, Intel&reg; Core&trade; i5-1035G7 Processor, Intel&reg; Core&trade; i5-1035G1 Processor, Intel&reg; Core&trade; i5-1030G7 Processor, Intel&reg; Core&trade; i5-1030G4 Processor, Intel&reg; Core&trade; i3-1005G1 Processor, Intel&reg; Core&trade; i3-1000G1 Processor, Intel&reg; Core&trade; i3-1000G4 Processor |
|HETERO | Automatic splitting of a network inference between several devices (for example, if a device doesn't support certain layers) |
|MULTI | Simultaneous inference of the same network on several devices in parallel |
The table below shows the plugin libraries and additional dependencies for Linux, Windows and macOS platforms.
| Plugin | Library name for Linux | Dependency libraries for Linux | Library name for Windows | Dependency libraries for Windows | Library name for macOS | Dependency libraries for macOS |
|--------|-----------------------------|-------------------------------------------------------------|--------------------------|--------------------------------------------------------------------------------------------------------|------------------------------|---------------------------------------------|
| CPU | `libMKLDNNPlugin.so` | `libinference_engine_lp_transformations.so` | `MKLDNNPlugin.dll` | `inference_engine_lp_transformations.dll` | `libMKLDNNPlugin.so` | `inference_engine_lp_transformations.dylib` |
| GPU | `libclDNNPlugin.so` | `libinference_engine_lp_transformations.so`, `libOpenCL.so` | `clDNNPlugin.dll` | `OpenCL.dll`, `inference_engine_lp_transformations.dll` | Is not supported | - |
| MYRIAD | `libmyriadPlugin.so` | `libusb.so` | `myriadPlugin.dll` | `usb.dll` | `libmyriadPlugin.so` | `libusb.dylib` |
| HDDL | `libHDDLPlugin.so` | `libbsl.so`, `libhddlapi.so`, `libmvnc-hddl.so` | `HDDLPlugin.dll` | `bsl.dll`, `hddlapi.dll`, `json-c.dll`, `libcrypto-1_1-x64.dll`, `libssl-1_1-x64.dll`, `mvnc-hddl.dll` | Is not supported | - |
| GNA | `libGNAPlugin.so` | `libgna.so` | `GNAPlugin.dll` | `gna.dll` | Is not supported | - |
| HETERO | `libHeteroPlugin.so` | Same as for selected plugins | `HeteroPlugin.dll` | Same as for selected plugins | `libHeteroPlugin.so` | Same as for selected plugins |
| MULTI | `libMultiDevicePlugin.so` | Same as for selected plugins | `MultiDevicePlugin.dll` | Same as for selected plugins | `libMultiDevicePlugin.so` | Same as for selected plugins |
> **NOTE**: All plugin libraries also depend on core Inference Engine libraries.
Make sure those libraries are in your computer's path or in the place you pointed to in the plugin loader. Make sure each plugin's related dependencies are in the:
* Linux: `LD_LIBRARY_PATH`
* Windows: `PATH`
* macOS: `DYLD_LIBRARY_PATH`
On Linux and macOS, use the script `bin/setupvars.sh` to set the environment variables.
On Windows, run the `bin\setupvars.bat` batch file to set the environment variables.
To learn more about supported devices and corresponding plugins, see the [Supported Devices](supported_plugins/Supported_Devices.md) chapter.
## Common Workflow for Using the Inference Engine API
The common workflow contains the following steps (condensed into a code sketch after the list):
1. **Create Inference Engine Core object** - Create an `InferenceEngine::Core` object to work with different devices, all device plugins are managed internally by the `Core` object. Register extensions with custom nGraph operations (`InferenceEngine::Core::AddExtension`).
2. **Read the Intermediate Representation** - Using the `InferenceEngine::Core` class, read an Intermediate Representation file into an object of the `InferenceEngine::CNNNetwork` class. This class represents the network in the host memory.
3. **Prepare inputs and outputs format** - After loading the network, specify input and output precision and the layout on the network. For this specification, use the `InferenceEngine::CNNNetwork::getInputsInfo()` and `InferenceEngine::CNNNetwork::getOutputsInfo()`.
4. Pass per device loading configurations specific to this device (`InferenceEngine::Core::SetConfig`), and register extensions to this device (`InferenceEngine::Core::AddExtension`).
5. **Compile and Load Network to device** - Use the `InferenceEngine::Core::LoadNetwork()` method with specific device (e.g. `CPU`, `GPU`, etc.) to compile and load the network on the device. Pass in the per-target load configuration for this compilation and load operation.
6. **Set input data** - With the network loaded, you have an `InferenceEngine::ExecutableNetwork` object. Use this object to create an `InferenceEngine::InferRequest` in which you signal the input buffers to use for input and output. Specify a device-allocated memory and copy it into the device memory directly, or tell the device to use your application memory to save a copy.
7. **Execute** - With the input and output memory now defined, choose your execution mode:
* Synchronously - `InferenceEngine::InferRequest::Infer()` method. Blocks until inference is completed.
* Asynchronously - `InferenceEngine::InferRequest::StartAsync()` method. Check status with the `InferenceEngine::InferRequest::Wait()` method (0 timeout), wait, or specify a completion callback.
8. **Get the output** - After inference is completed, get the output memory or read the memory you provided earlier. Do this with the `InferenceEngine::IInferRequest::GetBlob()` method.
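The following is a condensed sketch of this workflow; `model.xml` is a placeholder path, and error handling, output formatting (step 3) and per-device configuration (step 4) are omitted:
```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    // 1. Create the Core object
    Core core;
    // 2. Read the Intermediate Representation ("model.xml" is a placeholder)
    CNNNetwork network = core.ReadNetwork("model.xml");
    // 3. Prepare the input format (outputs can be configured the same way)
    InputInfo::Ptr inputInfo = network.getInputsInfo().begin()->second;
    inputInfo->setPrecision(Precision::U8);
    // 5. Compile and load the network to the device
    ExecutableNetwork executableNetwork = core.LoadNetwork(network, "CPU");
    // 6. Create an infer request and get the input blob to fill
    InferRequest inferRequest = executableNetwork.CreateInferRequest();
    Blob::Ptr inputBlob = inferRequest.GetBlob(network.getInputsInfo().begin()->first);
    // ... fill inputBlob with input data here ...
    // 7. Execute synchronously
    inferRequest.Infer();
    // 8. Get the output
    Blob::Ptr outputBlob = inferRequest.GetBlob(network.getOutputsInfo().begin()->first);
    return 0;
}
```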
## Video: Inference Engine Concept
[![](https://img.youtube.com/vi/e6R13V8nbak/0.jpg)](https://www.youtube.com/watch?v=e6R13V8nbak)
\htmlonly
<iframe width="560" height="315" src="https://www.youtube.com/embed/e6R13V8nbak" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
\endhtmlonly
## Further Reading
For more details on the Inference Engine API, refer to the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.
**Typical Next Step:** [Introduction to Inference Engine](inference_engine_intro.md)

View File

@@ -17,8 +17,39 @@ dynamically in all of its infer requests using <code>SetBatch()</code> method.
The batch size that was set in passed <code>CNNNetwork</code> object will be used as a maximum batch size limit.
Here is a code example:
```cpp
int dynBatchLimit = FLAGS_bl; //take dynamic batch limit from command line option
// Read network model
Core core;
CNNNetwork network = core.ReadNetwork(modelFileName, weightFileName);
// enable dynamic batching and prepare for setting max batch limit
const std::map<std::string, std::string> dyn_config =
{ { PluginConfigParams::KEY_DYN_BATCH_ENABLED, PluginConfigParams::YES } };
network.setBatchSize(dynBatchLimit);
// create executable network and infer request
auto executable_network = core.LoadNetwork(network, "CPU", dyn_config);
auto infer_request = executable_network.CreateInferRequest();
...
// process a set of images
// dynamically set batch size for subsequent Infer() calls of this request
size_t batchSize = imagesData.size();
infer_request.SetBatch(batchSize);
infer_request.Infer();
...
// process another set of images
batchSize = imagesData2.size();
infer_request.SetBatch(batchSize);
infer_request.Infer();
```
## Limitations

View File

@@ -1,81 +1,80 @@
# Custom nGraph Operation {#openvino_docs_IE_DG_Extensibility_DG_AddingNGraphOps}
Inference Engine Extension API enables you to register operation sets (opsets) with custom nGraph operations to support models with operations which OpenVINO™ does not support out-of-the-box.
## Operation Class
To add your custom nGraph operation, create a new class that extends `ngraph::Op`, which is in turn derived from `ngraph::Node`, the base class for all graph operations in nGraph. Follow the steps below:
1. Add the `NGRAPH_RTTI_DECLARATION` and `NGRAPH_RTTI_DEFINITION` macros which define a `NodeTypeInfo` object that identifies the type of the operation to the graph users and helps with dynamic type resolution. The type info of an nGraph operation currently consists of a string identifier and a version number, but this may change in the future.
2. Implement constructors that optionally take the operation inputs and attributes as parameters.
3. Override the shape inference method `validate_and_infer_types`. This method is called multiple times during graph manipulations to determine the shapes and element types of the operations outputs. To access the input shapes and input element types, use the `get_input_partial_shape()` and `get_input_element_type()` methods of `ngraph::Node`. Set the inferred shape and element type of the output using `set_output_type`.
4. Override the `clone_with_new_inputs` method, which enables graph manipulation routines to create copies of this operation and connect it to different nodes during optimization.
5. Override the `visit_attributes` method, which enables serialization and deserialization of operation attributes. An `AttributeVisitor` is passed to the method, and the implementation is expected to walk over all the attributes in the op using the type-aware `on_attribute` helper. Helpers are already implemented for standard C++ types like `int64_t`, `float`, `bool`, `vector`, and for existing nGraph defined types.
6. Override `evaluate`, which is an optional method that enables the application of constant folding if there is a custom operation on the constant branch.
Based on that, declaration of an operation class can look as follows:
@snippet template_extension/op.hpp op:header
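Since the snippet above resolves only in the rendered documentation, here is a rough, illustrative sketch of what such a declaration can look like. It is modeled on the template extension's operation class and is not a verbatim copy of `op.hpp`:
```cpp
#include <ngraph/ngraph.hpp>

namespace TemplateExtension {

class Operation : public ngraph::op::Op {
public:
    NGRAPH_RTTI_DECLARATION;

    Operation() = default;
    Operation(const ngraph::Output<ngraph::Node>& arg, int64_t add);

    void validate_and_infer_types() override;
    std::shared_ptr<ngraph::Node> clone_with_new_inputs(const ngraph::OutputVector& new_args) const override;
    bool visit_attributes(ngraph::AttributeVisitor& visitor) override;
    bool evaluate(const ngraph::HostTensorVector& outputs, const ngraph::HostTensorVector& inputs) const override;

private:
    int64_t add;  // the operation attribute described below
};

}  // namespace TemplateExtension
```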
### Class Fields
The provided implementation has several fields:
* `add` of type `int64_t` is an attribute of a custom operation.
* `type_info` of type `ngraph::NodeTypeInfo` defines the type and version of an operation.
### Operation Constructors
An nGraph operation contains two constructors:
* Default constructor, which enables you to create an operation without attributes
* Constructor that creates and validates an operation with specified inputs and attributes
@snippet template_extension/op.cpp op:ctor
### `validate_and_infer_types()`
The `ngraph::Node::validate_and_infer_types` method validates operation attributes and calculates output shapes using attributes of the operation.
@snippet template_extension/op.cpp op:validate
### `clone_with_new_inputs()`
The `ngraph::Node::clone_with_new_inputs` method creates a copy of the nGraph operation with new inputs.
@snippet template_extension/op.cpp op:copy
### `visit_attributes()`
The `ngraph::Node::visit_attributes` method enables you to visit all operation attributes.
@snippet template_extension/op.cpp op:visit_attributes
### `evaluate()`
The `ngraph::Node::evaluate` method enables you to apply constant folding to an operation.
@snippet template_extension/op.cpp op:evaluate
## Register Custom Operations in Extension Class
To add custom operations to the [Extension](Extension.md) class, create an operation set with custom operations and implement the `InferenceEngine::IExtension::getOpSets` method:
@snippet template_extension/extension.cpp extension:getOpSets
This method returns a map of opsets that exist in the extension library.
nGraph provides an opset mechanism to group operations into clusters. Different opsets distinguish between different versions of one operation.
When specifying opset names, follow the rules below:
* Use unique opset names.
* Do not use the following built-in opset names: `extension`, `experimental`, `opset1`, `opset2`, `opset3`, ... , `opsetN`.
* Make sure that the Model Optimizer and your extension use the same opset names.
* IR v10 operations have the mandatory `version` attribute specifying the opset.
* `opset1` is the name of the default operations set.
Operations from the default opset cannot be redefined.
Use a custom opset to create a new operation or extend functionality of an existing operation from another opset.
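As a rough illustration of the registration described above (not the verbatim `template_extension/extension.cpp`), a `getOpSets` implementation could look like this, assuming the `TemplateExtension::Operation` class sketched earlier and the opset name `custom_opset`:
```cpp
#include <map>
#include <string>
#include <ngraph/opsets/opset.hpp>

// Sketch: Extension is assumed to be the IExtension implementation of this library.
std::map<std::string, ngraph::OpSet> Extension::getOpSets() {
    std::map<std::string, ngraph::OpSet> opsets;
    ngraph::OpSet opset;
    opset.insert<TemplateExtension::Operation>();  // register the custom operation
    opsets["custom_opset"] = opset;                // unique, non-built-in opset name
    return opsets;
}
```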

Some files were not shown because too many files have changed in this diff.