Compare commits: 2023.1.0.d...releases/2

216 commits, by SHA1:

0f1fa2cbde a75dc09204 ccfb0ed1f1 1363573c48 c9beff1c56 9b42a5311e
4ebe83842e 6cdc1bb302 18fbbd2227 4a24c0721e 13e01c9ea3 8cabbc0768
b7d91f58b0 92562697e8 bd927ed8bb 14cac44cc3 cd8f27d1bf 6c172f33a5
24fa511f49 90d594a631 a8f6fd9c2b d4ad38b78f d053b6c331 846ea1f347
09626eaa9b a195a8a5bf 282f1f6063 45a7295e64 47278caed9 10e2748286
a0599267ea d653e22dfe e220238028 a669d35afe e2124246b1 7b66255f34
4d9a984188 db73b258e6 703b2a85f7 6abeaec8ad e634014906 6fc737af51
acad377cf0 5885d56e1a d59945b73b 6b85a0b33e 9b3b99c84b 0900c39aca
0ec7da591d 6c4462759e a28d93b2bf 400e0657cd e2a469a345 de85202b29
5b283ee6d4 ca36a3335f b1985ff5fd dd0f845a18 3a0d7a28ed 3cdebfcb7c
6fdc6b4c16 7096c01127 2d28a05421 e2ace1da40 f5e9b699ce 5ec50585df
279961332b 0d5f86ae33 9b9f5e3eee c885b1933d 1b6dc06cf0 a3afa57bd9
ad1efb7f16 7eed1d910b 2278100258 361dec7362 d7a917d92c 326b8ad009
cc6321f6d3 5c2a39009e a0b4394423 85378ce176 e2fd335b70 c5175063c8
5fb5057cb6 485bea73b1 105e67573a 2282b0ee5c f00dc87a92 e932a2eda5
bd989cee67 bdf72bcd88 6d634d09a4 05c641ff7c 05a768e357 a0b2200408
e339afe375 7f47108bb3 b13bcbcd0d c2bfbf29fb 14e67d8663 d98cb7bdf8
fa18eecfb7 3aa2f02240 a4fdc1c947 f14f28f32b e3baff25a6 708825b439
91d85a88a1 f77c3d7fdc 36f2e63c9c d474617d12 64a896c22e 2dcd09055f
1a656f4e44 0702b44174 ce21344585 3a28ffaf57 f63be649eb 32a9e98437
b76c903745 4714b8edb8 47d1f2147a 61aa366706 b16ce268eb 629de56910
dad76527d6 b86ab12f0f 170e4d2cce 2a9eec1c3f 92420cd0d5 886254c5b9
e75e647ebe 6e45b62be6 4a70806d10 761c645f14 e19b3befb7 fb3ceb6aa4
9a8d8440a5 543ea75813 f03763defe 114ed1cb4b 3117879c54 2f48787fc4
7848ac7a74 62f126cdd2 4d1c358aa3 bf51d49ad1 6d9699681f 0b248b68dd
d286e0a9ad 21ed761569 2639f35543 1bbd91506b 9a31a3d821 9acc3dfe68
205c23b382 e48965683b eaa5a22979 bfdd1a199f 096a92dcb3 7a05a12190
5d39724934 7cec19fe6e 568096ddeb 34bda79333 0da68d9c70 a82011199a
6bbec510b0 90eaa2666a 4eb4ee1882 fadeaecb6d a5c930eeaa 5135425bb9
204c4ba79a 640ab71b6a 0361fc8e2d ccae439943 5cee8bbf29 a220a0a7af
af2fec9a00 cca57782ce c2e8c3bd92 4833c8db72 3352b483b9 c40da68a2b
0a959ef8e5 cd81789d29 55fb7c6663 1aa89edbf3 6ab6983778 fb4d52068b
21514fa9d5 bb8e2c3137 7a316dcde3 abe9005ffb c6654b9c81 58dd421d58
64bc081abc c5b65f2cb1 59ffa90724 cb4dcbce83 5670e9d8d0 e47287264c
fe1563f0f0 e87ab16e7c cf5c072cf4 6b3a652e54 66eef3c3d9 0accd09c45
f339cf70c6 2ec6d9590c ca116ab8d1 84e935c0f2 5859d44abc 7b67a83d8c
```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 """
```
```diff
@@ -1,14 +1,25 @@
 trigger:
   branches:
     include:
     - master
     - releases/*
   paths:
     exclude:
     - docs/*
 
 resources:
   repositories:
   - repository: openvino_contrib
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/openvino_contrib
+    ref: releases/2021/4
 
   - repository: testdata
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/testdata
+    ref: releases/2021/4
 
 jobs:
 - job: Lin
@@ -43,6 +54,7 @@ jobs:
       echo Python info ; which python ; python --version
       echo Java info ; which java ; java -version
       echo gcc info ; which gcc ; gcc --version
+      echo cmake info ; which cmake ; cmake --version
       lsb_release
       env
       cat /proc/cpuinfo
@@ -52,14 +64,16 @@ jobs:
       df
       lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
       free -h
+      echo TargetBranch: $(System.PullRequest.TargetBranch)
+      echo SourceBranch: $(Build.SourceBranch)
     displayName: 'System info'
 
   - script: |
      set -e
      rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
      rm -rf $(BUILD_DIR) ; mkdir $(BUILD_DIR)
      rm -rf $(BUILD_SAMPLES_DIR) ; mkdir $(BUILD_SAMPLES_DIR)
-      echo TargetBranch: $(System.PullRequest.TargetBranch)
-      echo SourceBranch: $(Build.SourceBranch)
    displayName: 'Make dir'
 
  - checkout: self
@@ -80,7 +94,8 @@ jobs:
    path: testdata
 
  - script: |
-      sudo apt --assume-yes install libusb-1.0-0-dev
+      set -e
+      sudo apt --assume-yes update && sudo apt --assume-yes install libusb-1.0-0-dev
      # For opencv-python: setuptools and upgrade
      sudo apt-get install python3-setuptools patchelf
      python3 -m pip install --upgrade pip
@@ -89,7 +104,7 @@ jobs:
      # For running Python API tests
      python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/src/requirements-dev.txt
      # Speed up build
-      wget https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-linux.zip
+      wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
      unzip ninja-linux.zip
      sudo cp -v ninja /usr/local/bin/
      # Speed up tests
@@ -105,6 +120,7 @@ jobs:
      -DVERBOSE_BUILD=ON
      -DENABLE_TEMPLATE_PLUGIN=ON
      -DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
+      -DBUILD_cuda_plugin=OFF
      -DENABLE_PYTHON=ON
      -DPYTHON_EXECUTABLE=/usr/bin/python3.6
      -DENABLE_WHEEL=ON
```
```diff
@@ -1,3 +1,12 @@
 trigger:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/*
 
 jobs:
 - job: LinCC
   # About 150% of total time
@@ -10,14 +19,12 @@ jobs:
    system.debug: true
    VSTS_HTTP_RETRY: 5
    VSTS_HTTP_TIMEOUT: 200
-    WORKERS_NUMBER: 16
    BUILD_TYPE: Release
    REPO_DIR: $(Build.Repository.LocalPath)
    OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
    MODELS_PATH: $(REPO_DIR)/../testdata
    WORK_DIR: $(Pipeline.Workspace)/_w
    BUILD_DIR: $(WORK_DIR)/build
    BIN_DIR: $(REPO_DIR)/bin/intel64/$(BUILD_TYPE)
    INSTALL_DIR: $(WORK_DIR)/install_pkg
    SETUPVARS: $(INSTALL_DIR)/bin/setupvars.sh
 
@@ -30,6 +37,7 @@ jobs:
      echo Python info ; which python ; python --version
      echo Java info ; which java ; java -version
      echo gcc info ; which gcc ; gcc --version
+      echo cmake info ; which cmake ; cmake --version
      lsb_release
      env
      cat /proc/cpuinfo
@@ -53,10 +61,11 @@ jobs:
    path: openvino
 
  - script: |
-      sudo apt --assume-yes install libusb-1.0-0-dev
+      set -e
+      sudo apt --assume-yes update && sudo apt --assume-yes install libusb-1.0-0-dev
      python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/requirements.txt
      # Speed up build
-      wget https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-linux.zip
+      wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
      unzip ninja-linux.zip
      sudo cp -v ninja /usr/local/bin/
    workingDirectory: $(WORK_DIR)
@@ -76,12 +85,14 @@ jobs:
 
  - script: ninja
    workingDirectory: $(BUILD_DIR)
-    displayName: 'Build'
+    displayName: 'Build LinCC'
 
  - script: ls -alR $(REPO_DIR)/bin/
-    displayName: 'List files'
+    displayName: 'List bin files'
 
  - script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
    workingDirectory: $(BUILD_DIR)
    displayName: 'Install'
 
+  - script: ls -alR $(INSTALL_DIR)
+    displayName: 'List install files'
```
```diff
@@ -1,22 +1,42 @@
 trigger:
   branches:
     include:
     - master
     - releases/*
   paths:
     exclude:
     - docs/*
 
 jobs:
-- job: nGraph_ONNX_Lin
+- job: OpenVINO_ONNX_CI
+  strategy:
+    matrix:
+      Release:
+        BUILD_TYPE: 'Release'
+        PROTOBUF_LITE: 'ON'
+        TOX_COMMAND: 'tox && tox -e zoo_models'
+      Debug:
+        BUILD_TYPE: 'Debug'
+        PROTOBUF_LITE: 'ON'
+        TOX_COMMAND: 'tox'
+    maxParallel: 2
 
   # About 300% of total time
   timeoutInMinutes: 90
 
   pool:
-    name: LIN_VMSS_VENV_ONNX_WU2
+    name: LIN_VMSS_VENV_ONNX_U20_WU2
 
   variables:
     system.debug: true
     VSTS_HTTP_RETRY: 5
     VSTS_HTTP_TIMEOUT: 200
-    WORKERS_NUMBER: 8
-    BUILD_TYPE: Release
     REPO_DIR: $(Build.Repository.LocalPath)
     WORK_DIR: $(Pipeline.Workspace)/_w
     MODELS_DIR: /mount/cinfsshare/onnxtestdata
     TMP_DIR: /mnt/tmp
+    ONNX_MODEL_ZOO_SHA: "d58213534f2a4d1c4b19ba62b3bb5f544353256e"
 
 
   steps:
   - script: |
@@ -27,6 +47,7 @@ jobs:
      echo Python info ; which python ; python --version
      echo Java info ; which java ; java -version
      echo gcc info ; which gcc ; gcc --version
+      echo cmake info ; which cmake ; cmake --version
      lsb_release
      env
      cat /proc/cpuinfo
@@ -40,10 +61,10 @@ jobs:
 
  - script: |
      rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
      sudo rm -rf $(TMP_DIR) ; sudo mkdir $(TMP_DIR) ; sudo chmod 777 -R $(TMP_DIR)
      sudo mkdir -p $(MODELS_DIR)
-      sudo apt --assume-yes install nfs-common
+      sudo apt --assume-yes update && sudo apt --assume-yes install nfs-common
      sudo mount -vvv -t nfs cinfsshare.file.core.windows.net:/cinfsshare/onnxtestdata $(MODELS_DIR) -o vers=4,minorversion=1,sec=sys
+      mkdir -p $(MODELS_DIR)/models_data
    displayName: 'Make dirs'
 
  - checkout: self
@@ -52,31 +73,23 @@ jobs:
    submodules: recursive
    path: openvino
 
-  - script: docker build --tag=openvino-onnx-ci-image --file=.ci/openvino-onnx/Dockerfile .
-    displayName: 'Docker build'
-
-  - script: ngraph/python/tests/test_onnx/model_zoo_preprocess.sh -d $(TMP_DIR) -o
-    displayName: 'Get models'
-
-  - script: |
-      ##wget -O "$(TMP_DIR)/msft.zip" https://onnxruntimetestdata.blob.core.windows.net/models/20191107.zip
-      ##unzip "$(TMP_DIR)/msft.zip" -d "$(MODELS_DIR)/msft"
-      #unzip "/mnt/onnxtestdata/models/20191107.zip" -d "$(MODELS_DIR)/msft"
-      #mv $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/seq_lens_sorted $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/test_data_set_0
-      #mv $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/seq_lens_unsorted $(MODELS_DIR)/msft/opset9/LSTM_Seq_lens_unpacked/test_data_set_1
-    displayName: 'Get MSFT models'
-    enabled: false
+  - script: |
+      set -e
+      sudo apt --assume-yes install git-lfs uidmap
+      curl -fsSL https://get.docker.com -o get-docker.sh
+      sudo sh get-docker.sh
+    workingDirectory: $(WORK_DIR)
+    displayName: 'Install dependencies'
 
  - script: |
      ls -alR $(MODELS_DIR)
      ls -alR $(TMP_DIR)
    displayName: 'List models'
    enabled: false
 
+  - script: ngraph/python/tests/test_onnx/model_zoo_preprocess.sh -d $(MODELS_DIR)/models_data -o -s "$(ONNX_MODEL_ZOO_SHA)"
+    displayName: 'Update models'
+    condition: ne(variables['BUILD_TYPE'], 'Debug')
 
-  - script: sudo fallocate -l 48G /swapfile ; sudo mkswap /swapfile ; sudo swapon /swapfile ; df ; free -h
+  - script: sudo docker build --tag=openvino-onnx-ci-image --file=.ci/openvino-onnx/Dockerfile --build-arg BUILD_TYPE=$(BUILD_TYPE) --build-arg PROTOBUF_LITE=$(PROTOBUF_LITE) .
+    displayName: 'Docker build $(BUILD_TYPE) protobuf-lite: $(PROTOBUF_LITE)'
+
+  - script: sudo fallocate -l 64G /swapfile ; sudo mkswap /swapfile ; sudo swapon /swapfile ; df ; free -h
    displayName: 'Create swap'
 
-  - script: |
-      docker run --name openvino-onnx-ci-container --volume $(TMP_DIR)/model_zoo:/root/.onnx/model_zoo --volume $(MODELS_DIR)/msft:/root/.onnx/model_zoo/MSFT openvino-onnx-ci-image
-    displayName: 'Docker run'
+  - script: sudo docker run --name openvino-onnx-ci-container --volume $(MODELS_DIR)/models_data/model_zoo/onnx_model_zoo_$(ONNX_MODEL_ZOO_SHA):/root/.onnx/model_zoo/onnx_model_zoo --volume $(MODELS_DIR)/msft:/root/.onnx/model_zoo/MSFT openvino-onnx-ci-image /bin/bash -c "$(TOX_COMMAND)"
+    displayName: 'Docker run $(BUILD_TYPE) protobuf-lite: $(PROTOBUF_LITE)'
```
```diff
@@ -1,15 +1,23 @@
 trigger:
   branches:
     include:
     - master
     - releases/*
   paths:
     exclude:
     - docs/*
 
 jobs:
 - job: onnxruntime
   timeoutInMinutes: 90
 
   pool:
-    name: LIN_VMSS_VENV_ONNX_WU2
+    name: LIN_VMSS_VENV_ONNX_U20_WU2
 
   variables:
     system.debug: true
     VSTS_HTTP_RETRY: 5
     VSTS_HTTP_TIMEOUT: 200
     WORKERS_NUMBER: 8
     BUILD_TYPE: Release
     REPO_DIR: $(Build.Repository.LocalPath)
     ONNXRUNTIME_REPO_DIR: $(REPO_DIR)/../onnxruntime
@@ -20,6 +28,7 @@ jobs:
     BUILD_DIR: $(WORK_DIR)/build
+    ONNXRUNTIME_UTILS: $(REPO_DIR)/.ci/azure/ci_utils/onnxruntime
     ONNXRUNTIME_BUILD_DIR: $(ONNXRUNTIME_REPO_DIR)/build
 
   steps:
   - script: |
       curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2019-06-01"
@@ -29,6 +38,7 @@ jobs:
      echo Python info ; which python ; python --version
      echo Java info ; which java ; java -version
      echo gcc info ; which gcc ; gcc --version
+      echo cmake info ; which cmake ; cmake --version
      lsb_release
      env
      cat /proc/cpuinfo
@@ -44,7 +54,7 @@ jobs:
      rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
      sudo rm -rf $(TMP_DIR) ; sudo mkdir $(TMP_DIR) ; sudo chmod 777 -R $(TMP_DIR)
      sudo mkdir -p $(MODELS_DIR)
-      sudo apt --assume-yes install nfs-common
+      sudo apt --assume-yes update && sudo apt --assume-yes install nfs-common
      sudo mount -vvv -t nfs cinfsshare.file.core.windows.net:/cinfsshare/onnxtestdata $(MODELS_DIR) -o vers=4,minorversion=1,sec=sys
    displayName: 'Make dirs'
 
@@ -60,15 +70,14 @@ jobs:
    displayName: 'Clone onnxruntime'
 
  - script: |
-      sudo apt --assume-yes install libusb-1.0-0-dev
-      # For opencv-python: setuptools and upgrade
-      sudo apt-get install python3-setuptools
+      set -e
+      $(REPO_DIR)/install_build_dependencies.sh
      python3 -m pip install --upgrade pip
      python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/requirements.txt
      # For running Python API tests
      python3 -m pip install -r $(REPO_DIR)/inference-engine/ie_bridges/python/src/requirements-dev.txt
      # Speed up build
-      wget https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-linux.zip
+      wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
      unzip ninja-linux.zip
      sudo cp -v ninja /usr/local/bin/
      # Speed up tests
@@ -83,7 +92,7 @@ jobs:
      -GNinja
      -DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
      -DENABLE_PYTHON=ON
-      -DPYTHON_EXECUTABLE=/usr/bin/python3.6
+      -DPYTHON_EXECUTABLE=/usr/bin/python3.8
      -DENABLE_VPU=OFF
      -DENABLE_GNA=OFF
      -DENABLE_OPENCV=OFF
@@ -104,10 +113,10 @@ jobs:
 
  - script: ninja
    workingDirectory: $(BUILD_DIR)
-    displayName: 'Build Lin'
+    displayName: 'Build Lin ONNX'
 
  - script: ls -alR $(REPO_DIR)/bin/
-    displayName: 'List files'
+    displayName: 'List bin files'
 
  - script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
    workingDirectory: $(BUILD_DIR)
@@ -118,7 +127,7 @@ jobs:
      echo "2021.2" > $(INSTALL_DIR)/deployment_tools/inference_engine/version.txt
      CXXFLAGS="-Wno-error=deprecated-declarations" ./build.sh --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --parallel --skip_tests --build_dir $(ONNXRUNTIME_BUILD_DIR)
    workingDirectory: $(ONNXRUNTIME_REPO_DIR)
-    displayName: 'Build ONNX Runtime'
+    displayName: 'Build Lin ONNX Runtime'
 
  - script: |
      source $(INSTALL_DIR)/bin/setupvars.sh
```
```diff
@@ -1,14 +1,25 @@
 trigger:
   branches:
     include:
     - master
     - releases/*
   paths:
     exclude:
     - docs/*
 
 resources:
   repositories:
   - repository: openvino_contrib
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/openvino_contrib
+    ref: releases/2021/4
 
   - repository: testdata
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/testdata
+    ref: releases/2021/4
 
 jobs:
 - job: Mac
@@ -22,7 +33,6 @@ jobs:
    system.debug: true
    VSTS_HTTP_RETRY: 5
    VSTS_HTTP_TIMEOUT: 200
-    WORKERS_NUMBER: 3
    BUILD_TYPE: Release
    REPO_DIR: $(Build.Repository.LocalPath)
    OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
@@ -37,11 +47,11 @@ jobs:
  - script: |
      whoami
      uname -a
-      which python3
-      python3 --version
-      which java
-      java -version
-      gcc --version
+      echo Python3 info ; which python3 ; python3 --version
+      echo Python info ; which python ; python --version
+      echo Java info ; which java ; java -version
+      echo gcc info ; which gcc ; gcc --version
+      echo cmake info ; which cmake ; cmake --version
      xcrun --sdk macosx --show-sdk-version
      env
      sysctl -a
@@ -90,21 +100,27 @@ jobs:
      # Disable errors with Ninja
      export CXXFLAGS="-Wno-error=unused-command-line-argument"
      export CFLAGS="-Wno-error=unused-command-line-argument"
-      cmake -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules $(REPO_DIR)
+      cmake -GNinja -DVERBOSE_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DBUILD_cuda_plugin=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules $(REPO_DIR)
    workingDirectory: $(BUILD_DIR)
    displayName: 'CMake'
 
+  - script: ls -alR $(REPO_DIR)/inference-engine/temp/
+    displayName: 'List temp SDKs'
+
  - script: ninja
    workingDirectory: $(BUILD_DIR)
    displayName: 'Build Mac'
 
  - script: ls -alR $(REPO_DIR)/bin/
-    displayName: 'List files'
+    displayName: 'List bin files'
 
  - script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
    workingDirectory: $(BUILD_DIR)
    displayName: 'Install'
 
+  - script: ls -alR $(INSTALL_DIR)
+    displayName: 'List install files'
+
  - script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*:IE_CPU.onnx_model_sigmoid:IE_CPU/GRUSequenceOp.onnx_model_gru* --gtest_output=xml:TEST-NGraphUT.xml
    displayName: 'nGraph UT'
    continueOnError: false
```
```diff
@@ -1,14 +1,25 @@
 trigger:
   branches:
     include:
     - master
     - releases/*
   paths:
     exclude:
     - docs/*
 
 resources:
   repositories:
   - repository: openvino_contrib
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/openvino_contrib
+    ref: releases/2021/4
 
   - repository: testdata
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/testdata
+    ref: releases/2021/4
 
 jobs:
 - job: Win
@@ -16,13 +27,12 @@ jobs:
  timeoutInMinutes: 120
 
  pool:
-    name: WIN_VMSS_VENV_F8S_WU2
+    name: WIN_VMSS_VENV_F16S_WU2
 
  variables:
    system.debug: true
    VSTS_HTTP_RETRY: 5
    VSTS_HTTP_TIMEOUT: 200
-    WORKERS_NUMBER: 8
    BUILD_TYPE: Release
    REPO_DIR: $(Build.Repository.LocalPath)
    OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
@@ -35,14 +45,13 @@ jobs:
    MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe
    INSTALL_DIR: $(WORK_DIR)\install_pkg
    SETUPVARS: $(INSTALL_DIR)\bin\setupvars.bat
    IB_DIR: C:\Program Files (x86)\IncrediBuild
    IB_TESTCONSOLE: $(IB_DIR)\IBTestConsole.exe
-    TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;$(IB_DIR);%PATH%
+    TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;%PATH%
 
  steps:
  - script: |
      powershell -command "Invoke-RestMethod -Headers @{\"Metadata\"=\"true\"} -Method GET -Uri http://169.254.169.254/metadata/instance/compute?api-version=2019-06-01 | format-custom"
-      where python3
-      python3 --version
+      where python
+      python --version
      where java
@@ -60,12 +69,6 @@ jobs:
      rd /Q /S $(BUILD_SAMPLES_DIR) & mkdir $(BUILD_SAMPLES_DIR)
    displayName: 'Make dir'
 
-  - script: |
-      certutil -urlcache -split -f https://openvinoweb.z5.web.core.windows.net/incredibuild/install_ib_console.bat install_ib_console.bat
-      call install_ib_console.bat
-    workingDirectory: $(WORK_DIR)
-    displayName: 'Install IncrediBuild'
-
  - checkout: self
    clean: true
    lfs: false
@@ -84,7 +87,8 @@ jobs:
    path: testdata
 
  - script: |
-      certutil -urlcache -split -f https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-win.zip ninja-win.zip
+      rem Speed up build
+      powershell -command "Invoke-WebRequest https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-win.zip -OutFile ninja-win.zip"
      powershell -command "Expand-Archive -Force ninja-win.zip"
      git clone https://github.com/google/gtest-parallel.git
    workingDirectory: $(WORK_DIR)
@@ -92,13 +96,14 @@ jobs:
 
  - script: |
      set PATH=$(WORK_DIR)\ninja-win;%PATH%
-      call "$(MSVS_VARS_PATH)" && cmake -GNinja -DENABLE_FASTER_BUILD=ON -DENABLE_TEMPLATE_PLUGIN=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
+      call "$(MSVS_VARS_PATH)" && cmake -GNinja -DENABLE_FASTER_BUILD=ON -DENABLE_TEMPLATE_PLUGIN=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DBUILD_cuda_plugin=OFF -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
    workingDirectory: $(BUILD_DIR)
    displayName: 'CMake'
 
-  - script: |
-      set PATH=$(WORK_DIR)\ninja-win;%PATH%
-      call "$(MSVS_VARS_PATH)" && "C:\Program Files (x86)\IncrediBuild\BuildConsole.exe" /COMMAND="ninja"
+  - script: dir $(REPO_DIR)\inference-engine\temp\ /s
+    displayName: 'List temp SDKs'
+
+  - script: call "$(MSVS_VARS_PATH)" && $(WORK_DIR)\ninja-win\ninja
    workingDirectory: $(BUILD_DIR)
    displayName: 'Build Win'
@@ -120,6 +125,9 @@ jobs:
    workingDirectory: $(BUILD_SAMPLES_DIR)
    displayName: 'Build c samples'
 
+  - script: rd /Q /S $(BUILD_DIR)
+    displayName: 'Clean build dir'
+
  - script: |
      set PATH=$(TEST_ENV_PATH)
      $(BIN_DIR)\unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml
@@ -128,8 +136,8 @@ jobs:
 
  - script: |
      set PATH=$(TEST_ENV_PATH)
-      "$(IB_TESTCONSOLE)" $(BIN_DIR)\InferenceEngineUnitTests.exe --gtest_output=xml:TEST-InferenceEngineUnitTests-IB.xml
-    displayName: 'IE UT old - IB'
+      $(BIN_DIR)\InferenceEngineUnitTests.exe --gtest_output=xml:TEST-InferenceEngineUnitTests.xml
+    displayName: 'IE UT old'
 
  - script: |
      set PATH=$(TEST_ENV_PATH)
@@ -175,9 +183,8 @@ jobs:
 
  - script: |
      set PATH=$(TEST_ENV_PATH)
-      rem $(BIN_DIR)\cpuFuncTests.exe --gtest_filter=*smoke* --gtest_output=xml:TEST-cpuFuncTests.xml
-      "$(IB_TESTCONSOLE)" $(BIN_DIR)\cpuFuncTests.exe --gtest_filter=*smoke*:-*CompareWithRefs/base_size=16_pre_nms_topn=100_post_nms_topn=100_nms_thresh=0.7_feat_stride=1_min_size=1_ratio* --gtest_output=xml:TEST-cpuFuncTests-IB.xml /testlevel=24
-    displayName: 'CPU FuncTests - IB'
+      $(BIN_DIR)\cpuFuncTests.exe --gtest_filter=*smoke* --gtest_output=xml:TEST-cpuFuncTests.xml
+    displayName: 'CPU FuncTests'
    continueOnError: false
 
  - script: |
@@ -200,8 +207,3 @@ jobs:
    buildPlatform: 'x64' # Optional
    buildConfiguration: 'Windows' # Optional
    #publishRunAttachments: true # Optional
-
-  - script: echo Stop IncrediBuild_Agent && net stop IncrediBuild_Agent
-    displayName: Stop IncrediBuild
-    continueOnError: true
-    enabled: false
```
```diff
@@ -1,7 +1,16 @@
 trigger:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/*
 
 jobs:
 - job: WinCC
   # About 150% of total time
-  timeoutInMinutes: 120
+  timeoutInMinutes: 60
 
   pool:
     name: WIN_VMSS_VENV_F8S_WU2
@@ -10,26 +19,22 @@ jobs:
    system.debug: true
    VSTS_HTTP_RETRY: 5
    VSTS_HTTP_TIMEOUT: 200
-    WORKERS_NUMBER: 8
    BUILD_TYPE: Release
    REPO_DIR: $(Build.Repository.LocalPath)
    OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
    MODELS_PATH: $(REPO_DIR)\..\testdata
    WORK_DIR: $(Pipeline.Workspace)\_w
    BUILD_DIR: D:\build
    BIN_DIR: $(REPO_DIR)\bin\intel64
    MSVS_VARS_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars64.bat
    MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe
    INSTALL_DIR: $(WORK_DIR)\install_pkg
    SETUPVARS: $(INSTALL_DIR)\bin\setupvars.bat
-    IB_DIR: C:\Program Files (x86)\IncrediBuild
-    IB_TESTCONSOLE: $(IB_DIR)\IBTestConsole.exe
-    TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.2\opencv\bin;$(IB_DIR);%PATH%
 
  steps:
  - script: |
      powershell -command "Invoke-RestMethod -Headers @{\"Metadata\"=\"true\"} -Method GET -Uri http://169.254.169.254/metadata/instance/compute?api-version=2019-06-01 | format-custom"
-      where python3
-      python3 --version
+      where python
+      python --version
      where java
@@ -46,12 +51,6 @@ jobs:
      rd /Q /S $(BUILD_DIR) & mkdir $(BUILD_DIR)
    displayName: 'Make dir'
 
-  - script: |
-      certutil -urlcache -split -f https://openvinoweb.z5.web.core.windows.net/incredibuild/install_ib_console.bat install_ib_console.bat
-      call install_ib_console.bat
-    workingDirectory: $(WORK_DIR)
-    displayName: 'Install IncrediBuild'
-
  - checkout: self
    clean: true
    lfs: false
@@ -59,7 +58,8 @@ jobs:
    path: openvino
 
  - script: |
-      certutil -urlcache -split -f https://github.com/ninja-build/ninja/releases/download/v1.10.0/ninja-win.zip ninja-win.zip
+      rem Speed up build
+      powershell -command "Invoke-WebRequest https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-win.zip -OutFile ninja-win.zip"
      powershell -command "Expand-Archive -Force ninja-win.zip"
    workingDirectory: $(WORK_DIR)
    displayName: 'Install dependencies'
@@ -70,20 +70,19 @@ jobs:
    workingDirectory: $(BUILD_DIR)
    displayName: 'CMake'
 
-  - script: |
-      set PATH=$(WORK_DIR)\ninja-win;%PATH%
-      call "$(MSVS_VARS_PATH)" && "C:\Program Files (x86)\IncrediBuild\BuildConsole.exe" /COMMAND="ninja"
+  - script: dir $(REPO_DIR)\inference-engine\temp\ /s
+    displayName: 'List temp SDKs'
+
+  - script: call "$(MSVS_VARS_PATH)" && $(WORK_DIR)\ninja-win\ninja
    workingDirectory: $(BUILD_DIR)
-    displayName: 'Build Win'
+    displayName: 'Build Win CC'
 
  - script: dir $(REPO_DIR)\bin\ /s
-    displayName: 'List files'
+    displayName: 'List bin files'
 
  - script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake
    workingDirectory: $(BUILD_DIR)
    displayName: 'Install'
 
-  - script: echo Stop IncrediBuild_Agent && net stop IncrediBuild_Agent
-    displayName: Stop IncrediBuild
-    continueOnError: true
-    enabled: false
+  - script: dir $(INSTALL_DIR) /s
+    displayName: 'List install files'
```
```diff
@@ -1,6 +1,6 @@
 #!/usr/bin/python3
 
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 import logging
```

```diff
@@ -1,6 +1,6 @@
 #!/usr/bin/python3
 
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 import requests
```

```diff
@@ -1,6 +1,6 @@
 #!/usr/bin/python3
 
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 import argparse
```

```diff
@@ -1,6 +1,6 @@
 #!/usr/bin/python3
 
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 import requests
```

```diff
@@ -1,6 +1,6 @@
 #!/usr/bin/python3
 
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 import datetime
```
**.github/org_control/check_org.py**

```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 """
```
**.github/org_control/check_pr.py**

```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 """
@@ -139,7 +139,7 @@ def update_labels(gh_api, pull, non_org_intel_pr_users, non_org_pr_users):
 
 def get_wrong_commits(pull):
     """Returns commits with incorrect user and email"""
-    pr_author_email = pull.user.email.lower()
+    pr_author_email = (pull.user.email or "").lower()
     print("GitHub PR author email:", pr_author_email)
     print("Check commits:")
     wrong_commits = set()
@@ -147,21 +147,29 @@ def get_wrong_commits(pull):
         # import pprint; pprint.pprint(commit.raw_data)
         print("Commit SHA:", commit.sha)
         # Use raw data because commit author can be non GitHub user
-        commit_email = commit.raw_data["commit"]["author"]["email"].lower()
-        print("  Commit email:", commit_email)
+        commit_author_email = (commit.raw_data["commit"]["author"]["email"] or "").lower()
+        commit_committer_email = (commit.raw_data["commit"]["committer"]["email"] or "").lower()
+        print("  Commit author email:", commit_author_email)
+        print("  Commit committer email:", commit_committer_email)
         if not github_api.is_valid_user(commit.author):
             print(
-                "  ERROR: User with the commit email is absent in GitHub:",
+                "  ERROR: User with the commit author email is absent in GitHub:",
                 commit.raw_data["commit"]["author"]["name"],
             )
             wrong_commits.add(commit.sha)
+        if not github_api.is_valid_user(commit.committer):
+            print(
+                "  ERROR: User with the commit committer email is absent in GitHub:",
+                commit.raw_data["commit"]["committer"]["name"],
+            )
+            wrong_commits.add(commit.sha)
         if not commit.raw_data["commit"]["verification"]["verified"]:
             print(
                 "  WARNING: The commit is not verified. Reason:",
                 commit.raw_data["commit"]["verification"]["reason"],
             )
-        if pr_author_email != commit_email:
-            print("  WARNING: Commit email and GitHub PR author public email are differnt")
+        if pr_author_email != commit_author_email or pr_author_email != commit_committer_email:
+            print("  WARNING: Commit emails and GitHub PR author public email are differnt")
     return wrong_commits
```
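The `or ""` guards introduced above matter because the GitHub API reports `None` for a user whose public email is hidden, and calling `.lower()` on `None` raises `TypeError`. A minimal standalone sketch of the pattern, using a hypothetical `raw_commit` payload shaped like GitHub's commit REST response (this is an illustration, not the actual check_pr.py module):

```python
def normalized_email(value):
    """Lower-case an email that may be None (hidden GitHub profile email)."""
    return (value or "").lower()

# Hypothetical payload shaped like GitHub's commit REST response.
raw_commit = {
    "commit": {
        "author": {"email": "Dev@Example.com"},
        "committer": {"email": None},  # e.g. a committer with a hidden email
    }
}

author_email = normalized_email(raw_commit["commit"]["author"]["email"])
committer_email = normalized_email(raw_commit["commit"]["committer"]["email"])

assert author_email == "dev@example.com"
assert committer_email == ""  # no TypeError; compares unequal to any real address
```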
**.github/org_control/configs.py**

```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 """
```
**.github/org_control/github_api.py**

```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 """
```
**.github/org_control/ldap_api.py**

```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
 """
```
**.github/workflows/build_doc.yml**

```diff
@@ -14,14 +14,25 @@ jobs:
 
       - name: Install dependencies
         run: |
+          set -e
+          # install doc dependencies
           sudo apt update
           sudo apt --assume-yes install libusb-1.0-0-dev graphviz texlive
           python3 -m pip install lxml
+          cd docs
+          python -m pip install -r requirements.txt --user
+          cd openvino_sphinx_theme
+          python setup.py install --user
+          cd ../..
+          # install doxyrest
+          wget https://github.com/vovkos/doxyrest/releases/download/doxyrest-2.1.3/doxyrest-2.1.3-linux-amd64.tar.xz
+          tar -xf doxyrest-2.1.3-linux-amd64.tar.xz
+          echo "$(pwd)/doxyrest-2.1.3-linux-amd64/bin/" >> $GITHUB_PATH
+          # install doxygen
           mkdir doxygen
           cd doxygen
           git clone https://github.com/doxygen/doxygen.git
           cd doxygen
-          git checkout Release_1_9_1
+          git checkout Release_1_9_2
           mkdir build
           cd build
           cmake ..
@@ -32,15 +43,37 @@ jobs:
         run: |
           mkdir build
           cd build
-          cmake -DENABLE_DOCS=ON ..
+          cmake -DENABLE_DOCS=ON -DENABLE_PYTHON=ON -DNGRAPH_PYTHON_BUILD_ENABLE=ON -DCMAKE_BUILD_TYPE=Release ..
 
       - name: Build doc
-        run: cmake --build . --target openvino_docs
+        run: |
+          cmake --build . --target sphinx_docs
+        working-directory: build
+
+      - name: Archive HTML
+        run: |
+          zip -r openvino_html.zip _build
+        working-directory: build/docs
+
+      - name: Run Pytest
+        run: |
+          pytest --doxygen="./build/docs/doxygen.log" \
+                 --confcutdir="./docs/scripts/tests/" \
+                 --html="./build/docs/_artifacts/doc-generation.html" \
+                 --doxygen-strip="$(pwd)" \
+                 --doxygen-xfail="./docs/doxygen-xfail.txt" \
+                 --self-contained-html ./docs/scripts/tests/test_docs.py
 
-      - name: 'Upload doc'
+      - name: 'Upload test results'
+        if: always()
+        uses: actions/upload-artifact@v2
+        with:
+          name: openvino_doc_pytest
+          path: build/docs/_artifacts/
+
+      - name: 'Upload html'
+        if: github.event_name == 'push'
         uses: actions/upload-artifact@v2
         with:
-          name: openvino_doc
-          path: build/docs/html/
+          name: openvino_html
+          path: build/docs/openvino_html.zip
```
**.github/workflows/code_style.yml**

```diff
@@ -10,10 +10,13 @@ jobs:
           submodules: recursive
 
       - name: Install clang-format-9
-        run: sudo apt --assume-yes install clang-format-9
+        run: |
+          sudo apt update
+          sudo apt --assume-yes install clang-format-9
 
       - name: Install dependencies
         run: |
+          sudo apt update
           sudo apt --assume-yes install libusb-1.0-0-dev
           python3 -m pip install --upgrade pip
           python3 -m pip install -r ./inference-engine/ie_bridges/python/requirements.txt
@@ -50,7 +53,9 @@ jobs:
           submodules: recursive
 
       - name: Install ShellCheck
-        run: sudo apt --assume-yes install shellcheck
+        run: |
+          sudo apt update
+          sudo apt --assume-yes install shellcheck
 
       - name: Install dependencies
         run: |
```
**.github/workflows/mo.yml**

```diff
@@ -41,6 +41,7 @@ jobs:
           pip install -r requirements.txt
           pip install -r requirements_dev.txt
           # requrements for CMake
+          sudo apt update
           sudo apt --assume-yes install libusb-1.0-0-dev
         working-directory: model-optimizer
```
**.gitignore**

```diff
@@ -2,6 +2,8 @@
 _*
 # but ensure we don't skip __init__.py
 !__init__.py
+# and sphinx documentation folders
+!docs/_*
 
 # developer tools
 *.idea
```
```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -90,7 +90,7 @@ function(build_ngraph)
         ngraph_set(NGRAPH_PYTHON_BUILD_ENABLE OFF)
     endif()
 
-    if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
+    if(OV_COMPILER_IS_CLANG)
         ie_add_compiler_flags(-Wno-error=uninitialized -Wno-error=literal-conversion)
     elseif(UNIX)
         ie_add_compiler_flags(-Wno-error=maybe-uninitialized -Wno-error=return-type)
```
```diff
@@ -1,5 +1,5 @@
 # OpenVINO™ Toolkit
-[](https://github.com/openvinotoolkit/openvino/releases/tag/2021.3)
+[](https://github.com/openvinotoolkit/openvino/releases/tag/2021.4.2)
 [](LICENSE)
 
 
@@ -42,7 +42,7 @@ Please report questions, issues and suggestions using:
 ---
 \* Other names and brands may be claimed as the property of others.
 
-[Open Model Zoo]:https://github.com/opencv/open_model_zoo
+[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
 [Inference Engine]:https://software.intel.com/en-us/articles/OpenVINO-InferEngine
 [Model Optimizer]:https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
 [nGraph]:https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_DevGuide.html
```
```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
```

The same copyright-year hunk repeats in 13 further build scripts in this compare.
```diff
@@ -17,7 +17,7 @@ if (ENABLE_SANITIZER)
 
     if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
         set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=gold")
-    elseif(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$" AND NOT WIN32)
+    elseif(OV_COMPILER_IS_CLANG AND NOT WIN32)
         if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.0)
             set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
         endif()
@@ -35,7 +35,7 @@ if (ENABLE_THREAD_SANITIZER)
     set(SANITIZER_LINKER_FLAGS "-fsanitize=thread")
     set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -Wl,-z,nodelete")
 
-    if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$" AND NOT WIN32)
+    if(OV_COMPILER_IS_CLANG AND NOT WIN32)
         if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.0)
             set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
         else()
```
```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -23,7 +23,7 @@ if (CMAKE_BUILD_TYPE STREQUAL "Release")
     if (NOT ENABLE_SANITIZER)
         set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -s")
     endif()
-elseif(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
+elseif(OV_COMPILER_IS_CLANG)
     set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
 elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
     if (NOT ENABLE_SANITIZER)
```
```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
 set(OV_COVERAGE_GCDA_DATA_DIRECTORY "${CMAKE_BINARY_DIR}")
```
```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
```

The same copyright-year hunk repeats in 17 further files; one file in this run instead carries:

```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2020 Intel Corporation
+# Copyright (C) 2020-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
```
```diff
@@ -56,7 +56,7 @@ ie_option (VERBOSE_BUILD "shows extra information about build" OFF)
 
 ie_option (ENABLE_UNSAFE_LOCATIONS "skip check for MD5 for dependency" OFF)
 
-ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "CMAKE_CXX_COMPILER_ID MATCHES ^(Apple)?Clang$; NOT WIN32" OFF)
+ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG; NOT WIN32" OFF)
 
 #
 # Check features
```
```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
```

The same copyright-year hunk repeats in 10 further files in this compare.
```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
 # Target system specific flags
@@ -55,3 +55,9 @@ endif()
 if(UNIX AND NOT APPLE)
     set(LINUX ON)
 endif()
+
+if(CMAKE_CXX_COMPILER_ID MATCHES "^(Apple)?Clang$")
+    set(OV_COMPILER_IS_CLANG ON)
+else()
+    set(OV_COMPILER_IS_CLANG OFF)
+endif()
```
```diff
@@ -1,4 +1,4 @@
-# Copyright (C) 2018-2021 Intel Corporation
+# Copyright (C) 2018-2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
 # TBB_FOUND should not be set explicitly. It is defined automatically by CMake.
```

The same copyright-year hunk repeats in 11 further files in this compare.
@@ -44,327 +44,212 @@ if(NOT ENABLE_DOCKER)
endforeach()
endif()

set(LINKCHECKER_PY "" CACHE FILEPATH "Path to linkchecker.py for documentation check")
set(OMZ_DOCS_DIR "" CACHE PATH "Path to open_model_zoo documentation")
set(WORKBENCH_DOCS_DIR "" CACHE PATH "Path to workbench documentation")
set(POT_DOCS_DIR "" CACHE PATH "Path to post-training-compression-tool documentation")
set(GST_DOCS_DIR "" CACHE PATH "Path to gst-video-analytics documentation")
set(LINKCHECKER_PY "" CACHE FILEPATH "Path to linkchecker.py for documentation check dir.")
set(ENABLE_OPENVINO_NOTEBOOKS OFF CACHE BOOL "Build with openvino notebooks")
set(OMZ_DOCS_DIR "" CACHE PATH "Path to open_model_zoo documentation dir.")
set(WORKBENCH_DOCS_DIR "" CACHE PATH "Path to workbench documentation dir.")
set(POT_DOCS_DIR "" CACHE PATH "Path to post-training-compression-tool documentation dir.")
set(OVMS_DOCS_DIR "" CACHE PATH "Path to model server documentation dir.")
set(GST_DOCS_DIR "" CACHE PATH "Path to gst-video-analytics documentation dir.")
set(GRAPH_CSV_DIR "" CACHE PATH "Path to the folder containing csv data for rendering graphs.")

function(build_docs)
find_package(Doxygen REQUIRED dot)
find_package(PythonInterp 3 REQUIRED)
find_package(LATEX REQUIRED)

execute_process(
COMMAND ${PYTHON_EXECUTABLE} -m pip show lxml
RESULT_VARIABLE PIP_EXIT_CODE
OUTPUT_QUIET
)

if (NOT ${PIP_EXIT_CODE} EQUAL 0)
message(FATAL_ERROR "lxml package is not installed. Please use \"pip install lxml\".")
find_program(DOXYREST_EXECUTABLE NAMES doxyrest)
if (NOT DOXYREST_EXECUTABLE)
message(FATAL_ERROR "No doxyrest found. Documentation output is not available")
endif()

set(DOCS_BUILD_DIR "${CMAKE_CURRENT_BINARY_DIR}")
set(DOXYGEN_DIR "${OpenVINO_MAIN_SOURCE_DIR}/docs/doxygen")
set(IE_SOURCE_DIR "${OpenVINO_MAIN_SOURCE_DIR}/inference-engine")
set(PYTHON_API_IN "${IE_SOURCE_DIR}/ie_bridges/python/src/openvino/inference_engine/ie_api.pyx")
set(PYTHON_API_OUT "${DOCS_BUILD_DIR}/python_api/ie_api.pyx")
set(C_API "${IE_SOURCE_DIR}/ie_bridges/c/include")
set(PLUGIN_API_DIR "${DOCS_BUILD_DIR}/IE_PLUGIN_DG")
set(DOCS_SOURCE_DIR "${OpenVINO_MAIN_SOURCE_DIR}/docs")
set(SCRIPTS_DIR "${DOCS_SOURCE_DIR}/scripts")

# API INPUT

set(NGRAPH_DIR "${OpenVINO_MAIN_SOURCE_DIR}/ngraph")
set(NGRAPH_PY_DIR "${NGRAPH_DIR}/python/src/ngraph/")
set(NGRAPH_CPP_DIR "${NGRAPH_DIR}/core/include/" "${NGRAPH_DIR}/frontend/onnx_import/include")

# Preprocessing scripts
set(DOXY_MD_FILTER "${DOXYGEN_DIR}/doxy_md_filter.py")
set(DOXY_LAYOUT_SCRIPT "${DOXYGEN_DIR}/build_main_layout.py")
set(DOXY_LOG_SCRIPT "${DOXYGEN_DIR}/log.py")
set(PYX_FILTER "${DOXYGEN_DIR}/pyx_filter.py")

# assets dir
set(ASSETS_DIR "${DOXYGEN_DIR}/assets")
# markdown docs
set(MARKDOWN_INPUT "${DOCS_BUILD_DIR}")

# header and footer
set(HEADER_SOURCE "${DOXYGEN_DIR}/header.html.in")
set(FOOTER_SOURCE "${DOXYGEN_DIR}/footer.html.in")
set(HEADER_BUILD "${DOCS_BUILD_DIR}/header.html")
set(FOOTER_BUILD "${DOCS_BUILD_DIR}/footer.html")

configure_file(${HEADER_SOURCE} ${HEADER_BUILD} @ONLY)
configure_file(${FOOTER_SOURCE} ${FOOTER_BUILD} @ONLY)

file(GLOB_RECURSE doc_source_files
LIST_DIRECTORIES true RELATIVE ${OpenVINO_MAIN_SOURCE_DIR}
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.svg"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.svg")

configure_file(${PYTHON_API_IN} ${PYTHON_API_OUT} @ONLY)

set(NGRAPH_CPP_CONFIG_SOURCE "${DOXYGEN_DIR}/ngraph_cpp_api.config")
set(NGRAPH_PY_CONFIG_SOURCE "${DOXYGEN_DIR}/ngraph_py_api.config")
set(IE_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_docs.config")
set(C_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_c_api.config")
set(PY_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_py_api.config")
set(PLUGIN_CONFIG_SOURCE "${DOXYGEN_DIR}/ie_plugin_api.config")

set(NGRAPH_CPP_CONFIG_BUILD "${DOCS_BUILD_DIR}/ngraph_cpp_api.config")
set(NGRAPH_PY_CONFIG_BUILD "${DOCS_BUILD_DIR}/ngraph_py_api.config")
set(IE_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_docs.config")
set(C_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_c_api.config")
set(PY_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_py_api.config")
set(PLUGIN_CONFIG_BUILD "${DOCS_BUILD_DIR}/ie_plugin_api.config")

set(NGRAPH_CPP_LAYOUT_SOURCE "${DOXYGEN_DIR}/ngraph_cpp_api.xml")
set(NGRAPH_PY_LAYOUT_SOURCE "${DOXYGEN_DIR}/ngraph_py_api.xml")
set(IE_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_docs.xml")
set(OPENVINO_LAYOUT_SOURCE "${DOXYGEN_DIR}/openvino_docs.xml")
set(C_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_c_api.xml")
set(PY_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_py_api.xml")
set(PLUGIN_LAYOUT_SOURCE "${DOXYGEN_DIR}/ie_plugin_api.xml")

set(NGRAPH_CPP_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ngraph_cpp_api.xml")
set(NGRAPH_PY_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ngraph_py_api.xml")
set(IE_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_docs.xml")
set(OPENVINO_LAYOUT_BUILD "${DOCS_BUILD_DIR}/openvino_docs.xml")
set(C_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_c_api.xml")
set(PY_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_py_api.xml")
set(PLUGIN_LAYOUT_BUILD "${DOCS_BUILD_DIR}/ie_plugin_api.xml")
# IE C++ API
set(IE_SOURCE_DIR "${OpenVINO_MAIN_SOURCE_DIR}/inference-engine")

# IE C API
set(IE_C_API "${IE_SOURCE_DIR}/ie_bridges/c/include")

# Preprocessing scripts
set(DOXY_MD_FILTER "${SCRIPTS_DIR}/doxy_md_filter.py")
set(PYNGRAPH_REF_SCRIPT "${SCRIPTS_DIR}/pyngraph_ref.py")
set(DOXY_LOG_SCRIPT "${SCRIPTS_DIR}/log.py")
set(PYX_FILTER "${SCRIPTS_DIR}/pyx_filter.py")
set(PREPARE_XML_SCRIPT "${SCRIPTS_DIR}/prepare_xml.py")
set(REMOVE_XML_SCRIPT "${SCRIPTS_DIR}/remove_xml.py")
set(COPY_IMAGES_SCRIPT "${SCRIPTS_DIR}/copy_images.py")
set(DOC_TEST_DIR "${SCRIPTS_DIR}/tests")
set(DOXYGEN_MAPPING_SCRIPT "${SCRIPTS_DIR}/create_mapping.py")
set(DOXYGEN_MAPPING_FILE "${DOCS_BUILD_DIR}/mapping.json")

# out dirs
set(OUTPUT_DIRECTORY "${DOCS_BUILD_DIR}/html")
set(IE_OUTPUT "${OUTPUT_DIRECTORY}")
set(C_OUTPUT "${OUTPUT_DIRECTORY}/ie_c_api")
set(PY_OUTPUT "${OUTPUT_DIRECTORY}/ie_python_api")
set(PLUGIN_OUTPUT "${OUTPUT_DIRECTORY}/ie_plugin_api")
set(NGRAPH_CPP_OUTPUT "${OUTPUT_DIRECTORY}/ngraph_cpp_api")
set(NGRAPH_PY_OUTPUT "${OUTPUT_DIRECTORY}/ngraph_python_api")
set(XML_OUTPUT "${DOCS_BUILD_DIR}/xml")
set(RST_OUTPUT "${DOCS_BUILD_DIR}/rst")
set(SPHINX_OUTPUT "${DOCS_BUILD_DIR}/_build")

# Tables of contents
configure_file(${NGRAPH_CPP_LAYOUT_SOURCE} ${NGRAPH_CPP_LAYOUT_BUILD} @ONLY)
configure_file(${NGRAPH_PY_LAYOUT_SOURCE} ${NGRAPH_PY_LAYOUT_BUILD} @ONLY)
configure_file(${IE_LAYOUT_SOURCE} ${IE_LAYOUT_BUILD} @ONLY)
configure_file(${OPENVINO_LAYOUT_SOURCE} ${OPENVINO_LAYOUT_BUILD} @ONLY)
configure_file(${C_LAYOUT_SOURCE} ${C_LAYOUT_BUILD} @ONLY)
configure_file(${PY_LAYOUT_SOURCE} ${PY_LAYOUT_BUILD} @ONLY)
configure_file(${PLUGIN_LAYOUT_SOURCE} ${PLUGIN_LAYOUT_BUILD} @ONLY)
# Sphinx folders, doxyrest templates and config
set(SPHINX_CONF_IN "${DOCS_SOURCE_DIR}/conf.py")
set(SPHINX_CONF_OUT "${RST_OUTPUT}/conf.py")
set(SPHINX_STATIC_IN "${DOCS_SOURCE_DIR}/_static")
set(SPHINX_STATIC_OUT "${RST_OUTPUT}/_static")
set(SPHINX_INDEX_IN "${DOCS_SOURCE_DIR}/index.rst")
set(SPHINX_INDEX_OUT "${RST_OUTPUT}/index.rst")
set(API_DOCS_IN "${DOCS_SOURCE_DIR}/api")
set(API_DOCS_OUT "${RST_OUTPUT}/api")
set(DOXYREST_IN "${DOCS_SOURCE_DIR}/doxyrest")
set(DOXYREST_OUT "${DOCS_BUILD_DIR}/doxyrest")
set(DOXYREST_SPHINX_IN "${DOCS_SOURCE_DIR}/doxyrest-sphinx")
set(DOXYREST_SPHINX_OUT "${RST_OUTPUT}/doxyrest-sphinx")
set(DOXYREST_CONFIG_IN "${DOCS_SOURCE_DIR}/doxyrest-config.lua")
set(DOXYREST_CONFIG_OUT "${DOCS_BUILD_DIR}/doxyrest-config.lua")
configure_file(${DOXYREST_CONFIG_IN} ${DOXYREST_CONFIG_OUT} @ONLY)
configure_file(${SPHINX_CONF_IN} ${SPHINX_CONF_OUT} @ONLY)

# Doxygen config files
configure_file(${NGRAPH_CPP_CONFIG_SOURCE} ${NGRAPH_CPP_CONFIG_BUILD} @ONLY)
configure_file(${NGRAPH_PY_CONFIG_SOURCE} ${NGRAPH_PY_CONFIG_BUILD} @ONLY)
configure_file(${IE_CONFIG_SOURCE} ${IE_CONFIG_BUILD} @ONLY)
configure_file(${C_CONFIG_SOURCE} ${C_CONFIG_BUILD} @ONLY)
configure_file(${PY_CONFIG_SOURCE} ${PY_CONFIG_BUILD} @ONLY)
configure_file(${PLUGIN_CONFIG_SOURCE} ${PLUGIN_CONFIG_BUILD} @ONLY)
# Doxygen config
set(DOXYFILE_SOURCE "${DOCS_SOURCE_DIR}/Doxyfile.config")
set(DOXYFILE_BUILD "${DOCS_BUILD_DIR}/Doxyfile.config")
configure_file(${DOXYFILE_SOURCE} ${DOXYFILE_BUILD} @ONLY)

# Preprocessing scripts
set(DOXY_MD_FILTER "${DOXYGEN_DIR}/doxy_md_filter.py")
set(PYX_FILTER "${DOXYGEN_DIR}/pyx_filter.py")
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${OpenVINO_MAIN_SOURCE_DIR}
--output_dir=${DOCS_BUILD_DIR}/openvino
--exclude_dir=${DOCS_BUILD_DIR})

# nGraph C++ API
# include additional repositories

add_custom_target(ngraph_cpp_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${NGRAPH_CPP_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${NGRAPH_CPP_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
VERBATIM)
# build with openvino notebooks
if(ENABLE_OPENVINO_NOTEBOOKS)
set(NBDOC_SCRIPT "${DOCS_SOURCE_DIR}/nbdoc/nbdoc.py")
list(APPEND commands
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${RST_OUTPUT}/notebooks"
)
endif()

# nGraph Python API
if(GRAPH_CSV_DIR)
set(GRAPH_CSV_DIR_OUT "${RST_OUTPUT}/csv")
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy_directory "${GRAPH_CSV_DIR}" "${GRAPH_CSV_DIR_OUT}"
)
endif()

add_custom_target(ngraph_py_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${NGRAPH_PY_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${NGRAPH_PY_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
VERBATIM)
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy ${API_DOCS_IN}/api_reference.rst ${API_DOCS_OUT}/api_reference.rst
)

# C API

add_custom_target(c_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${C_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${C_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Generating C API Reference"
VERBATIM)

# Python API

add_custom_target(py_api
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${PY_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${PY_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Generating Python API Reference"
VERBATIM)

add_custom_command(TARGET py_api
PRE_BUILD
COMMAND ${PYTHON_EXECUTABLE} ${PYX_FILTER} ${PYTHON_API_OUT}
COMMENT "Pre-process Python API")

# Preprocess docs

add_custom_target(preprocess_docs
COMMENT "Pre-process docs"
VERBATIM)

# ovino doc files
file(GLOB_RECURSE ovino_doc_files
LIST_DIRECTORIES true RELATIVE ${OpenVINO_MAIN_SOURCE_DIR}
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/docs/*.jpg"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.md"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.png"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.gif"
"${OpenVINO_MAIN_SOURCE_DIR}/inference-engine/*.jpg")

foreach(source_file ${ovino_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${OpenVINO_MAIN_SOURCE_DIR}/${source_file}" "${DOCS_BUILD_DIR}/openvino/${source_file}")
endforeach()
if(ENABLE_PYTHON)
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy_directory ${API_DOCS_IN}/ie_python_api ${API_DOCS_OUT}/ie_python_api
)
list(APPEND commands
COMMAND ${CMAKE_COMMAND} -E copy_directory ${API_DOCS_IN}/ngraph_python_api ${API_DOCS_OUT}/ngraph_python_api
)
endif()

# omz doc files
if(EXISTS "${OMZ_DOCS_DIR}")
get_filename_component(OMZ_DOCS_DIR "${OMZ_DOCS_DIR}" ABSOLUTE)

file(GLOB_RECURSE omz_doc_files
LIST_DIRECTORIES true RELATIVE ${OMZ_DOCS_DIR}
"${OMZ_DOCS_DIR}/*.md"
"${OMZ_DOCS_DIR}/*.png"
"${OMZ_DOCS_DIR}/*.gif"
"${OMZ_DOCS_DIR}/*.jpg")

foreach(source_file ${omz_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${OMZ_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/omz/${source_file}")
endforeach()
configure_file("${OMZ_DOCS_DIR}/omz_docs.xml" "${DOCS_BUILD_DIR}/omz_docs.xml" @ONLY)
list(APPEND commands
COMMAND ${PYTHON_EXECUTABLE} ${OMZ_DOCS_DIR}/ci/prepare-documentation.py ${CMAKE_BINARY_DIR}/open_model_zoo)
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${CMAKE_BINARY_DIR}/open_model_zoo
--output_dir=${DOCS_BUILD_DIR}/open_model_zoo)
endif()

# workbench doc files
if(EXISTS "${WORKBENCH_DOCS_DIR}")
get_filename_component(WORKBENCH_DOCS_DIR "${WORKBENCH_DOCS_DIR}" ABSOLUTE)

file(GLOB_RECURSE workbench_doc_files
LIST_DIRECTORIES true RELATIVE ${WORKBENCH_DOCS_DIR}
"${WORKBENCH_DOCS_DIR}/*.md"
"${WORKBENCH_DOCS_DIR}/*.png"
"${WORKBENCH_DOCS_DIR}/*.gif"
"${WORKBENCH_DOCS_DIR}/*.jpg")

foreach(source_file ${workbench_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${WORKBENCH_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/workbench/${source_file}")
endforeach()
configure_file("${WORKBENCH_DOCS_DIR}/docs/Workbench_DG/workbench_docs.xml" "${DOCS_BUILD_DIR}/workbench_docs.xml" @ONLY)
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${WORKBENCH_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/workbench)
endif()

# pot doc files
if(EXISTS "${POT_DOCS_DIR}")
get_filename_component(POT_DOCS_DIR "${POT_DOCS_DIR}" ABSOLUTE)

file(GLOB_RECURSE pot_doc_files
LIST_DIRECTORIES true RELATIVE ${POT_DOCS_DIR}
"${POT_DOCS_DIR}/*.md"
"${POT_DOCS_DIR}/*.png"
"${POT_DOCS_DIR}/*.gif"
"${POT_DOCS_DIR}/*.jpg")
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${POT_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/pot)
endif()

foreach(source_file ${pot_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${POT_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/pot/${source_file}")
endforeach()
configure_file("${POT_DOCS_DIR}/docs/pot_docs.xml" "${DOCS_BUILD_DIR}/pot_docs.xml" @ONLY)
# ovms doc files
if(EXISTS "${OVMS_DOCS_DIR}")
get_filename_component(OVMS_DOCS_DIR "${OVMS_DOCS_DIR}" ABSOLUTE)

list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${OVMS_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/ovms)
endif()

# gst doc files
if(EXISTS "${GST_DOCS_DIR}")
get_filename_component(GST_DOCS_DIR "${GST_DOCS_DIR}" ABSOLUTE)

file(GLOB_RECURSE gst_doc_files
LIST_DIRECTORIES true RELATIVE ${GST_DOCS_DIR}
"${GST_DOCS_DIR}/*.md"
"${GST_DOCS_DIR}/*.png"
"${GST_DOCS_DIR}/*.gif"
"${GST_DOCS_DIR}/*.jpg")

foreach(source_file ${gst_doc_files})
list(APPEND commands COMMAND ${CMAKE_COMMAND} -E copy
"${GST_DOCS_DIR}/${source_file}" "${DOCS_BUILD_DIR}/gst/${source_file}")
endforeach()
list(APPEND commands COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER}
--input_dir=${GST_DOCS_DIR}
--output_dir=${DOCS_BUILD_DIR}/gst)
endif()

add_custom_target(preprocess_docs
COMMENT "Preprocess documentation"
VERBATIM)

# Preprocess docs
add_custom_command(TARGET preprocess_docs
PRE_BUILD
${commands}
COMMAND ${PYTHON_EXECUTABLE} ${DOXY_LAYOUT_SCRIPT} --openvino ${OPENVINO_LAYOUT_BUILD}
COMMAND ${PYTHON_EXECUTABLE} ${DOXY_MD_FILTER} ${DOCS_BUILD_DIR}
COMMENT "Pre-process markdown and image links")

# IE dev guide and C++ API

add_custom_target(ie_docs
DEPENDS ngraph_cpp_api preprocess_docs
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${IE_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${IE_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
VERBATIM)

# Plugin API

add_custom_target(plugin_api
DEPENDS ngraph_cpp_api ie_docs
COMMAND ${CMAKE_COMMAND} -E copy_directory ${ASSETS_DIR} ${PLUGIN_OUTPUT}/assets
COMMAND ${DOXYGEN_EXECUTABLE} ${PLUGIN_CONFIG_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Generating Plugin API Reference"
VERBATIM)

# Umbrella OpenVINO target

add_custom_target(openvino_docs
DEPENDS ngraph_cpp_api ngraph_py_api c_api py_api ie_docs plugin_api
COMMENT "Generating OpenVINO documentation"
VERBATIM)

set_target_properties(openvino_docs ie_docs c_api py_api preprocess_docs plugin_api
ngraph_py_api ngraph_cpp_api
PROPERTIES FOLDER docs)

add_custom_command(TARGET openvino_docs
POST_BUILD
COMMAND ${PYTHON_EXECUTABLE} ${DOXY_LOG_SCRIPT} --log "${DOCS_BUILD_DIR}/ie_docs.log"
--include_omz $<BOOL:${OMZ_DOCS_DIR}>
--include_wb $<BOOL:${WORKBENCH_DOCS_DIR}>
--include_pot $<BOOL:${POT_DOCS_DIR}>
--include_gst $<BOOL:${GST_DOCS_DIR}>
COMMENT "Parse doxygen log to find errors."
${commands}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Preprocess documentation"
VERBATIM)

# added linkchecker
add_custom_target(doxygen_xml
DEPENDS preprocess_docs
COMMAND ${PYTHON_EXECUTABLE} ${REMOVE_XML_SCRIPT} ${XML_OUTPUT}
COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYFILE_BUILD}
WORKING_DIRECTORY ${DOCS_BUILD_DIR}
COMMENT "Generate doxygen XML output"
VERBATIM)

# Post-process docs
add_custom_command(TARGET doxygen_xml
POST_BUILD
COMMAND ${PYTHON_EXECUTABLE} ${PREPARE_XML_SCRIPT} ${XML_OUTPUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${DOXYREST_IN} ${DOXYREST_OUT}
COMMAND ${DOXYREST_EXECUTABLE} -c ${DOXYREST_CONFIG_OUT}
COMMAND ${PYTHON_EXECUTABLE} ${COPY_IMAGES_SCRIPT} ${XML_OUTPUT} ${RST_OUTPUT}
COMMAND ${PYTHON_EXECUTABLE} ${DOXYGEN_MAPPING_SCRIPT} ${XML_OUTPUT} ${DOCS_BUILD_DIR} ${OpenVINO_MAIN_SOURCE_DIR}/../
COMMAND ${CMAKE_COMMAND} -E copy ${SPHINX_INDEX_IN} ${SPHINX_INDEX_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${DOXYREST_IN} ${DOXYREST_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${DOXYREST_SPHINX_IN} ${DOXYREST_SPHINX_OUT}
COMMAND ${CMAKE_COMMAND} -E copy_directory ${SPHINX_STATIC_IN} ${SPHINX_STATIC_OUT}
COMMENT "Prepare xml"
VERBATIM)

add_custom_target(sphinx_docs
DEPENDS doxygen_xml
COMMAND sphinx-build -b html ${RST_OUTPUT} ${SPHINX_OUTPUT}
WORKING_DIRECTORY ${RST_OUTPUT}
VERBATIM)

set_target_properties(doxygen_xml sphinx_docs
PROPERTIES FOLDER docs)

if(EXISTS "${LINKCHECKER_PY}")
add_custom_target(docs_check
COMMAND ${PYTHON_EXECUTABLE} "${LINKCHECKER_PY}" -v "${DOCS_BUILD_DIR}/html/"
COMMENT "Check links in generated documentation"
WORKING_DIRECTORY "${DOCS_BUILD_DIR}"
VERBATIM)
set_target_properties(docs_check PROPERTIES FOLDER docs)
endif()

find_program(browser NAMES xdg-open)
if(browser)
add_custom_target(ie_docs_open
COMMAND ${browser} "${OpenVINO_MAIN_SOURCE_DIR}/docs/html/index.html"
DEPENDS ie_docs
COMMAND ${browser} "${SPHINX_OUTPUT}/index.html"
DEPENDS sphinx_docs
COMMENT "Open OpenVINO documentation"
VERBATIM)
set_target_properties(ie_docs_open PROPERTIES FOLDER docs)
@@ -58,7 +58,7 @@ PROJECT_LOGO =
# entered, it will be relative to the location where doxygen was started. If
# left blank the current directory will be used.

OUTPUT_DIRECTORY = "@OUTPUT_DIRECTORY@"
OUTPUT_DIRECTORY = "@DOCS_BUILD_DIR@"

# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
# directories (in 2 levels) under the output directory of each output format and

@@ -262,6 +262,8 @@ TAB_SIZE = 4
# a double escape (\\{ and \\})

ALIASES = "ref_ie{1}=@ref InferenceEngine::\1 \"\1\""
ALIASES += sphinxdirective="\n\xmlonly<sphinxdirective>"
ALIASES += endsphinxdirective="</sphinxdirective>\endxmlonly"
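
For context, these aliases let reference documentation embed reStructuredText that Sphinx later renders; a hypothetical header comment using them could look like this (an illustration only, not part of the diff):

```cpp
/**
 * @brief Example API entry point (hypothetical).
 *
 * @sphinxdirective
 * .. note:: With the aliases above, this text is emitted into the Doxygen
 *    XML inside a <sphinxdirective> element and processed by Sphinx as reST.
 * @endsphinxdirective
 */
void example_function();
```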

# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For

@@ -317,7 +319,7 @@ OPTIMIZE_OUTPUT_SLICE = NO
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen.

EXTENSION_MAPPING =
EXTENSION_MAPPING = pyx=Python

# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
# according to the Markdown format, which allows for more readable

@@ -461,7 +463,7 @@ LOOKUP_CACHE_SIZE = 0
# normally produced when WARNINGS is set to YES.
# The default value is: NO.

EXTRACT_ALL = NO
EXTRACT_ALL = YES

# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
# be included in the documentation.

@@ -556,7 +558,7 @@ INTERNAL_DOCS = NO
# (including Cygwin) and Mac users are advised to set this option to NO.
# The default value is: system dependent.

CASE_SENSE_NAMES = YES
CASE_SENSE_NAMES = NO

# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
# their full class and namespace scopes in the documentation. If set to YES, the

@@ -811,7 +813,7 @@ WARN_FORMAT = "$file:$line: $text"
# messages should be written. If left blank the output is written to standard
# error (stderr).

WARN_LOGFILE = "@DOCS_BUILD_DIR@/ie_docs.log"
WARN_LOGFILE = "@DOCS_BUILD_DIR@/doxygen.log"

#---------------------------------------------------------------------------
# Configuration options related to the input files

@@ -823,8 +825,25 @@ WARN_LOGFILE = "@DOCS_BUILD_DIR@/ie_docs.log"
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.

INPUT = "@DOCS_BUILD_DIR@" \
"@IE_SOURCE_DIR@/include"
INPUT = "@MARKDOWN_INPUT@" \
"@IE_SOURCE_DIR@/include/" \
"@IE_C_API@" \
"@OpenVINO_MAIN_SOURCE_DIR@/openvino/itt/include/openvino/" \
"@IE_SOURCE_DIR@/src/plugin_api/" \
"@IE_SOURCE_DIR@/src/transformations/include/" \
"@IE_SOURCE_DIR@/src/transformations/include/ngraph_ops/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/common_optimizations/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/control_flow/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/low_precision/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/op_conversions/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/opset_conversions/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/rt_info/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/smart_reshape/" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/common_optimizations" \
"@IE_SOURCE_DIR@/src/transformations/include/transformations/utils" \
"@NGRAPH_DIR@/core/include/" \
"@NGRAPH_DIR@/frontend/onnx_import/include"

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses

@@ -855,7 +874,8 @@ FILE_PATTERNS = *.md \
*.cpp \
*.c \
*.hpp \
*.h
*.h \
*.c++

# The RECURSIVE tag can be used to specify whether or not subdirectories should
# be searched for input files as well.

@@ -891,7 +911,11 @@ EXCLUDE_PATTERNS = */temp/* \
*/tests/* \
*/openvx/* \
*/thirdparty/* \
*/IE_PLUGIN_DG/*
"@DOXYREST_OUT@" \
"@XML_OUTPUT@" \
"@RST_OUTPUT@" \
"@SPHINX_OUTPUT@" \
*/build/open_model_zoo/*

# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the

@@ -943,20 +967,61 @@ EXCLUDE_SYMBOLS = InferenceEngine::details \
InferenceEngine::parallel_* \
NOMINMAX \
TBB_PREVIEW_NUMA_SUPPORT \
IE_THREAD_*
IE_THREAD_* \
INFERENCE_ENGINE_C_API_EXTERN \
INFERENCE_ENGINE_C_API \
INFERENCE_ENGINE_C_API_CALLBACK \
IE_NODISCARD \
InferenceEngine::details \
ie_api::BlobBuffer \
*impl* \
*device_name* \
*num_requests* \
*exec_net* \
*c_config* \
*ie_core_impl* \
*plugin_impl* \
*extension_str* \
*buffer* \
*__cinit__* \
ngraph::utils \
ie_core_version \
ie_core_versions \
ie_available_devices \
ie_core_versions \
input_shapes \
colorformat_e \
layout_e \
dimensions \
ie_param_config \
struct_desc \
ie_param \
ie_complete_call_back \
IEStatusCode \
input_shape \
struct_desc

# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).

EXAMPLE_PATH = "@CMAKE_CURRENT_SOURCE_DIR@"
EXAMPLE_PATH = "@CMAKE_CURRENT_SOURCE_DIR@" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/src" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/include" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/src/CMakeLists.txt" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/CMakeLists.txt" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/transformations" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/shared_tests_instances/" \
"@CMAKE_CURRENT_SOURCE_DIR@/snippets" \
"@IE_SOURCE_DIR@/tests/functional/plugin/shared/include"

# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank all
# files are included.

EXAMPLE_PATTERNS =
EXAMPLE_PATTERNS = *.cpp \
*.hpp

# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude commands

@@ -969,7 +1034,7 @@ EXAMPLE_RECURSIVE = YES
# that contain images that are to be included in the documentation (see the
# \image command).

IMAGE_PATH = .
IMAGE_PATH = "@DOCS_BUILD_DIR@"

# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program

@@ -1175,7 +1240,7 @@ IGNORE_PREFIX =
# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
# The default value is: YES.

GENERATE_HTML = YES
GENERATE_HTML = NO

# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of

@@ -1183,7 +1248,7 @@ GENERATE_HTML = YES
# The default directory is: html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_OUTPUT = @OUTPUT_DIRECTORY@
HTML_OUTPUT = html

# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
# generated HTML page (for example: .htm, .php, .asp).

@@ -1210,7 +1275,7 @@ HTML_FILE_EXTENSION = .html
# of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_HEADER = @HEADER_BUILD@
HTML_HEADER =

# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
# generated HTML page. If the tag is left blank doxygen will generate a standard

@@ -1220,7 +1285,7 @@ HTML_HEADER = @HEADER_BUILD@
# that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FOOTER = @FOOTER_BUILD@
HTML_FOOTER =

# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
# sheet that is used by each HTML page. It can be used to fine-tune the look of

@@ -2049,7 +2114,7 @@ MAN_LINKS = NO
# captures the structure of the code including all documentation.
# The default value is: NO.

GENERATE_XML = NO
GENERATE_XML = YES

# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of

@@ -2057,7 +2122,7 @@ GENERATE_XML = NO
# The default directory is: xml.
# This tag requires that the tag GENERATE_XML is set to YES.

XML_OUTPUT = xml
XML_OUTPUT = "@XML_OUTPUT@"

# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
# listings (including syntax highlighting and cross-referencing information) to

@@ -2066,7 +2131,7 @@ XML_OUTPUT = xml
# The default value is: YES.
# This tag requires that the tag GENERATE_XML is set to YES.

XML_PROGRAMLISTING = YES
XML_PROGRAMLISTING = NO

# If the XML_NS_MEMB_FILE_SCOPE tag is set to YES, doxygen will include
# namespace members in file scope as well, matching the HTML output.

@@ -2223,7 +2288,41 @@ PREDEFINED = "INFERENCE_ENGINE_API_CLASS=" \
"INFERENCE_ENGINE_NN_BUILDER_API_CLASS=" \
"INFERENCE_ENGINE_NN_BUILDER_DEPRECATED(x)=" \
"INFERENCE_ENGINE_INTERNAL(x)=" \
"INFERENCE_ENGINE_INTERNAL_CNNLAYER_CLASS(x)="
"INFERENCE_ENGINE_INTERNAL_CNNLAYER_CLASS(x)=" \
"__attribute__(x)=" \
"__VA_ARGS__=" \
"INFERENCE_ENGINE_C_API_EXTERN=" \
"INFERENCE_ENGINE_C_API_CALLBACK=" \
"INFERENCE_ENGINE_C_API=" \
"IE_NODISCARD=" \
"__cdecl=" \
"__declspec(x)=" \
"_WIN32" \
"INFERENCE_ENGINE_API=" \
"INFERENCE_ENGINE_API_CPP=" \
"INFERENCE_ENGINE_API_CLASS=" \
"INFERENCE_ENGINE_DEPRECATED=" \
"inference_engine_transformations_EXPORTS" \
"TRANSFORMATIONS_API=" \
"NGRAPH_HELPER_DLL_EXPORT=" \
"NGRAPH_HELPER_DLL_IMPORT=" \
"IE_SUPPRESS_DEPRECATED_START=" \
"IE_SUPPRESS_DEPRECATED_END=" \
"_IE_SUPPRESS_DEPRECATED_START_MSVC=" \
"_IE_SUPPRESS_DEPRECATED_END_MSVC=" \
"_IE_SUPPRESS_DEPRECATED_START_GCC=" \
"_IE_SUPPRESS_DEPRECATED_END_GCC=" \
"IE_THREAD=IE_THREAD_TBB" \
"NGRAPH_RTTI_DECLARATION=" \
"__attribute__(x)=" \
"__VA_ARGS__=" \
"INFERENCE_ENGINE_C_API_EXTERN=" \
"INFERENCE_ENGINE_C_API=" \
"IE_NODISCARD=" \
"__cdecl=" \
"__declspec(x)=" \
"__GNUC__=" \
"_WIN32"

# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
# tag can be used to specify a list of macro names that should be expanded. The
@@ -12,7 +12,7 @@ Representation (IR) for this model.
This guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to
plug in your own implementation for existing or completely new operations.

> **NOTE:** *Layer* — The legacy term for an *operation* which came from Caffe\* framework. Currently it is not used.
> **NOTE**: *Layer* is a legacy term for an *operation*; it came from the Caffe\* framework and is not used currently.
> Refer to the [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../MO_DG/IR_and_opsets.md)
> for more information on the topic.

@@ -44,7 +44,7 @@ plugins to support inference of this operation using a particular target hardwar
To see the operations that are supported by each device plugin for the Inference Engine, refer to the
[Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md).

> **NOTE:** If a device doesn't support a particular operation, an alternative to creating a new operation is to target
> **NOTE**: If a device doesn't support a particular operation, an alternative to creating a new operation is to target
> an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be
> used to run an inference model on multiple devices, allowing the unsupported operations on one device to "fall back" to
> run on another device (e.g., CPU) that does support those operations.

@@ -63,7 +63,7 @@ operation and uses corresponding operation class to update graph node attributes
operation. Refer to the "Operation Extractor" section of
[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for detailed instructions on how to implement it.

> **NOTE:** In some cases you may need to implement some transformation to support the operation. This topic is covered in the "Graph Transformation Extensions" section of [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
> **NOTE**: In some cases you may need to implement a transformation to support the operation. This topic is covered in the "Graph Transformation Extensions" section of [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).

## Custom Operations Extensions for the Inference Engine

@@ -131,15 +131,26 @@ Firstly, open the model in the TensorBoard or other TensorFlow* model visualizat
batch dimension because the value for the batch dimension is not hardcoded in the model. Model Optimizer needs to set all
dynamic dimensions to some specific value to create the IR, therefore specify the command line parameter `-b 1` to set
the batch dimension equal to 1. The actual batch size dimension can be changed at runtime using the Inference Engine API
described in [Using Shape Inference](../IE_DG/ShapeInference.md). Also refer to
[Converting a Model Using General Conversion Parameters](../MO_DG/prepare_model/convert_model/Converting_Model_General.md)
and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
described in [Using Shape Inference](../IE_DG/ShapeInference.md). Also refer to the General Conversion Parameters section in [Converting a Model to Intermediate Representation (IR)](../MO_DG/prepare_model/convert_model/Converting_Model.md) and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
for more details and command line parameters used for the model conversion.

```bash
./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1
```

@sphinxdirective
.. tab:: Package, Docker, open-source installation

   .. code-block:: sh

      cd <INSTALL_DIR>/deployment_tools/model_optimizer/
      python3 mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1

.. tab:: pip installation

   .. code-block:: sh

      mo --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1

@endsphinxdirective

> **NOTE:** This conversion guide is applicable for the 2021.3 release of OpenVINO and that starting from 2021.4
> **NOTE**: This conversion guide is applicable for the 2021.3 release of OpenVINO; starting from 2021.4,
> OpenVINO supports this model out of the box.

Model Optimizer produces the following error:

@@ -221,7 +232,7 @@ following snippet provides two extractors: one for "IFFT2D", another one for "FF

@snippet FFT_ext.py fft_ext:extractor

> **NOTE:** The graph is in inconsistent state after extracting node attributes because according to original operation
> **NOTE**: The graph is in an inconsistent state after extracting node attributes because, according to the original operation
> "IFFT2D" semantics, it should have an input consuming a tensor of complex numbers, but the extractor instantiated an
> operation "FFT" which expects a real tensor with a specific layout. The inconsistency will be resolved during
> the front phase transformations discussed below.

@@ -239,7 +250,7 @@ information on how this type of transformation works. The code snippet should be

@snippet Complex.py complex:transformation

> **NOTE:** The graph is in inconsistent state because the "ComplexAbs" operation consumes complex value tensor but
> **NOTE**: The graph is in an inconsistent state because the "ComplexAbs" operation consumes a complex value tensor, but
> "FFT" produces a real value tensor.

Now let's implement a transformation which replaces the "ComplexAbs" operation with a sub-graph of primitive operations

@@ -257,15 +268,27 @@ The implementation should be saved to the file `mo_extensions/front/tf/ComplexAb

@snippet ComplexAbs.py complex_abs:transformation

Now it is possible to convert the model using the following command line:

```bash
./<MO_INSTALL_DIR>/mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/
```

@sphinxdirective
.. tab:: Package, Docker, open-source installation

   .. code-block:: sh

      cd <INSTALL_DIR>/deployment_tools/model_optimizer/
      python3 mo.py --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/

.. tab:: pip installation

   .. code-block:: sh

      mo --input_model <PATH_TO_MODEL>/wnet_20.pb -b 1 --extensions mo_extensions/

@endsphinxdirective

The sub-graph corresponding to the originally non-supported one is depicted in the image below:



> **NOTE:** Model Optimizer performed conversion of the model from NHWC to NCHW layout that is why the dimension with
> **NOTE**: Model Optimizer converted the model from NHWC to NCHW layout, which is why the dimension with
> the value 2 moved to another position.

### Inference Engine Extension Implementation

@@ -350,7 +373,7 @@ python3 mri_reconstruction_demo.py \
## Converting Models:

- [Convert Your Caffe* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md)
- [Convert Your Kaldi* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
- [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
- [Convert Your MXNet* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md)
- [Convert Your Kaldi* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
- [Convert Your ONNX* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md)
@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

#! [complex:transformation]

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

#! [complex_abs:transformation]

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

# ! [fft_ext:extractor]

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

#! [fft:operation]

@@ -1,4 +1,4 @@
# Copyright (C) 2018-2021 Intel Corporation
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

#! [mri_demo:demo]
@@ -10,10 +10,14 @@ The sections below contain detailed list of changes made to the Inference Engine

### Deprecated API

**InferenceEngine::Parameter**

* InferenceEngine::Parameter(const std::shared_ptr<ngraph::Variant>&)
* InferenceEngine::Parameter(std::shared_ptr<ngraph::Variant>& var)
* std::shared_ptr<ngraph::Variant> InferenceEngine::Parameter::asVariant() const
* InferenceEngine::Parameter::operator std::shared_ptr<ngraph::Variant>() const

**GPU plugin configuration keys**
* KEY_CLDNN_NV12_TWO_INPUTS GPU plugin option. Use KEY_GPU_NV12_TWO_INPUTS instead
* KEY_CLDNN_PLUGIN_PRIORITY GPU plugin option. Use KEY_GPU_PLUGIN_PRIORITY instead
* KEY_CLDNN_PLUGIN_THROTTLE GPU plugin option. Use KEY_GPU_PLUGIN_THROTTLE instead

@@ -24,6 +28,38 @@ The sections below contain detailed list of changes made to the Inference Engine
* KEY_TUNING_MODE GPU plugin option
* KEY_TUNING_FILE GPU plugin option

**InferenceEngine::IInferRequest**
* IInferRequest interface is deprecated, use the InferRequest wrapper:
  * Constructor for InferRequest from IInferRequest::Ptr is deprecated
  * Cast operator for InferRequest to IInferRequest shared pointer is deprecated

**InferenceEngine::ICNNNetwork**
* ICNNNetwork interface is deprecated by deprecating all of its methods; use the CNNNetwork wrapper
* CNNNetwork methods working with ICNNNetwork are deprecated:
  * Cast to ICNNNetwork shared pointer
  * Cast to reference to ICNNNetwork interface
  * Constructor from ICNNNetwork shared pointer

**InferenceEngine::IExecutableNetwork**
* IExecutableNetwork is deprecated, use the ExecutableNetwork wrapper:
  * Constructor of ExecutableNetwork from IExecutableNetwork shared pointer is deprecated
  * The following ExecutableNetwork methods are deprecated:
    * ExecutableNetwork::reset
    * Cast operator to IExecutableNetwork shared pointer
    * ExecutableNetwork::CreateInferRequestPtr - use ExecutableNetwork::CreateInferRequest instead

**Extensions API**
* InferenceEngine::make_so_pointer, which was used to create the Extensions library, is replaced by std::make_shared<Extension>(..)
* InferenceEngine::IExtension::Release is deprecated with no replacement
* Use the IE_DEFINE_EXTENSION_CREATE_FUNCTION helper macro instead of an explicit declaration of the CreateExtension function, which creates the extension

**Other changes**
* Version::ApiVersion structure is deprecated; Inference Engine no longer has an API version
* LowLatency - use lowLatency2 instead
* CONFIG_KEY(DUMP_EXEC_GRAPH_AS_DOT) - use InferenceEngine::ExecutableNetwork::GetExecGraphInfo::serialize() instead
* Core::ImportNetwork with no device - pass the device name explicitly
* details::InferenceEngineException - use InferenceEngine::Exception and its derivatives instead
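Taken together, the migration away from the deprecated interfaces looks roughly like this (a minimal sketch, not from the release notes; it assumes the 2021.4 C++ API and uses `model.xml` as a placeholder path):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // CNNNetwork wrapper instead of the deprecated ICNNNetwork interface
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
    // ExecutableNetwork wrapper instead of the deprecated IExecutableNetwork
    InferenceEngine::ExecutableNetwork executable_network =
        core.LoadNetwork(network, "CPU");
    // CreateInferRequest instead of the deprecated CreateInferRequestPtr
    InferenceEngine::InferRequest request = executable_network.CreateInferRequest();
    request.Infer();
    return 0;
}
```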
## 2021.3

### New API

@@ -1,50 +1,57 @@
# Bfloat16 Inference {#openvino_docs_IE_DG_Bfloat16Inference}

## Disclaimer

Inference Engine with the bfloat16 inference implemented on CPU must support the native `avx512_bf16` instruction and therefore the bfloat16 data format.
It is possible to use bfloat16 inference in simulation mode on platforms with Intel® Advanced Vector Extensions 512 (Intel® AVX-512), but it leads to significant performance degradation in comparison with FP32 or native `avx512_bf16` instruction usage.

## Introduction

## Bfloat16 Inference Usage (C++)

@sphinxdirective
.. raw:: html

   <div id="switcher-cpp" class="switcher-anchor">C++</div>
@endsphinxdirective

### Disclaimer

Bfloat16 inference in the Inference Engine is implemented on CPUs that support the native *avx512_bf16* instruction and therefore the bfloat16 data format. It is possible to use bfloat16 inference in simulation mode on platforms with Intel® Advanced Vector Extensions 512 (Intel® AVX-512), but it leads to significant performance degradation in comparison with FP32 or native *avx512_bf16* instruction usage.

### Introduction

Bfloat16 (referred to as BF16) is the 16-bit Brain Floating-Point format. This is a truncated 16-bit version of the 32-bit IEEE 754 single-precision floating-point format FP32. BF16 preserves 8 exponent bits as FP32 but reduces precision of the sign and mantissa from 24 bits to 8 bits.

![bf16_format]

Preserving the exponent bits keeps BF16 to the same range as the FP32 (~1e-38 to ~3e38). This simplifies conversion between two data types: you just need to skip or flush to zero 16 low bits.
Truncated mantissa leads to occasionally less precision, but according to [investigations](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus), neural networks are more sensitive to the size of the exponent than the mantissa size. Also, in lots of models, precision is needed close to zero but not so much at the maximum range.
Another useful feature of BF16 is possibility to encode INT8 in BF16 without loss of accuracy, because INT8 range completely fits in BF16 mantissa field. It reduces data flow in conversion from INT8 input image data to BF16 directly without intermediate representation in FP32, or in combination of [INT8 inference](Int8Inference.md) and BF16 layers.
Preserving the exponent bits keeps BF16 in the same range as FP32 (~1e-38 to ~3e38). This simplifies conversion between the two data types: you just need to skip or flush to zero the 16 low bits. The truncated mantissa occasionally leads to less precision, but according to [investigations](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus), neural networks are more sensitive to the size of the exponent than to the mantissa size. Also, in lots of models, precision is needed close to zero but not so much at the maximum range. Another useful feature of BF16 is the possibility to encode INT8 in BF16 without loss of accuracy, because the INT8 range completely fits in the BF16 mantissa field. This reduces data flow in conversion from INT8 input image data to BF16 directly without an intermediate representation in FP32, or in combination of [INT8 inference](Int8Inference.md) and BF16 layers.

See the ["BFLOAT16 – Hardware Numerics Definition" white paper"](https://software.intel.com/sites/default/files/managed/40/8b/bf16-hardware-numerics-definition-white-paper.pdf) for more bfloat16 format details.
See the [BFLOAT16 – Hardware Numerics Definition white paper](https://software.intel.com/content/dam/develop/external/us/en/documents/bf16-hardware-numerics-definition-white-paper.pdf) for more bfloat16 format details.

There are two ways to check if CPU device can support bfloat16 computations for models:
1. Query the instruction set via system `lscpu | grep avx512_bf16` or `cat /proc/cpuinfo | grep avx512_bf16`.
2. Use [Query API](InferenceEngine_QueryAPI.md) with `METRIC_KEY(OPTIMIZATION_CAPABILITIES)`, which should return `BF16` in the list of CPU optimization options:

1. Query the instruction set using one of these system commands:
   * `lscpu | grep avx512_bf16`
   * `cat /proc/cpuinfo | grep avx512_bf16`
2. Use the [Query API](InferenceEngine_QueryAPI.md) with `METRIC_KEY(OPTIMIZATION_CAPABILITIES)`, which should return `BF16` in the list of CPU optimization options:

@snippet snippets/Bfloat16Inference0.cpp part0
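
For orientation, such a query can be written as follows (a sketch, not the referenced snippet; it assumes only the standard `Core::GetMetric` C++ API):

```cpp
#include <algorithm>
#include <string>
#include <vector>

#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // Ask the CPU plugin for its optimization capabilities
    auto caps = core.GetMetric("CPU", METRIC_KEY(OPTIMIZATION_CAPABILITIES))
                    .as<std::vector<std::string>>();
    // "BF16" in the returned list means native bfloat16 support
    bool bf16_supported = std::find(caps.begin(), caps.end(), "BF16") != caps.end();
    return bf16_supported ? 0 : 1;
}
```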

Current Inference Engine solution for bfloat16 inference uses Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of the significant number of layers in BF16 computation mode.
The current Inference Engine solution for bfloat16 inference uses the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of a significant number of layers in BF16 computation mode.

## Lowering Inference Precision
### Lowering Inference Precision

Lowering precision to increase performance is [widely used](https://software.intel.com/content/www/us/en/develop/articles/lower-numerical-precision-deep-learning-inference-and-training.html) for optimization of inference. The bfloat16 data type usage on CPU for the first time opens the possibility of default optimization approach.
The embodiment of this approach is to use the optimization capabilities of the current platform to achieve maximum performance while maintaining the accuracy of calculations within the acceptable range.
Lowering precision to increase performance is [widely used](https://software.intel.com/content/www/us/en/develop/articles/lower-numerical-precision-deep-learning-inference-and-training.html) for optimization of inference. The bfloat16 data type usage on CPU for the first time opens the possibility of a default optimization approach. The embodiment of this approach is to use the optimization capabilities of the current platform to achieve maximum performance while maintaining the accuracy of calculations within an acceptable range.

Using Bfloat16 precision provides the following performance benefits:

Bfloat16 data usage provides the following benefits that increase performance:
1. Faster multiplication of two BF16 numbers because of shorter mantissa of bfloat16 data.
2. No need to support denormals and handling exceptions as this is a performance optimization.
3. Fast conversion of float32 to bfloat16 and vice versa.
4. Reduced size of data in memory, as a result, larger models fit in the same memory bounds.
5. Reduced amount of data that must be transferred, as a result, reduced data transition time.

For default optimization on CPU, source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In this case, `KEY_ENFORCE_BF16` is set to `YES`.
The code below demonstrates how to check if the key is set:
For default optimization on CPU, the source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In this case, `KEY_ENFORCE_BF16` is set to `YES` in the `PluginConfigParams` for `GetConfig()`. The code below demonstrates how to check if the key is set:

@snippet snippets/Bfloat16Inference1.cpp part1
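
As an illustration of that check (a sketch under the same assumptions; it presumes `KEY_ENFORCE_BF16` is declared in `PluginConfigParams`, as in the 2021.x releases):

```cpp
#include <string>

#include <inference_engine.hpp>

bool bf16_enforced(InferenceEngine::Core& core,
                   InferenceEngine::CNNNetwork& network) {
    auto exec = core.LoadNetwork(network, "CPU");
    // Query the effective value of KEY_ENFORCE_BF16 for the loaded network
    auto value = exec.GetConfig(
        InferenceEngine::PluginConfigParams::KEY_ENFORCE_BF16).as<std::string>();
    // On platforms with native BF16 support this is PluginConfigParams::YES
    return value == InferenceEngine::PluginConfigParams::YES;
}
```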

To disable BF16 internal transformations, set the `KEY_ENFORCE_BF16` to `NO`. In this case, the model infers as is without modifications with precisions that were set on each layer edge.
To disable BF16 internal transformations in the C++ API, set `KEY_ENFORCE_BF16` to `NO`. In this case, the model infers as is, without modifications, with the precisions that were set on each layer edge.

@snippet snippets/Bfloat16Inference2.cpp part2

To disable BF16 in the C API:

@@ -52,15 +59,16 @@ ie_config_t config = { "ENFORCE_BF16", "NO", NULL};
```
ie_config_t config = { "ENFORCE_BF16", "NO", NULL};
ie_core_load_network(core, network, device_name, &config, &exe_network);
```

An exception with message `Platform doesn't support BF16 format` is formed in case of setting `KEY_ENFORCE_BF16` to `YES` on CPU without native BF16 support or BF16 simulation mode.
An exception with the message `Platform doesn't support BF16 format` is raised if `KEY_ENFORCE_BF16` is set to `YES` on a CPU without native BF16 support or BF16 simulation mode.

Low-Precision 8-bit integer models cannot be converted to BF16, even if bfloat16 optimization is set by default.

## Bfloat16 Simulation Mode
### Bfloat16 Simulation Mode

Bfloat16 simulation mode is available on CPU and Intel® AVX-512 platforms that do not support the native `avx512_bf16` instruction. The simulator does not guarantee an adequate performance.
To enable Bfloat16 simulator:
* In [Benchmark App](../../inference-engine/samples/benchmark_app/README.md), add the `-enforcebf16=true` option
Bfloat16 simulation mode is available on CPU and Intel® AVX-512 platforms that do not support the native `avx512_bf16` instruction. The simulator does not guarantee good performance. Note that the CPU must still support the AVX-512 extensions.

To enable the simulation of Bfloat16:
* In the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md), add the `-enforcebf16=true` option
* In the C++ API, set `KEY_ENFORCE_BF16` to `YES`
* In the C API:

@@ -68,25 +76,139 @@ ie_config_t config = { "ENFORCE_BF16", "YES", NULL};
```
ie_config_t config = { "ENFORCE_BF16", "YES", NULL};
ie_core_load_network(core, network, device_name, &config, &exe_network);
```
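
In the C++ API the equivalent setting can be applied through `Core::SetConfig` (a sketch; this affects every network subsequently loaded on the CPU device):

```cpp
#include <map>
#include <string>

#include <inference_engine.hpp>

void enable_bf16_enforcement(InferenceEngine::Core& core) {
    // Force BF16 (or its simulation on capable AVX-512 hardware) on CPU
    core.SetConfig({{InferenceEngine::PluginConfigParams::KEY_ENFORCE_BF16,
                     InferenceEngine::PluginConfigParams::YES}}, "CPU");
}
```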

## Performance Counters
### Performance Counters

Information about layer precision is stored in the performance counters that are available from the Inference Engine API. The layers have the following marks:
* Suffix `BF16` for layers that had bfloat16 data type input and were computed in BF16 precision
* Suffix `FP32` for layers computed in 32-bit precision

For example, the performance counters table for the Inception model can look as follows:

```
pool5 EXECUTED layerType: Pooling realTime: 143 cpu: 143 execType: jit_avx512_BF16
fc6 EXECUTED layerType: FullyConnected realTime: 47723 cpu: 47723 execType: jit_gemm_BF16
relu6 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc7 EXECUTED layerType: FullyConnected realTime: 7558 cpu: 7558 execType: jit_gemm_BF16
relu7 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc8 EXECUTED layerType: FullyConnected realTime: 2193 cpu: 2193 execType: jit_gemm_BF16
prob EXECUTED layerType: SoftMax realTime: 68 cpu: 68 execType: jit_avx512_FP32
```

The `execType` column of the table includes inference primitives with specific suffixes.
The **execType** column of the table includes inference primitives with specific suffixes.
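
These counters can also be retrieved programmatically; a sketch using the standard `GetPerformanceCounts` call (it assumes profiling was enabled beforehand via the `PERF_COUNT` config key):

```cpp
#include <cstdio>
#include <map>
#include <string>

#include <inference_engine.hpp>

void print_layer_precisions(InferenceEngine::InferRequest& request) {
    // Per-layer profiling info; exec_type carries the _BF16/_FP32 suffix shown above
    std::map<std::string, InferenceEngine::InferenceEngineProfileInfo> counters =
        request.GetPerformanceCounts();
    for (const auto& item : counters) {
        std::printf("%s: %s\n", item.first.c_str(), item.second.exec_type);
    }
}
```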
|
||||
|
||||
## Bfloat16 Inference Usage (Python)

@sphinxdirective
.. raw:: html

    <div id="switcher-python" class="switcher-anchor">Python</div>
@endsphinxdirective

### Disclaimer

Bfloat16 inference on CPU requires a platform that supports the native *avx512_bf16* instruction and therefore the bfloat16 data format. It is possible to use bfloat16 inference in simulation mode on platforms with Intel® Advanced Vector Extensions 512 (Intel® AVX-512) but without native *avx512_bf16* support, but this leads to significant performance degradation in comparison with FP32 or native *avx512_bf16* instruction usage.

### Introduction

Bfloat16 (referred to as BF16) is the 16-bit Brain Floating-Point format: a truncated version of the 32-bit IEEE 754 single-precision floating-point format FP32. BF16 preserves the same 8 exponent bits as FP32 but reduces the significand (sign and mantissa) from 24 bits to 8 bits.

![bf16_format]

Preserving the exponent bits keeps BF16 in the same range as FP32 (~1e-38 to ~3e38). This simplifies conversion between the two data types: to get BF16 from FP32, you just drop (or flush to zero) the 16 low-order bits. The truncated mantissa occasionally costs precision, but investigations show that neural networks are more sensitive to the size of the exponent than to the size of the mantissa. Also, in many models, precision is needed close to zero but not so much at the maximum range. Another useful feature of BF16 is the possibility to encode INT8 in BF16 without loss of accuracy, because the INT8 range fits completely within the BF16 mantissa field. This reduces data flow when converting INT8 input image data directly to BF16 without an intermediate FP32 representation, and when combining [INT8 inference](Int8Inference.md) with BF16 layers.

See the [BFLOAT16 – Hardware Numerics Definition white paper](https://software.intel.com/content/dam/develop/external/us/en/documents/bf16-hardware-numerics-definition-white-paper.pdf) for more bfloat16 format details.

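To make the truncation concrete, here is a small illustrative sketch (not part of the Inference Engine API; it assumes NumPy is available) that converts FP32 values to the BF16 bit pattern and back by dropping the 16 low-order bits:

```python
import numpy as np

def fp32_to_bf16_bits(x):
    """Truncate FP32 to the BF16 bit pattern: keep 1 sign + 8 exponent + 7 mantissa bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bf16_bits_to_fp32(b):
    """Expand a BF16 bit pattern back to FP32 by zero-filling the 16 low-order bits."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.1415926, -1.5e30, 2.7182818], dtype=np.float32)
print(bf16_bits_to_fp32(fp32_to_bf16_bits(x)))  # same range as FP32, only ~2-3 decimal digits kept
```
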
There are two ways to check whether the CPU device supports bfloat16 computations for models:

1. Query the instruction set using one of these system commands:
   * `lscpu | grep avx512_bf16`
   * `cat /proc/cpuinfo | grep avx512_bf16`
2. Use the Query API with METRIC_KEY(OPTIMIZATION_CAPABILITIES), which should return BF16 in the list of CPU optimization options:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(path_to_xml_file)
# "BF16" appears in the returned capability list on a CPU with bfloat16 support
cpu_caps = ie.get_metric(metric_name="OPTIMIZATION_CAPABILITIES", device_name="CPU")
```

The current Inference Engine solution for bfloat16 inference uses the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of a significant number of layers in BF16 computation mode.

### Lowering Inference Precision

Lowering precision to increase performance is widely used to optimize inference. Bfloat16 on CPU opens, for the first time, the possibility of a default optimization approach: use the optimization capabilities of the current platform to achieve maximum performance while keeping the accuracy of calculations within an acceptable range.

Using Bfloat16 precision provides the following performance benefits:

1. Faster multiplication of two BF16 numbers because of the shorter mantissa of the bfloat16 data.
2. No need to support denormals or handle exceptions, which is itself a performance optimization.
3. Fast conversion of float32 to bfloat16 and vice versa.
4. Reduced size of data in memory; as a result, larger models fit in the same memory bounds.
5. Reduced amount of data that must be transferred; as a result, reduced data transition time.

For default optimization on CPU, the source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In this case, ENFORCE_BF16 is set to YES. The code below demonstrates how to check if the key is set:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(path_to_xml_file)
exec_net = ie.load_network(network=net, device_name="CPU")
exec_net.get_config("ENFORCE_BF16")
```

To enable BF16 internal transformations, set the key "ENFORCE_BF16" to "YES" in the ExecutableNetwork configuration:

```python
bf16_config = {"ENFORCE_BF16": "YES"}
exec_net = ie.load_network(network=net, device_name="CPU", config=bf16_config)
```

To disable BF16 internal transformations, set the key "ENFORCE_BF16" to "NO". In this case, the model infers as is, without modifications, with the precisions that were set on each layer edge.

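For completeness, a minimal sketch of the opposite setting (reusing `ie` and `net` from the snippets above):

```python
bf16_config = {"ENFORCE_BF16": "NO"}  # infer with the precisions set on each layer edge
exec_net = ie.load_network(network=net, device_name="CPU", config=bf16_config)
```
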
An exception with the message `Platform doesn't support BF16 format` is raised if "ENFORCE_BF16" is set to "YES" on a CPU without native BF16 support or BF16 simulation mode.

Low-precision 8-bit integer models cannot be converted to BF16, even if bfloat16 optimization is set by default.

### Bfloat16 Simulation Mode

Bfloat16 simulation mode is available on CPU and Intel® AVX-512 platforms that do not support the native `avx512_bf16` instruction. The simulator does not guarantee good performance. Note that the CPU must still support the AVX-512 extensions.

#### To enable the simulation of Bfloat16:

* In the Benchmark App, add the `-enforcebf16=true` option
* In Python, use the following code as an example:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(path_to_xml_file)
bf16_config = {"ENFORCE_BF16": "YES"}
exec_net = ie.load_network(network=net, device_name="CPU", config=bf16_config)
```

### Performance Counters

Information about layer precision is stored in the performance counters that are available from the Inference Engine API. The layers have the following marks:

* Suffix *BF16* for layers that had bfloat16 data type input and were computed in BF16 precision
* Suffix *FP32* for layers computed in 32-bit precision

For example, the performance counters table for the Inception model can look as follows:

```
pool5 EXECUTED layerType: Pooling realTime: 143 cpu: 143 execType: jit_avx512_BF16
fc6 EXECUTED layerType: FullyConnected realTime: 47723 cpu: 47723 execType: jit_gemm_BF16
relu6 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc7 EXECUTED layerType: FullyConnected realTime: 7558 cpu: 7558 execType: jit_gemm_BF16
relu7 NOT_RUN layerType: ReLU realTime: 0 cpu: 0 execType: undef
fc8 EXECUTED layerType: FullyConnected realTime: 2193 cpu: 2193 execType: jit_gemm_BF16
prob EXECUTED layerType: SoftMax realTime: 68 cpu: 68 execType: jit_avx512_FP32
```

The **execType** column of the table includes inference primitives with specific suffixes.

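As an illustrative sketch (reusing `exec_net` from the snippets above; `input_blob_name` and `input_data` are placeholders, and the counter keys assume the keys this API version reports, such as `status`, `exec_type`, and `real_time`), the counters can be retrieved through the Python API like this:

```python
request = exec_net.requests[0]
request.infer({input_blob_name: input_data})  # placeholders for your input name and data
perf = request.get_perf_counts()              # per-layer performance counters
for layer_name, stats in perf.items():
    print(layer_name, stats["status"], stats["exec_type"], stats["real_time"])
```
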
[bf16_format]: img/bf16_format.png

@@ -1,121 +1,52 @@

# Inference Engine Developer Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide}

> **NOTE:** [Intel® System Studio](https://software.intel.com/content/www/us/en/develop/tools/oneapi/commercial-base-iot.html) (click "Intel® System Studio Users" tab) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).

@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_IE_DG_Integrate_with_customer_application_new_API
   openvino_docs_deployment_optimization_guide_dldt_optimization_guide
   openvino_docs_IE_DG_Device_Plugins
   Direct ONNX Format Support <openvino_docs_IE_DG_ONNX_Support>
   openvino_docs_IE_DG_Int8Inference
   openvino_docs_IE_DG_Bfloat16Inference
   openvino_docs_IE_DG_DynamicBatching
   openvino_docs_IE_DG_ShapeInference
   openvino_docs_IE_DG_Model_caching_overview
   openvino_docs_IE_DG_Extensibility_DG_Intro
   openvino_docs_IE_DG_Memory_primitives
   openvino_docs_IE_DG_network_state_intro
   openvino_docs_IE_DG_API_Changes
   openvino_docs_IE_DG_Known_Issues_Limitations
   openvino_docs_IE_DG_Glossary

@endsphinxdirective

This Guide provides an overview of the Inference Engine, describing the typical workflow for performing inference of a pre-trained and optimized deep learning model, and a set of sample applications.

> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in runtime using the nGraph API. To learn how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_group_intel).

## Introduction

Inference Engine is a set of C++ libraries providing a common API to deliver inference solutions on the platform of your choice: CPU, GPU, or VPU. Use the Inference Engine API to read the Intermediate Representation (IR) or ONNX model, set the input and output formats, and execute the model on devices. While the C++ library is the primary implementation, C libraries and Python bindings are also available.

After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for a given input data.

Inference Engine uses a plugin architecture. An Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel® hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.

The scheme below illustrates the typical workflow for deploying a trained deep learning model:



\* _nGraph_ is the internal graph representation in the OpenVINO™ toolkit. Use it to [build a model from source code](https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_build_function.html).

For the Intel® Distribution of OpenVINO™ toolkit, the Inference Engine binaries are delivered within release packages. The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">Inference Engine Build Instructions</a>.

To learn about how to use the Inference Engine API for your application, see the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation. For the complete API Reference, see the [Inference Engine API References](./api_references.html) section.

## Modules in the Inference Engine component

### Core Inference Engine Libraries

Your application must link to the core Inference Engine libraries:
* Linux* OS:
    - `libinference_engine.so`, which depends on `libinference_engine_transformations.so`, `libtbb.so`, `libtbbmalloc.so` and `libngraph.so`
* Windows* OS:
    - `inference_engine.dll`, which depends on `inference_engine_transformations.dll`, `tbb.dll`, `tbbmalloc.dll` and `ngraph.dll`
* macOS*:
    - `libinference_engine.dylib`, which depends on `libinference_engine_transformations.dylib`, `libtbb.dylib`, `libtbbmalloc.dylib` and `libngraph.dylib`

The required C++ header files are located in the `include` directory.

This library contains the classes to:
* Create an Inference Engine Core object to work with devices and read a network (InferenceEngine::Core)
* Manipulate network information (InferenceEngine::CNNNetwork)
* Execute and pass inputs and outputs (InferenceEngine::ExecutableNetwork and InferenceEngine::InferRequest)

### Plugin Libraries to Read a Network Object

Starting from the 2020.4 release, Inference Engine introduced a concept of `CNNNetwork` reader plugins. Such plugins can be automatically and dynamically loaded by the Inference Engine at runtime, depending on the file format:
* Linux* OS:
    - `libinference_engine_ir_reader.so` to read a network from IR
    - `libinference_engine_onnx_reader.so` to read a network from the ONNX model format
* Windows* OS:
    - `inference_engine_ir_reader.dll` to read a network from IR
    - `inference_engine_onnx_reader.dll` to read a network from the ONNX model format

### Device-Specific Plugin Libraries

For each supported target device, Inference Engine provides a plugin — a DLL/shared library that contains complete implementation for inference on this particular device. The following plugins are available:

| Plugin | Device Type |
| ------- | ----------------------------- |
|CPU | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
|GPU | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
|MYRIAD | Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
|GNA | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor, Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, Intel® Core™ i3-1000G4 Processor |
|HETERO | Automatic splitting of a network inference between several devices (for example, if a device doesn't support certain layers) |
|MULTI | Simultaneous inference of the same network on several devices in parallel |

The table below shows the plugin libraries and additional dependencies for Linux, Windows and macOS platforms.

| Plugin | Library name for Linux | Dependency libraries for Linux | Library name for Windows | Dependency libraries for Windows | Library name for macOS | Dependency libraries for macOS |
|--------|-----------------------------|-------------------------------------------------------------|--------------------------|--------------------------------------------------------------------------------------------------------|------------------------------|---------------------------------------------|
| CPU | `libMKLDNNPlugin.so` | `libinference_engine_lp_transformations.so` | `MKLDNNPlugin.dll` | `inference_engine_lp_transformations.dll` | `libMKLDNNPlugin.so` | `inference_engine_lp_transformations.dylib` |
| GPU | `libclDNNPlugin.so` | `libinference_engine_lp_transformations.so`, `libOpenCL.so` | `clDNNPlugin.dll` | `OpenCL.dll`, `inference_engine_lp_transformations.dll` | Is not supported | - |
| MYRIAD | `libmyriadPlugin.so` | `libusb.so` | `myriadPlugin.dll` | `usb.dll` | `libmyriadPlugin.so` | `libusb.dylib` |
| HDDL | `libHDDLPlugin.so` | `libbsl.so`, `libhddlapi.so`, `libmvnc-hddl.so` | `HDDLPlugin.dll` | `bsl.dll`, `hddlapi.dll`, `json-c.dll`, `libcrypto-1_1-x64.dll`, `libssl-1_1-x64.dll`, `mvnc-hddl.dll` | Is not supported | - |
| GNA | `libGNAPlugin.so` | `libgna.so` | `GNAPlugin.dll` | `gna.dll` | Is not supported | - |
| HETERO | `libHeteroPlugin.so` | Same as for selected plugins | `HeteroPlugin.dll` | Same as for selected plugins | `libHeteroPlugin.so` | Same as for selected plugins |
| MULTI | `libMultiDevicePlugin.so` | Same as for selected plugins | `MultiDevicePlugin.dll` | Same as for selected plugins | `libMultiDevicePlugin.so` | Same as for selected plugins |

> **NOTE**: All plugin libraries also depend on core Inference Engine libraries.

Make sure those libraries are in your computer's path or in the place you pointed to in the plugin loader. Make sure each plugin's related dependencies are in:

* Linux: `LD_LIBRARY_PATH`
* Windows: `PATH`
* macOS: `DYLD_LIBRARY_PATH`

On Linux and macOS, use the script `bin/setupvars.sh` to set the environment variables.

On Windows, run the `bin\setupvars.bat` batch file to set the environment variables.

To learn more about supported devices and corresponding plugins, see the [Supported Devices](supported_plugins/Supported_Devices.md) chapter.

## Common Workflow for Using the Inference Engine API

The common workflow contains the following steps:

1. **Create Inference Engine Core object** - Create an `InferenceEngine::Core` object to work with different devices; all device plugins are managed internally by the `Core` object. Register extensions with custom nGraph operations (`InferenceEngine::Core::AddExtension`).

2. **Read the Intermediate Representation** - Using the `InferenceEngine::Core` class, read an Intermediate Representation file into an object of the `InferenceEngine::CNNNetwork` class. This class represents the network in the host memory.

3. **Prepare inputs and outputs format** - After loading the network, specify input and output precision and the layout on the network. For this specification, use `InferenceEngine::CNNNetwork::getInputsInfo()` and `InferenceEngine::CNNNetwork::getOutputsInfo()`.

4. Pass per-device loading configurations specific to this device (`InferenceEngine::Core::SetConfig`), and register extensions to this device (`InferenceEngine::Core::AddExtension`).

5. **Compile and Load Network to device** - Use the `InferenceEngine::Core::LoadNetwork()` method with a specific device (e.g. `CPU`, `GPU`, etc.) to compile and load the network on the device. Pass in the per-target load configuration for this compilation and load operation.

6. **Set input data** - With the network loaded, you have an `InferenceEngine::ExecutableNetwork` object. Use this object to create an `InferenceEngine::InferRequest` in which you signal the input buffers to use for input and output. Specify a device-allocated memory and copy it into the device memory directly, or tell the device to use your application memory to save a copy.

7. **Execute** - With the input and output memory now defined, choose your execution mode:

    * Synchronously - `InferenceEngine::InferRequest::Infer()` method. Blocks until inference is completed.
    * Asynchronously - `InferenceEngine::InferRequest::StartAsync()` method. Check status with the `InferenceEngine::InferRequest::Wait()` method (0 timeout), wait, or specify a completion callback.

8. **Get the output** - After inference is completed, get the output memory or read the memory you provided earlier. Do this with the `InferenceEngine::IInferRequest::GetBlob()` method.

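The steps above map onto a short program. As an illustration only (using the Python binding rather than the C++ classes named above; the model path and `input_data` are placeholders), the same workflow looks like this:

```python
from openvino.inference_engine import IECore

ie = IECore()                                      # 1. Create the Core object
net = ie.read_network("model.xml")                 # 2. Read the IR ("model.xml" is a placeholder)
input_name = next(iter(net.input_info))            # 3. Inspect the input/output formats
exec_net = ie.load_network(network=net,
                           device_name="CPU")      # 4-5. Configure, compile, and load on a device
request = exec_net.requests[0]                     # 6. Take an infer request
request.infer({input_name: input_data})            # 7. Execute synchronously (input_data is a placeholder)
result = {name: blob.buffer                        # 8. Get the output blobs
          for name, blob in request.output_blobs.items()}
```
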
## Video: Inference Engine Concept

[](https://www.youtube.com/watch?v=e6R13V8nbak)

@sphinxdirective

.. list-table::

   * - .. raw:: html

           <iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="100%"
           src="https://www.youtube.com/embed/e6R13V8nbak">
           </iframe>
   * - **Inference Engine Concept**. Duration: 3:43

@endsphinxdirective

## Further Reading

For more details on the Inference Engine API, refer to the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.

@@ -1,52 +1,106 @@
# Using Dynamic Batching {#openvino_docs_IE_DG_DynamicBatching}

## Using Dynamic Batching (C++)

@sphinxdirective
.. raw:: html

    <div id="switcher-cpp" class="switcher-anchor">C++</div>
@endsphinxdirective

The Dynamic Batching feature allows you to dynamically change the batch size for inference calls within a preset batch size limit. This feature might be useful when the batch size is unknown beforehand and using an extra-large batch size is undesirable or impossible due to resource limitations. For example, applying face detection and then mood labeling to a video, you won't know in advance how many frames will contain a face when you pass inferencing results to a secondary model.

## Usage

You can activate Dynamic Batching by setting the `KEY_DYN_BATCH_ENABLED` flag to `YES` in a configuration map that is passed to the plugin while loading a network. This configuration creates an `ExecutableNetwork` object that allows setting the batch size dynamically in all of its infer requests using the `SetBatch()` method. The batch size that was set in the passed `CNNNetwork` object will be used as the maximum batch size limit.

Here is a code example:

@snippet snippets/DynamicBatching.cpp part0

### Limitations

Currently, certain limitations exist for the use of Dynamic Batching:

* Use Dynamic Batching with CPU and GPU plugins only.
* Use Dynamic Batching on topologies that consist of certain layers only:
  * Convolution
  * Deconvolution
  * Activation
  * LRN
  * Pooling
  * FullyConnected
  * SoftMax
  * Split
  * Concatenation
  * Power
  * Eltwise
  * Crop
  * BatchNormalization
  * Copy

The following types of layers are not supported:

* Layers that might arbitrarily change tensor shape (such as Flatten, Permute, Reshape)
* Layers specific to object detection topologies (ROIPooling, PriorBox, DetectionOutput)
* Custom layers

Topology analysis is performed during the process of loading a network into the plugin, and if the topology is not supported, an exception is generated.

## Using Dynamic Batching (Python)

@sphinxdirective
.. raw:: html

    <div id="switcher-python" class="switcher-anchor">Python</div>
@endsphinxdirective

Dynamic Batching is a feature that allows you to dynamically change the batch size for inference calls within a preset batch size limit. This feature might be useful when the batch size is unknown beforehand and using an extra-large batch size is not desired or impossible due to resource limitations. For example, face detection with person age, gender, or mood recognition is a typical usage scenario.

You can activate Dynamic Batching by setting the "DYN_BATCH_ENABLED" flag to "YES" in a configuration map that is passed to the plugin while loading a network. This configuration creates an `ExecutableNetwork` object that allows setting the batch size dynamically in all of its infer requests, using the [ie_api.batch_size](api/ie_python_api/_autosummary/openvino.inference_engine.IENetwork.html#openvino.inference_engine.IENetwork.batch_size) attribute. The batch size that was set in the passed `CNNNetwork` object will be used as the maximum batch size limit.

```python
from openvino.inference_engine import IECore

ie = IECore()
dyn_config = {"DYN_BATCH_ENABLED": "YES"}
ie.set_config(config=dyn_config, device_name=device)
# Read a network in IR or ONNX format
net = ie.read_network(path_to_model)
net.batch_size = 32  # set the maximum batch size to 32
exec_net = ie.load_network(network=net, device_name=device)
```

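After loading, the effective batch can then be lowered per infer request. A minimal sketch, assuming the first infer request of `exec_net` from the snippet above is used (`input_name` and `frames` are placeholders):

```python
request = exec_net.requests[0]
request.set_batch(16)                # process only 16 items of the 32-item input buffer
request.infer({input_name: frames})  # input_name/frames are placeholders for your input
```
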
### Limitations

Currently, certain limitations exist for the use of Dynamic Batching:

* Use Dynamic Batching with CPU and GPU plugins only.
* Use Dynamic Batching on topologies that consist of certain layers only:
  * Convolution
  * Deconvolution
  * Activation
  * LRN
  * Pooling
  * FullyConnected
  * SoftMax
  * Split
  * Concatenation
  * Power
  * Eltwise
  * Crop
  * BatchNormalization
  * Copy

The following types of layers are not supported:

* Layers that might arbitrarily change tensor shape (such as Flatten, Permute, Reshape)
* Layers specific to object detection topologies (ROIPooling, PriorBox, DetectionOutput)
* Custom layers

Topology analysis is performed during the process of loading a network into the plugin, and if the topology is not supported, an exception is generated.

@@ -1,7 +1,9 @@
# Custom nGraph Operations {#openvino_docs_IE_DG_Extensibility_DG_AddingNGraphOps}

The Inference Engine Extension API allows you to register operation sets (opsets) with custom nGraph operations to support models with operations that OpenVINO™ does not support out-of-the-box.

Besides creating custom nGraph operations, to [support custom operations](../../HOWTO/Custom_Layers_Guide.md) in your model you must also create a Model Optimizer extension for the custom operations and an Inference Engine device plugin extension for the device you will use for inference.

## Operation Class

To add your custom nGraph operation, create a new class that extends `ngraph::Op`, which is in turn derived from `ngraph::Node`, the base class for all graph operations in nGraph. Follow the steps below to add a custom nGraph operation:

@@ -26,8 +28,8 @@ Based on that, declaration of an operation class can look as follows:

The provided implementation has several fields:

* `add` of type `int64_t` is an attribute of a custom operation
* `type_info` of type `ngraph::NodeTypeInfo` defines type and version of an operation

### Operation Constructors

@@ -67,14 +69,13 @@ To add custom operations to the [Extension](Extension.md) class, create an opera

@snippet template_extension/extension.cpp extension:getOpSets

This method returns a map of opsets that exist in the [extension library](Extension.md).
nGraph provides an opset mechanism to group operations into clusters. Different opsets distinguish between different versions of one operation.

When specifying opset names, follow the rules below:
* Use unique opset names.
* Do not use the following built-in opset names: `extension`, `experimental`, `opset1`, `opset2`, `opset3`, ... , `opsetN`.
* [Make sure that the Model Optimizer](../../HOWTO/Custom_Layers_Guide.md) and your extension use the same opset names.
* IR v10 operations have the mandatory `version` attribute specifying the opset.

Operations from the default opset cannot be redefined.

@@ -2,13 +2,13 @@
Inference Engine build infrastructure provides the Inference Engine Package for application development.

To configure the build of your extension library, use the following CMake script:

@snippet template_extension/CMakeLists.txt cmake:extension

This CMake script finds the Inference Engine and nGraph using the `find_package` CMake command.

To build the extension library, run the commands below:

```sh
$ cd template_extension

@@ -1,6 +1,8 @@
# CPU Kernel Custom Operations {#openvino_docs_IE_DG_Extensibility_DG_CPU_Kernel}

To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the CPU device.

The primary means of the performance of the CPU codepath in the Inference Engine is the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), and new CPU kernels extend the Inference Engine plugin for Intel MKL-DNN. Implementing the InferenceEngine::ILayerExecImpl interface defines a general CPU-side extension. There are no Intel MKL-DNN specifics in the way you need to implement a kernel.

## Implementation Class

@@ -20,31 +22,32 @@ The provided implementation has several fields:

### Constructor of Implementation

An implementation constructor checks parameters of an nGraph operation, stores required attributes, and stores an error message in case of an error.

@snippet template_extension/cpu_kernel.cpp cpu_implementation:ctor

### `getSupportedConfigurations`

The InferenceEngine::ILayerExecImpl::getSupportedConfigurations method returns all supported configuration formats (input/output tensor layouts) for your implementation. To specify formats of data, use InferenceEngine::TensorDesc. Refer to the [Memory Primitives](../Memory_primitives.md) section for instructions.

@snippet template_extension/cpu_kernel.cpp cpu_implementation:getSupportedConfigurations

### `init`

The InferenceEngine::ILayerExecImpl::init method gets a runtime-selected configuration from a vector that is populated from the `getSupportedConfigurations` method and checks the parameters:

@snippet template_extension/cpu_kernel.cpp cpu_implementation:init

### `execute`

The InferenceEngine::ILayerExecImpl::execute method accepts and processes the actual tensors as input/output blobs:

@snippet template_extension/cpu_kernel.cpp cpu_implementation:execute

## Register Implementation in `Extension` Class

To register a custom kernel implementation in the [Extension](Extension.md) class, implement the following methods:

* <a href="#getImplTypes">getImplTypes</a>
* <a href="#getImplementation">getImplementation</a>

@@ -66,4 +69,3 @@ InferenceEngine::IExtension::getImplementation returns the kernel implementation

Use the `AddExtension` method of the general plugin interface to load your primitives:

@snippet snippets/CPU_Kernel.cpp part0

@@ -1,7 +1,7 @@
# Custom ONNX* Operators {#openvino_docs_IE_DG_Extensibility_DG_Custom_ONNX_Ops}

The ONNX\* importer provides a mechanism to register custom ONNX operators based on predefined or custom nGraph operations.
The function responsible for registering a new operator is called `ngraph::onnx_import::register_operator` and is defined in [`onnx_import/onnx_utils.hpp`](https://docs.openvinotoolkit.org/latest/ngraph_cpp_api/onnx__utils_8hpp_source.html).

## Register Custom ONNX Operator Based on Predefined nGraph Operations

@@ -14,18 +14,22 @@ x < 0 => f(x) = x * beta

where `alpha` and `beta` are float constants.

1. Include headers:

@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:headers

2. Register the CustomRelu operator in the ONNX importer:

@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:register_operator

The `register_operator` function takes four arguments: op_type, opset version, domain, and a function object.
The function object is a user-defined function that takes `ngraph::onnx_import::Node` as an input and, based on that, returns a graph with nGraph operations.
The `ngraph::onnx_import::Node` class represents a node in an ONNX model. It provides functions to fetch input node(s) using `get_ng_inputs`, attribute value using `get_attribute_value`, and many more. See [`onnx_import/core/node.hpp`](https://docs.openvinotoolkit.org/latest/ngraph_cpp_api/core_2include_2ngraph_2node_8hpp_source.html) for the full class declaration.

New operator registration must happen before an ONNX model is read. For example, if a model uses the `CustomRelu` operator, call `register_operator("CustomRelu", ...)` before InferenceEngine::Core::ReadNetwork.
Reregistering ONNX operators within the same process is supported. If you register an existing operator, you get a warning.

The example below demonstrates a model that requires a previously created `CustomRelu` operator:

@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:model

@@ -33,27 +37,30 @@ To create a graph with nGraph operations, visit [Custom nGraph Operations](Addin

For a complete list of predefined nGraph operators, visit [Available Operations Sets](../../ops/opset.md).

If you do not need an operator anymore, unregister it by calling `unregister_operator`. The function takes three arguments: `op_type`, `version`, and `domain`.

@snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:unregister_operator

## Register Custom ONNX Operator Based on Custom nGraph Operations

The same principles apply when registering a custom ONNX operator based on custom nGraph operations.
This example shows how to register a custom ONNX operator based on `Operation` presented in [this tutorial](AddingNGraphOps.md), which is used in [TemplateExtension](Extension.md):

@snippet template_extension/extension.cpp extension:ctor

Here, the `register_operator` function is called in the constructor of Extension. The constructor makes sure that the function is called before InferenceEngine::Core::ReadNetwork, because InferenceEngine::Core::AddExtension must be called before a model with a custom operator is read.

The example below demonstrates how to unregister an operator from the destructor of Extension:

@snippet template_extension/extension.cpp extension:dtor

> **REQUIRED**: It is mandatory to unregister a custom ONNX operator if it is defined in a dynamic shared library.

## Requirements for Building with CMake

A program that uses the `register_operator` functionality requires the `ngraph::ngraph` and `ngraph::onnx_ngraph_frontend` libraries in addition to the Inference Engine.
The `onnx_ngraph_frontend` is a component of the `ngraph` package, so `find_package(ngraph REQUIRED COMPONENTS onnx_ngraph_frontend)` can find both.
Those libraries need to be passed to the `target_link_libraries` command in the CMakeLists.txt file.

See CMakeLists.txt below for reference:

@snippet onnx_custom_op/CMakeLists.txt cmake:onnx_custom_op

@@ -1,15 +1,17 @@
# How to Implement Custom GPU Operations {#openvino_docs_IE_DG_Extensibility_DG_GPU_Kernel}

To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the GPU device.

The GPU codepath abstracts many details about OpenCL\*. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.

There are two options for using the custom operation configuration file:

* Include a section with your kernels into the global automatically-loaded `cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file, which is hosted in the `<INSTALL_DIR>/deployment_tools/inference_engine/bin/intel64/{Debug/Release}` folder
* Call the `InferenceEngine::Core::SetConfig()` method from your application with the `InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:

@snippet snippets/GPU_Kernel.cpp part0

All Inference Engine samples, except the trivial `hello_classification`, and most Open Model Zoo demos
feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
```sh
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU

@@ -132,8 +134,8 @@ queuing an OpenCL program for execution.

## Example Configuration File

The following code sample provides an example configuration file in XML
format. For information on the configuration file structure, see
[Configuration File Format](#config-file-format).
```xml
<CustomLayer name="ReLU" type="SimpleGPU" version="1">

@@ -208,12 +210,12 @@ __kernel void example_relu_kernel(
}
```

> **NOTE**: As described in the previous section, all items like
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
> the Inference Engine for efficiency reasons. See [Debugging
> Tips](#debugging-tips) for information on debugging the results.

> **NOTE**: Several GPU-targeted kernels are also added to the binaries upon compilation of samples
> so that the sample application can easily load them.
> Refer to the `cldnn_global_custom_kernels` folder in the GPU plugin installation directory.

@@ -221,10 +223,11 @@ __kernel void example_relu_kernel(

* **Using `printf` in the OpenCL™ Kernels**.
  To debug the specific values, you can use `printf` in your kernels.
  However, be careful not to output excessively, which
  could generate too much data. The `printf` output is typical, so
  your output can be truncated to fit the buffer. Also, because of
  buffering, you actually get an entire buffer of output when the
  execution ends.<br>

  For more information, refer to the [printf
  Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).

@@ -1,27 +1,39 @@
# Inference Engine Extensibility Mechanism {#openvino_docs_IE_DG_Extensibility_DG_Intro}

@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_IE_DG_Extensibility_DG_AddingNGraphOps
   openvino_docs_IE_DG_Extensibility_DG_Custom_ONNX_Ops
   CPU Kernels Extensibility <openvino_docs_IE_DG_Extensibility_DG_CPU_Kernel>
   GPU Kernels Extensibility <openvino_docs_IE_DG_Extensibility_DG_GPU_Kernel>
   VPU Kernels Extensibility <openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel>
   openvino_docs_IE_DG_Extensibility_DG_Extension
   openvino_docs_IE_DG_Extensibility_DG_Building

@endsphinxdirective

If your model contains operations not normally supported by OpenVINO™, the Inference Engine Extensibility API lets you add support for those custom operations in a library containing custom nGraph operation sets, corresponding extensions to the Model Optimizer, and a device plugin extension. See the overview in the [Custom Operations Guide](../../HOWTO/Custom_Layers_Guide.md) to learn how these work together. Physically, an extension library can be represented as a dynamic library exporting the single `CreateExtension` function that creates a new extension instance.

To load the Extensibility library to the `InferenceEngine::Core` object, use the `InferenceEngine::Core::AddExtension` method.

## Inference Engine Extension Library

An Inference Engine Extension dynamic library contains the following components:

* [Extension Library](Extension.md):
    - Contains custom operation sets
    - Provides CPU implementations for custom operations
* [Custom nGraph Operation](AddingNGraphOps.md):
    - Enables the use of `InferenceEngine::Core::ReadNetwork` to read Intermediate Representation (IR) with unsupported operations
    - Enables the creation of `ngraph::Function` with unsupported operations
    - Provides a shape inference mechanism for custom operations

> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension), which demonstrates extension development details. You can review the complete code, which is fully compilable and up-to-date, to see how it works.

## Execution Kernels
