# Compare commits

239 Commits

2022.3.1 ... releases/2
| Author | SHA1 | Date |
|---|---|---|
|  | ec53191909 |  |
|  | 1ee54505a0 |  |
|  | a75a93252e |  |
|  | 3b72991477 |  |
|  | f1479f19a9 |  |
|  | 7e6e08571a |  |
|  | 08e3ed0966 |  |
|  | 645847fae1 |  |
|  | d8a8daa1bb |  |
|  | f3551dd009 |  |
|  | c078192273 |  |
|  | 441496d79b |  |
|  | da99f390c4 |  |
|  | 8d3fa0e6c2 |  |
|  | ae60d612c6 |  |
|  | bd5fc754c7 |  |
|  | 92987ecb33 |  |
|  | b149ea0e42 |  |
|  | 0eae631776 |  |
|  | 4105180e99 |  |
|  | 97ed68051a |  |
|  | c1dfd56358 |  |
|  | 394d3b481a |  |
|  | 1b90b897d1 |  |
|  | da2aa1aac0 |  |
|  | ce1aa513a0 |  |
|  | 69c166bf3a |  |
|  | 2c48911c49 |  |
|  | 801deae368 |  |
|  | 448b4bb838 |  |
|  | 179cb63b00 |  |
|  | 608d002402 |  |
|  | ec21e6906b |  |
|  | 961abdb0b7 |  |
|  | bf02b11a63 |  |
|  | 673d61126f |  |
|  | 79cf494d27 |  |
|  | 993857a52b |  |
|  | af945e4913 |  |
|  | 850b88983f |  |
|  | 332d4d3b69 |  |
|  | f169440f83 |  |
|  | ef97282841 |  |
|  | 49afa6bb06 |  |
|  | f303df8a63 |  |
|  | 1a3a3e89ec |  |
|  | 81cb88b6c5 |  |
|  | 4da2c945d6 |  |
|  | e57005afcb |  |
|  | 4dbdba1ac3 |  |
|  | c6e7336118 |  |
|  | ab52ba5efd |  |
|  | 8be1ae96bc |  |
|  | e829bfd858 |  |
|  | c6b0b9c255 |  |
|  | b00cbf59cb |  |
|  | 1e2c657895 |  |
|  | 7c78f17438 |  |
|  | 737319b6a0 |  |
|  | cd74d8c668 |  |
|  | 5e3f0720cd |  |
|  | 140cf689a2 |  |
|  | 73fe0afe3e |  |
|  | b0c7a05d24 |  |
|  | 31a7187ccb |  |
|  | 894b501bce |  |
|  | d78bcbe150 |  |
|  | 897dff88a7 |  |
|  | de3cdf1067 |  |
|  | 65ac02865f |  |
|  | 01822ff343 |  |
|  | 59e2f86f9b |  |
|  | d966611e28 |  |
|  | d1d95ff5fc |  |
|  | 28ee91fa46 |  |
|  | d771eb44f4 |  |
|  | 27b76dba44 |  |
|  | 7be33a6079 |  |
|  | d437958466 |  |
|  | 9d6b193201 |  |
|  | a57e3a9697 |  |
|  | fe1954aa25 |  |
|  | bc8582469e |  |
|  | 4795a4ac4a |  |
|  | 33960aa4e8 |  |
|  | cad5b795f8 |  |
|  | 1ed2e8b156 |  |
|  | e4d599713a |  |
|  | b5562c4ddf |  |
|  | 3c9745990c |  |
|  | 128e950d49 |  |
|  | f4c8920cf3 |  |
|  | 67934ce37e |  |
|  | d183c1ca44 |  |
|  | f43b1ef805 |  |
|  | dc8fcaf6e2 |  |
|  | c8c5c2eb14 |  |
|  | 7f01b0a8eb |  |
|  | 0b5cd796d4 |  |
|  | f361cc2d6b |  |
|  | 2e8acae6f2 |  |
|  | ea92b38c44 |  |
|  | 20d2477124 |  |
|  | 356289adc1 |  |
|  | 3717201e99 |  |
|  | 5bf210cbea |  |
|  | 6835565610 |  |
|  | 2d2af81a08 |  |
|  | f08632615c |  |
|  | db49a6b662 |  |
|  | b5db7ec6b1 |  |
|  | 12ca62bed5 |  |
|  | 465f19ae60 |  |
|  | 4066218fa0 |  |
|  | 9c652198f0 |  |
|  | 8856b95234 |  |
|  | 944c8b7fb5 |  |
|  | 5792a4a6df |  |
|  | 8fa3b23c6d |  |
|  | d83741f433 |  |
|  | 9a0a0c4e2c |  |
|  | 37a0278204 |  |
|  | 48ea77df85 |  |
|  | 4451ef7d42 |  |
|  | 427900eca7 |  |
|  | 72c3bf222b |  |
|  | 80f1677c2c |  |
|  | 7d184040eb |  |
|  | caaef49639 |  |
|  | af16ea1d79 |  |
|  | dcc8f926e1 |  |
|  | c0762847a7 |  |
|  | af29d221b4 |  |
|  | 0f5a45c875 |  |
|  | facf990dfd |  |
|  | eb24795c66 |  |
|  | e21b51a53b |  |
|  | b2c00c66a7 |  |
|  | d84da15de5 |  |
|  | 917a465a00 |  |
|  | 320ed5b94c |  |
|  | 37097c71cc |  |
|  | 9b170e63fd |  |
|  | 41fa6f360b |  |
|  | 1e9da3f5de |  |
|  | 6987465875 |  |
|  | a0b661a274 |  |
|  | a466b3fea6 |  |
|  | cb6b1fe56f |  |
|  | 66d3048598 |  |
|  | d2e06d4f25 |  |
|  | 72d7b518ca |  |
|  | 41a404f290 |  |
|  | abaa9e6404 |  |
|  | 429c7265df |  |
|  | 7123433ce3 |  |
|  | 319e95e419 |  |
|  | bafd45502b |  |
|  | 4ea602bc7e |  |
|  | 891f1c49bc |  |
|  | a3f8cef198 |  |
|  | 826a54dc20 |  |
|  | 99b8c80677 |  |
|  | f409e95768 |  |
|  | d2f7816e6f |  |
|  | 3980672082 |  |
|  | 6a20d1408e |  |
|  | 1e5fec7e25 |  |
|  | 188746224c |  |
|  | aa1a607328 |  |
|  | 6fecdbca36 |  |
|  | f87e00398d |  |
|  | 4cdd8119da |  |
|  | 714b1de678 |  |
|  | 0000550371 |  |
|  | a6bfc0cf0e |  |
|  | b4d18bb406 |  |
|  | 4d9443eb0e |  |
|  | d770b535fb |  |
|  | 5a0dea4a46 |  |
|  | c8d57bbc77 |  |
|  | a8f2365563 |  |
|  | 8a1d34d317 |  |
|  | 7fe32c89ae |  |
|  | 067c21f110 |  |
|  | aafabb41b8 |  |
|  | e03fbd5c15 |  |
|  | 4e02bd2771 |  |
|  | 8ca594f49a |  |
|  | 2c78fdb7c7 |  |
|  | 544b3f8191 |  |
|  | 5bd1e64a42 |  |
|  | 66257530e3 |  |
|  | cfbf5a1808 |  |
|  | 4f03abe2ca |  |
|  | 389c970c12 |  |
|  | 29628a89b7 |  |
|  | f3adf63f6b |  |
|  | c0212a361a |  |
|  | d8d5dfb34a |  |
|  | e628fae196 |  |
|  | f0f6896fc0 |  |
|  | 9163114290 |  |
|  | 53a3cb377b |  |
|  | ac805c66e1 |  |
|  | c9afc5a5c1 |  |
|  | d328b00e48 |  |
|  | 1788c86943 |  |
|  | 32713f744d |  |
|  | ea302afb47 |  |
|  | 5871d5dc38 |  |
|  | 125adeaf29 |  |
|  | a1bd02e633 |  |
|  | 3068b3823c |  |
|  | 71b97b69a8 |  |
|  | e9030cca21 |  |
|  | c591d773d4 |  |
|  | 5f4999117d |  |
|  | c9f9795d29 |  |
|  | 28922e2080 |  |
|  | 4a88aa0493 |  |
|  | 3a72200f92 |  |
|  | fdae95a769 |  |
|  | 483f38e6d8 |  |
|  | c144702d8b |  |
|  | 79db96d61e |  |
|  | 123f8e62bf |  |
|  | 8c80f9ff58 |  |
|  | de5e9bb397 |  |
|  | 32f800c6a6 |  |
|  | b492f98d30 |  |
|  | 9c49b71c11 |  |
|  | 02330bc11c |  |
|  | 4412e1ddfa |  |
|  | b7b3f0ab4a |  |
|  | 0621e8cf28 |  |
|  | 9d6d84088f |  |
|  | a63dad6fdd |  |
|  | bbc1c26750 |  |
```diff
@@ -4,8 +4,21 @@ trigger:
     - master
     - releases/*
   paths:
     exclude:
-    - docs/*
+    - docs/
+    - /**/docs/*
+    - /**/*.md
+
+pr:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/
+    - /**/docs/*
+    - /**/*.md
 
 resources:
   repositories:
@@ -13,7 +26,7 @@ resources:
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/openvino_contrib
-    ref: master
+    ref: releases/2022/2
 
 jobs:
 - job: android_arm64
@@ -110,11 +123,11 @@ jobs:
       -DANDROID_ABI=$(ANDROID_ABI_CONFIG)
       -DANDROID_STL=c++_shared
       -DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
-      -DENABLE_OPENCV=OFF
       -DENABLE_TESTS=ON
       -DENABLE_SAMPLES=ON
       -DENABLE_INTEL_MYRIAD=OFF
       -DBUILD_java_api=ON
+      -DBUILD_cuda_plugin=OFF
       -DTHREADING=SEQ
       -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
       -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
```
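This same trigger hunk reappears in nearly every pipeline file in this compare, so it is worth spelling out once. Reconstructed from the added lines (indentation is assumed here, since the compare view strips it), the resulting filter section of each file would look roughly like this:

```yaml
# Sketch of the post-change filters shared by these pipelines; indentation assumed.
trigger:
  branches:
    include:
    - master
    - releases/*
  paths:
    exclude:
    - docs/
    - /**/docs/*
    - /**/*.md

pr:
  branches:
    include:
    - master
    - releases/*
  paths:
    exclude:
    - docs/
    - /**/docs/*
    - /**/*.md
```

The practical effect is that documentation-only changes (the `docs/` tree, nested `docs/` folders, and Markdown files) no longer trigger CI builds, for pushes and pull requests alike.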
```diff
@@ -4,8 +4,21 @@ trigger:
     - master
     - releases/*
   paths:
     exclude:
-    - docs/*
+    - docs/
+    - /**/docs/*
+    - /**/*.md
+
+pr:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/
+    - /**/docs/*
+    - /**/*.md
 
 resources:
   repositories:
@@ -13,13 +26,13 @@ resources:
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/openvino_contrib
-    ref: master
+    ref: releases/2022/2
 
   - repository: testdata
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/testdata
-    ref: master
+    ref: releases/2022/2
 
 jobs:
 - job: Lin
@@ -161,6 +174,7 @@ jobs:
       -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
       -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
       -DCMAKE_C_COMPILER_LAUNCHER=ccache
+      -DBUILD_cuda_plugin=OFF
       $(REPO_DIR)
     workingDirectory: $(BUILD_DIR)
 
@@ -214,7 +228,6 @@ jobs:
       set -e
      mkdir -p $(INSTALL_DIR)/opencv/
       cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
-      cp -R $(REPO_DIR)/temp/opencv_4.5.2_ubuntu20/opencv/* $(INSTALL_DIR)/opencv/
     workingDirectory: $(BUILD_DIR)
     displayName: 'Install tests'
 
@@ -332,7 +345,7 @@ jobs:
   - script: . $(SETUPVARS) && $(INSTALL_TEST_DIR)/cpuFuncTests --gtest_filter=*smoke* --gtest_print_time=1 --gtest_output=xml:TEST-cpuFuncTests.xml
     displayName: 'CPU FuncTests'
     continueOnError: false
-    condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF')
+    condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
 
   - script: |
       export DATA_PATH=$(MODELS_PATH)
@@ -341,13 +354,6 @@ jobs:
     displayName: 'IE CAPITests'
     continueOnError: false
 
-  - script: |
-      export DATA_PATH=$(MODELS_PATH)
-      export MODELS_PATH=$(MODELS_PATH)
-      . $(SETUPVARS) && $(INSTALL_TEST_DIR)/OpenVinoCAPITests --gtest_output=xml:TEST-OpenVinoCAPITests.xml
-    displayName: 'OV CAPITests'
-    continueOnError: false
-
   - task: CMake@1
     inputs:
       cmakeArgs: >
```
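The `condition` edit on the `cpuFuncTests` step deserves a note: in Azure Pipelines, specifying any custom `condition` replaces the implicit `succeeded()` check, so the old bare `eq(...)` expression would let the step run even after an earlier step had failed. A minimal sketch of the corrected pattern (the `script` line here is a placeholder, not the real command):

```yaml
steps:
- script: echo "run the gated step here"   # placeholder command for illustration
  displayName: 'CPU FuncTests'
  # and(succeeded(), ...) restores skip-on-failure while keeping the static-build gate.
  condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
```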
```diff
@@ -4,8 +4,21 @@ trigger:
     - master
     - releases/*
   paths:
     exclude:
-    - docs/*
+    - docs/
+    - /**/docs/*
+    - /**/*.md
+
+pr:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/
+    - /**/docs/*
+    - /**/*.md
 
 resources:
   repositories:
@@ -13,7 +26,7 @@ resources:
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/openvino_contrib
-    ref: master
+    ref: releases/2022/2
 
 jobs:
 - job: linux_arm64
@@ -127,7 +140,6 @@ jobs:
       -GNinja
       -DVERBOSE_BUILD=ON
       -DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-      -DENABLE_OPENCV=OFF
       -DPYTHON_INCLUDE_DIRS=$(INSTALL_PYTHON)/include/python3.8
       -DPYTHON_LIBRARY=$(INSTALL_PYTHON)/lib/libpython3.8.so
       -DENABLE_PYTHON=ON
@@ -143,6 +155,7 @@ jobs:
       -DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
       -DENABLE_SAMPLES=ON
       -DBUILD_java_api=OFF
+      -DBUILD_cuda_plugin=OFF
       -DENABLE_INTEL_MYRIAD=OFF
       -DTHREADING=SEQ
       -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
```
```diff
@@ -4,8 +4,21 @@ trigger:
     - master
     - releases/*
   paths:
     exclude:
-    - docs/*
+    - docs/
+    - /**/docs/*
+    - /**/*.md
+
+pr:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/
+    - /**/docs/*
+    - /**/*.md
 
 jobs:
 - job: LinCC
@@ -21,7 +34,6 @@ jobs:
     VSTS_HTTP_TIMEOUT: 200
     BUILD_TYPE: Release
     REPO_DIR: $(Build.Repository.LocalPath)
-    OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
     MODELS_PATH: $(REPO_DIR)/../testdata
     WORK_DIR: $(Pipeline.Workspace)/_w
     BUILD_DIR: $(WORK_DIR)/build
```
```diff
@@ -4,7 +4,7 @@ resources:
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/openvino_contrib
-    ref: master
+    ref: releases/2022/2
 
 jobs:
 - job: Lin
```
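The recurring `ref: master` to `ref: releases/2022/2` edits all land inside Azure Pipelines repository resources. Assembled from the context lines, a pinned resource in these files has roughly the following shape (the `repository:` alias line is an assumption; the compare view does not show it for `openvino_contrib`):

```yaml
resources:
  repositories:
  - repository: openvino_contrib        # alias assumed; not visible in the compare view
    type: github
    endpoint: openvinotoolkit
    name: openvinotoolkit/openvino_contrib
    ref: releases/2022/2                # pinned to the release branch instead of master
```

Pinning `ref` this way keeps a release branch reproducible: CI keeps checking out the matching `openvino_contrib` and `testdata` revisions even as their `master` branches move on.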
```diff
@@ -4,7 +4,7 @@
 #    type: github
 #    endpoint: openvinotoolkit
 #    name: openvinotoolkit/testdata
-#    ref: master
+#    ref: releases/2022/2
 
 jobs:
 - job: Lin_lohika
```
```diff
@@ -4,8 +4,21 @@ trigger:
     - master
     - releases/*
   paths:
     exclude:
-    - docs/*
+    - docs/
+    - /**/docs/*
+    - /**/*.md
+
+pr:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/
+    - /**/docs/*
+    - /**/*.md
 
 jobs:
 - job: OpenVINO_ONNX_CI
```
```diff
@@ -4,8 +4,21 @@ trigger:
     - master
     - releases/*
   paths:
     exclude:
-    - docs/*
+    - docs/
+    - /**/docs/*
+    - /**/*.md
+
+pr:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/
+    - /**/docs/*
+    - /**/*.md
 
 jobs:
 - job: onnxruntime
@@ -95,7 +108,6 @@ jobs:
       -DPYTHON_EXECUTABLE=/usr/bin/python3.8
       -DENABLE_INTEL_MYRIAD_COMMON=OFF
       -DENABLE_INTEL_GNA=OFF
-      -DENABLE_OPENCV=OFF
       -DENABLE_CPPLINT=OFF
       -DENABLE_TESTS=OFF
       -DENABLE_INTEL_CPU=ON
```
```diff
@@ -4,8 +4,21 @@ trigger:
     - master
     - releases/*
   paths:
     exclude:
-    - docs/*
+    - docs/
+    - /**/docs/*
+    - /**/*.md
+
+pr:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/
+    - /**/docs/*
+    - /**/*.md
 
 resources:
   repositories:
@@ -13,13 +26,13 @@ resources:
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/openvino_contrib
-    ref: master
+    ref: releases/2022/2
 
   - repository: testdata
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/testdata
-    ref: master
+    ref: releases/2022/2
 
 jobs:
 - job: Mac
@@ -101,7 +114,7 @@ jobs:
       export PATH="/usr/local/opt/cython/bin:$PATH"
       export CC=gcc
       export CXX=g++
-      cmake -GNinja -DVERBOSE_BUILD=ON -DENABLE_REQUIREMENTS_INSTALL=OFF -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=OFF -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache $(REPO_DIR)
+      cmake -GNinja -DVERBOSE_BUILD=ON -DENABLE_REQUIREMENTS_INSTALL=OFF -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_PYTHON=ON -DENABLE_TESTS=OFF -DENABLE_STRICT_DEPENDENCIES=OFF -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules -DCMAKE_CXX_COMPILER_LAUNCHER=ccache -DCMAKE_C_COMPILER_LAUNCHER=ccache -DBUILD_cuda_plugin=OFF $(REPO_DIR)
     workingDirectory: $(BUILD_DIR)
     displayName: 'CMake'
 
@@ -145,7 +158,6 @@ jobs:
       set -e
       mkdir -p $(INSTALL_DIR)/opencv/
       cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
-      cp -R $(REPO_DIR)/temp/opencv_4.5.2_osx/opencv/* $(INSTALL_DIR)/opencv/
     workingDirectory: $(BUILD_DIR)
     displayName: 'Install tests'
 
@@ -212,14 +224,6 @@ jobs:
     continueOnError: false
     enabled: false
 
-  - script: |
-      export DATA_PATH=$(MODELS_PATH)
-      export MODELS_PATH=$(MODELS_PATH)
-      . $(SETUPVARS) && $(INSTALL_TEST_DIR)/OpenVinoCAPITests --gtest_output=xml:TEST-OpenVinoCAPITests.xml
-    displayName: 'IE CAPITests'
-    continueOnError: false
-    enabled: false
-
   - task: PublishTestResults@2
     condition: always()
     inputs:
```
```diff
@@ -4,8 +4,21 @@ trigger:
     - master
     - releases/*
   paths:
     exclude:
-    - docs/*
+    - docs/
+    - /**/docs/*
+    - /**/*.md
+
+pr:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/
+    - /**/docs/*
+    - /**/*.md
 
 resources:
   repositories:
@@ -13,13 +26,13 @@ resources:
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/openvino_contrib
-    ref: master
+    ref: releases/2022/2
 
   - repository: testdata
     type: github
     endpoint: openvinotoolkit
     name: openvinotoolkit/testdata
-    ref: master
+    ref: releases/2022/2
 
 jobs:
 - job: Win
@@ -32,7 +45,7 @@ jobs:
     maxParallel: 2
 
   # About 150% of total time
   timeoutInMinutes: 270 #Temporary change
 
   pool:
     name: WIN_VMSS_VENV_D8S_WU2
@@ -135,7 +148,7 @@ jobs:
 
   - script: |
      set PATH=$(WORK_DIR)\ninja-win;%PATH%
-      call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR)
+      call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" -DENABLE_WHEEL=ON -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) -DENABLE_REQUIREMENTS_INSTALL=OFF -DENABLE_FASTER_BUILD=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DENABLE_STRICT_DEPENDENCIES=OFF -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.7.6\x64\python.exe" -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.7.6\x64\include" -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.7.6\x64\libs\python37.lib" -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DBUILD_cuda_plugin=OFF $(REPO_DIR)
     workingDirectory: $(BUILD_DIR)
     displayName: 'CMake'
 
@@ -195,7 +208,7 @@ jobs:
     displayName: 'Samples Smoke Tests'
     continueOnError: false
 
-  - script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake && xcopy $(REPO_DIR)\temp\opencv_4.5.2\opencv\* $(INSTALL_DIR)\opencv\ /e /h /y
+  - script: $(CMAKE_CMD) -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P cmake_install.cmake
     workingDirectory: $(BUILD_DIR)
     displayName: 'Install tests'
 
@@ -276,7 +289,7 @@ jobs:
   - script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\cpuFuncTests --gtest_filter=*smoke* --gtest_output=xml:TEST-cpuFuncTests.xml
     displayName: 'CPU FuncTests'
     continueOnError: false
-    condition: eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF')
+    condition: and(succeeded(), eq(variables['CMAKE_BUILD_SHARED_LIBS'], 'OFF'))
 
   - script: |
       set DATA_PATH=$(MODELS_PATH)
@@ -285,13 +298,6 @@ jobs:
     displayName: 'IE CAPITests'
     continueOnError: false
 
-  - script: |
-      set DATA_PATH=$(MODELS_PATH)
-      set MODELS_PATH=$(MODELS_PATH)
-      call $(SETUPVARS) && $(INSTALL_TEST_DIR)\OpenVinoCAPITests --gtest_output=xml:TEST-OpenVinoCAPITests.xml
-    displayName: 'OV CAPITests'
-    continueOnError: false
-
   - task: PublishTestResults@2
     condition: always()
     inputs:
```
```diff
@@ -1,11 +1,24 @@
 trigger:
+  branches:
+    include:
+    - master
+    - releases/*
+  paths:
+    exclude:
+    - docs/
+    - /**/docs/*
+    - /**/*.md
+
+pr:
   branches:
     include:
     - master
     - releases/*
   paths:
     exclude:
-    - docs/*
+    - docs/
+    - /**/docs/*
+    - /**/*.md
 
 jobs:
 - job: WinCC
@@ -21,7 +34,6 @@ jobs:
     VSTS_HTTP_TIMEOUT: 200
     BUILD_TYPE: Release
     REPO_DIR: $(Build.Repository.LocalPath)
-    OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
     MODELS_PATH: $(REPO_DIR)\..\testdata
     WORK_DIR: $(Pipeline.Workspace)\_w
     BUILD_DIR: $(WORK_DIR)\build
```
|||||||
@@ -59,7 +59,6 @@ RUN cmake .. \
|
|||||||
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
|
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
|
||||||
-DENABLE_INTEL_MYRIAD_COMMON=OFF \
|
-DENABLE_INTEL_MYRIAD_COMMON=OFF \
|
||||||
-DENABLE_INTEL_GNA=OFF \
|
-DENABLE_INTEL_GNA=OFF \
|
||||||
-DENABLE_OPENCV=OFF \
|
|
||||||
-DENABLE_CPPLINT=OFF \
|
-DENABLE_CPPLINT=OFF \
|
||||||
-DENABLE_NCC_STYLE=OFF \
|
-DENABLE_NCC_STYLE=OFF \
|
||||||
-DENABLE_TESTS=OFF \
|
-DENABLE_TESTS=OFF \
|
||||||
|
|||||||
**.gitattributes** (1 changed line, vendored)

```diff
@@ -64,6 +64,7 @@
 *.gif filter=lfs diff=lfs merge=lfs -text
 *.vsdx filter=lfs diff=lfs merge=lfs -text
 *.bmp filter=lfs diff=lfs merge=lfs -text
+*.svg filter=lfs diff=lfs merge=lfs -text
 
 #POT attributes
 tools/pot/tests/data/test_cases_refs/* filter=lfs diff=lfs merge=lfs -text
```
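The added `*.svg` rule mirrors the surrounding entries: `filter=lfs diff=lfs merge=lfs` hands matching files to the Git LFS clean and smudge filters, and `-text` disables newline normalization so the files are treated as binary.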
**.github/workflows/build_doc.yml** (10 changed lines, vendored)

```diff
@@ -4,7 +4,7 @@ on: [push, pull_request]
 jobs:
   Build_Doc:
     if: github.repository == 'openvinotoolkit/openvino'
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-22.04
     steps:
       - name: Clone OpenVINO
         uses: actions/checkout@v2
@@ -17,11 +17,11 @@ jobs:
           set -e
           # install doc dependencies
           sudo apt update
-          sudo apt --assume-yes install libusb-1.0-0-dev graphviz texlive
+          sudo apt --assume-yes install libusb-1.0-0-dev graphviz texlive liblua5.2-0
           cd docs
-          python -m pip install -r requirements.txt --user
+          python3 -m pip install -r requirements.txt --user
           cd openvino_sphinx_theme
-          python setup.py install --user
+          python3 setup.py install --user
           cd ../..
           # install doxyrest
           wget https://github.com/vovkos/doxyrest/releases/download/doxyrest-2.1.3/doxyrest-2.1.3-linux-amd64.tar.xz
@@ -43,7 +43,7 @@ jobs:
         run: |
           mkdir build
           cd build
-          cmake -DENABLE_DOCS=ON -DENABLE_PYTHON=ON -DNGRAPH_PYTHON_BUILD_ENABLE=ON -DCMAKE_BUILD_TYPE=Release ..
+          cmake -DENABLE_DOCS=ON -DENABLE_PYTHON=ON -DCMAKE_BUILD_TYPE=Release ..
 
       - name: Build doc
         run: |
```
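A pattern across the workflow diffs in this range: jobs are moved off `ubuntu-18.04` (and in some cases `ubuntu-20.04`) to newer hosted runners, most likely because GitHub was retiring the older Ubuntu images at the time. The accompanying dependency bumps, `python3` instead of `python`, `libclang-14-dev` instead of `libclang-12-dev`, and the extra `liblua5.2-0` package above, follow from the newer base images.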
**.github/workflows/check_pr_commits.yml** (2 changed lines, vendored)

```diff
@@ -3,7 +3,7 @@ on: [pull_request]
 
 jobs:
   Checks:
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-22.04
     steps:
       - name: Clone OpenVINO
         uses: actions/checkout@v2
```
**.github/workflows/code_style.yml** (8 changed lines, vendored)

```diff
@@ -48,7 +48,7 @@ jobs:
           path: build/code_style_diff.diff
 
   ShellCheck:
-    runs-on: ubuntu-18.04
+    runs-on: ubuntu-22.04
     steps:
       - uses: actions/checkout@v2
         with:
@@ -73,7 +73,7 @@ jobs:
         working-directory: build
 
   NamingConventionCheck:
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-22.04
     steps:
       - uses: actions/checkout@v2
         with:
@@ -82,8 +82,8 @@ jobs:
       - name: Install Clang dependency
         run: |
           sudo apt update
-          sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11
-          sudo apt --assume-yes install libclang-12-dev
+          sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13
+          sudo apt --assume-yes install libclang-14-dev
 
       - name: Install Python-based dependencies
         run: python3 -m pip install -r cmake/developer_package/ncc_naming_style/requirements_dev.txt
```
**.github/workflows/files_size.yml** (2 changed lines, vendored)

```diff
@@ -3,7 +3,7 @@ on: [push, pull_request]
 
 jobs:
   Check_Files_Size:
-    runs-on: ubuntu-18.04
+    runs-on: ubuntu-22.04
     steps:
       - uses: actions/checkout@v2
 
```
**.github/workflows/mo.yml** (2 changed lines, vendored)

```diff
@@ -9,7 +9,7 @@ on:
 
 jobs:
   Pylint-UT:
-    runs-on: ubuntu-18.04
+    runs-on: ubuntu-22.04
     steps:
       - uses: actions/checkout@v2
         with:
```
**.github/workflows/py_checks.yml** (6 changed lines, vendored)

```diff
@@ -6,13 +6,15 @@ on:
     paths:
       - 'src/bindings/python/**'
       - 'samples/python/**'
+      - '.github/workflows/py_checks.yml'
   pull_request:
     paths:
       - 'src/bindings/python/**'
       - 'samples/python/**'
+      - '.github/workflows/py_checks.yml'
 jobs:
   linters:
-    runs-on: ubuntu-18.04
+    runs-on: ubuntu-20.04
     steps:
       - name: Code checkout
         uses: actions/checkout@v2
@@ -121,4 +123,4 @@ jobs:
         run: python -m bandit -r ./ -f screen
         working-directory: src/bindings/python/src/compatibility/openvino
 
 
```
**.gitignore** (1 changed line, vendored)

```diff
@@ -1,6 +1,7 @@
 # build/artifact dirs
 _*
 [Bb]uild*/
+cmake-build*
 
 # but ensure we don't skip __init__.py and __main__.py
 !__init__.py
```
**README.md** (92 changed lines)

```diff
@@ -2,7 +2,7 @@
 
 <img src="docs/img/openvino-logo-purple-black.png" width="400px">
 
-[](https://github.com/openvinotoolkit/openvino/releases/tag/2022.1)
+[](https://github.com/openvinotoolkit/openvino/releases/tag/2022.2.0)
 [](LICENSE)
 
 
@@ -34,24 +34,24 @@ OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
 - Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud
 
 
-This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics.
+This open-source version includes several components: namely [Model Optimizer], [OpenVINO™ Runtime], [Post-Training Optimization Tool], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics.
-It supports pre-trained models from the [Open Model Zoo], along with 100+ open
+It supports pre-trained models from [Open Model Zoo], along with 100+ open
 source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, Kaldi.
 
 ### Components
 * [OpenVINO™ Runtime] - is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice.
-    * [core](https://github.com/openvinotoolkit/openvino/tree/master/src/core) - provides the base API for model representation and modification.
+    * [core](./src/core) - provides the base API for model representation and modification.
-    * [inference](https://github.com/openvinotoolkit/openvino/tree/master/src/inference) - provides an API to infer models on device.
+    * [inference](./src/inference) - provides an API to infer models on the device.
-    * [transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/transformations) - contains the set of common transformations which are used in OpenVINO plugins.
+    * [transformations](./src/common/transformations) - contains the set of common transformations which are used in OpenVINO plugins.
-    * [low precision transformations](https://github.com/openvinotoolkit/openvino/tree/master/src/common/low_precision_transformations) - contains the set of transformations which are used in low precision models
+    * [low precision transformations](./src/common/low_precision_transformations) - contains the set of transformations that are used in low precision models
-    * [bindings](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings) - contains all awailable OpenVINO bindings which are maintained by OpenVINO team.
+    * [bindings](./src/bindings) - contains all available OpenVINO bindings which are maintained by the OpenVINO team.
-        * [c](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/c) - provides C API for OpenVINO™ Runtime
+        * [c](./src/bindings/c) - C API for OpenVINO™ Runtime
-        * [python](https://github.com/openvinotoolkit/openvino/tree/master/src/bindings/python) - Python API for OpenVINO™ Runtime
+        * [python](./src/bindings/python) - Python API for OpenVINO™ Runtime
-* [Plugins](https://github.com/openvinotoolkit/openvino/tree/master/src/plugins) - contains OpenVINO plugins which are maintained in open-source by OpenVINO team. For more information please taje a look to the [list of supported devices](#supported-hardware-matrix).
+* [Plugins](./src/plugins) - contains OpenVINO plugins which are maintained in open-source by the OpenVINO team. For more information, take a look at the [list of supported devices](#supported-hardware-matrix).
-* [Frontends](https://github.com/openvinotoolkit/openvino/tree/master/src/frontends) - contains available OpenVINO frontends which allow to read model from native framework format.
+* [Frontends](./src/frontends) - contains available OpenVINO frontends that allow reading models from the native framework format.
 * [Model Optimizer] - is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
 * [Post-Training Optimization Tool] - is designed to accelerate the inference of deep learning models by applying special methods without model retraining or fine-tuning, for example, post-training 8-bit quantization.
-* [Samples] - applications on C, C++ and Python languages which shows basic use cases of OpenVINO usages.
+* [Samples] - applications in C, C++ and Python languages that show basic OpenVINO use cases.
 
 ## Supported Hardware matrix
 
@@ -69,37 +69,37 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
   <tbody>
   <tr>
     <td rowspan=2>CPU</td>
-    <td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
+    <td> <a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
-    <td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
+    <td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
     <td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
   </tr>
   <tr>
-    <td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
+    <td> <a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
     <td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
     <td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices
   </tr>
   <tr>
     <td>GPU</td>
-    <td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
+    <td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
-    <td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
+    <td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
     <td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
   </tr>
   <tr>
     <td>GNA</td>
-    <td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
+    <td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
-    <td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
+    <td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
     <td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
   </tr>
   <tr>
     <td>VPU</td>
-    <td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_VPU.html#doxid-openvino-docs-i-e-d-g-supported-plugins-v-p-u">Myriad plugin</a></td>
+    <td><a href="https://docs.openvino.ai/2022.2/openvino_docs_IE_DG_supported_plugins_VPU.html#doxid-openvino-docs-i-e-d-g-supported-plugins-v-p-u">Myriad plugin</a></td>
-    <td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/intel_myriad">openvino_intel_myriad_plugin</a></i></b></td>
+    <td><b><i><a href="./src/plugins/intel_myriad">openvino_intel_myriad_plugin</a></i></b></td>
     <td>Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X</td>
   </tr>
   </tbody>
 </table>
 
-Also OpenVINO™ Toolkit contains several plugins which should simplify to load model on several hardware devices:
+OpenVINO™ Toolkit also contains several plugins which simplify loading models on several hardware devices:
 <table>
 <thead>
 <tr>
@@ -110,23 +110,23 @@ Also OpenVINO™ Toolkit contains several plugins which should simplify to load
 </thead>
 <tbody>
   <tr>
-    <td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
+    <td><a href="https://docs.openvino.ai/2022.2/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
-    <td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
+    <td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
     <td>Auto plugin enables selecting Intel device for inference automatically</td>
   </tr>
   <tr>
-    <td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
+    <td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
-    <td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
+    <td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
     <td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
   </tr>
   <tr>
-    <td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
+    <td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
-    <td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
+    <td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
     <td>Heterogeneous execution enables automatic inference splitting between several devices</td>
   </tr>
   <tr>
-    <td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
+    <td><a href="https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
-    <td><b><i><a href="https://github.com/openvinotoolkit/openvino/tree/master/src/plugins/auto">openvino_auto_plugin</a></i></b></td>
+    <td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
     <td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
   </tr>
 </tbody>
@@ -140,11 +140,11 @@ By contributing to the project, you agree to the license and copyright terms the
 
 ### User documentation
 
-The latest documentation for OpenVINO™ Toolkit is availabe [here](https://docs.openvino.ai/). This documentation contains detailed information about all OpenVINO components and provides all important information which could be needed if you create an application which is based on binary OpenVINO distribution or own OpenVINO version without source code modification.
+The latest documentation for OpenVINO™ Toolkit is available [here](https://docs.openvino.ai/). This documentation contains detailed information about all OpenVINO components and provides all the important information you may need to create an application based on binary OpenVINO distribution or own OpenVINO version without source code modification.
 
 ### Developer documentation
 
-[Developer documentation](#todo-add) contains information about architectural decisions which are applied inside the OpenVINO components. This documentation has all necessary information which could be needed in order to contribute to OpenVINO.
+[Developer documentation](./docs/dev/index.md) contains information about architectural decisions which are applied inside the OpenVINO components. This documentation has all necessary information which could be needed in order to contribute to OpenVINO.
 
 ## Tutorials
 
@@ -161,15 +161,15 @@ The list of OpenVINO tutorials:
 
 ## System requirements
 
-The full information about system requirements depends on platform and available in section `System requirement` on dedicated pages:
+The system requirements vary depending on platform and are available on dedicated pages:
-- [Linux](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html)
+- [Linux](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_linux_header.html)
-- [Windows](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_windows.html)
+- [Windows](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_windows_header.html)
-- [macOS](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_macos.html)
+- [macOS](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_macos_header.html)
-- [Raspbian](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_raspbian.html)
+- [Raspbian](https://docs.openvino.ai/2022.2/openvino_docs_install_guides_installing_openvino_raspbian.html)
 
 ## How to build
 
-Please take a look to [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about OpenVINO build process.
+See the [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about the OpenVINO build process.
 
 ## How to contribute
 
@@ -177,13 +177,13 @@ See [CONTRIBUTING](./CONTRIBUTING.md) for details. Thank you!
 
 ## Get a support
 
-Please report questions, issues and suggestions using:
+Report questions, issues and suggestions, using:
 
 * [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
 * The [`openvino`](https://stackoverflow.com/questions/tagged/openvino) tag on StackOverflow\*
 * [Forum](https://software.intel.com/en-us/forums/computer-vision)
 
-## See also
+## Additional Resources
 
 * [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki)
 * [OpenVINO Storage](https://storage.openvinotoolkit.org/)
@@ -194,15 +194,15 @@ Please report questions, issues and suggestions using:
 * [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
 * [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
 * [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
-* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - An alternative, web-based version of OpenVINO designed to make production of pretrained deep learning models significantly easier.
+* [DL Workbench](https://docs.openvino.ai/2022.2/workbench_docs_Workbench_DG_Introduction.html) - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models.
-* [Computer Vision Annotation Tool (CVAT)](https://github.com/openvinotoolkit/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
+* [Computer Vision Annotation Tool (CVAT)](https://github.com/opencv/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
 * [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.
 
 ---
 \* Other names and brands may be claimed as the property of others.
 
 [Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
-[OpenVINO™ Runtime]:https://docs.openvino.ai/latest/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
+[OpenVINO™ Runtime]:https://docs.openvino.ai/2022.2/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
-[Model Optimizer]:https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
+[Model Optimizer]:https://docs.openvino.ai/2022.2/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
-[Post-Training Optimization Tool]:https://docs.openvino.ai/latest/pot_introduction.html
+[Post-Training Optimization Tool]:https://docs.openvino.ai/2022.2/pot_introduction.html
 [Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples
```
```diff
@@ -84,6 +84,11 @@ ie_coverage_extract(INPUT "openvino" OUTPUT "core"
 ie_coverage_genhtml(INFO_FILE "core"
                     PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
 
+ie_coverage_extract(INPUT "openvino" OUTPUT "openvino_all"
+                    PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/*" "${OV_COVERAGE_BASE_DIRECTORY}/docs/template_plugin/*")
+ie_coverage_genhtml(INFO_FILE "openvino_all"
+                    PREFIX "${OV_COVERAGE_BASE_DIRECTORY}")
+
 if(ENABLE_OV_ONNX_FRONTEND)
   ie_coverage_extract(INPUT "openvino" OUTPUT "onnx"
                       PATTERNS "${OV_COVERAGE_BASE_DIRECTORY}/src/frontends/onnx/*"
```
```diff
@@ -151,6 +151,9 @@ function(ov_download_tbb)
   if(EXISTS "${TBBROOT}/lib/cmake/TBB/TBBConfig.cmake")
     # oneTBB case
     update_deps_cache(TBB_DIR "${TBBROOT}/lib/cmake/TBB" "Path to TBB cmake folder")
+  elseif(EXISTS "${TBBROOT}/lib/cmake/tbb/TBBConfig.cmake")
+    # oneTBB release package version less than 2021.6.0
+    update_deps_cache(TBB_DIR "${TBBROOT}/lib/cmake/tbb" "Path to TBB cmake folder")
   elseif(EXISTS "${TBBROOT}/lib64/cmake/TBB/TBBConfig.cmake")
     # 64-bits oneTBB case
     update_deps_cache(TBB_DIR "${TBBROOT}/lib64/cmake/TBB" "Path to TBB cmake folder")
```
@@ -28,7 +28,6 @@ if(ENABLE_CLANG_FORMAT AND NOT TARGET clang_format_check_all)
|
|||||||
add_custom_target(clang_format_fix_all)
|
add_custom_target(clang_format_fix_all)
|
||||||
set_target_properties(clang_format_check_all clang_format_fix_all
|
set_target_properties(clang_format_check_all clang_format_fix_all
|
||||||
PROPERTIES FOLDER clang_format)
|
PROPERTIES FOLDER clang_format)
|
||||||
set(CLANG_FORMAT_ALL_OUTPUT_FILES "" CACHE INTERNAL "All clang-format output files")
|
|
||||||
endif()
|
endif()
|
||||||
|
|
||||||
function(add_clang_format_target TARGET_NAME)
|
function(add_clang_format_target TARGET_NAME)
|
||||||
@@ -88,14 +87,10 @@ function(add_clang_format_target TARGET_NAME)
|
|||||||
"[clang-format] ${source_file}"
|
"[clang-format] ${source_file}"
|
||||||
VERBATIM)
|
VERBATIM)
|
||||||
|
|
||||||
|
list(APPEND all_input_sources "${source_file}")
|
||||||
list(APPEND all_output_files "${output_file}")
|
list(APPEND all_output_files "${output_file}")
|
||||||
endforeach()
|
endforeach()
|
||||||
|
|
||||||
set(CLANG_FORMAT_ALL_OUTPUT_FILES
|
|
||||||
${CLANG_FORMAT_ALL_OUTPUT_FILES} ${all_output_files}
|
|
||||||
CACHE INTERNAL
|
|
||||||
"All clang-format output files")
|
|
||||||
|
|
||||||
add_custom_target(${TARGET_NAME}
|
add_custom_target(${TARGET_NAME}
|
||||||
DEPENDS ${all_output_files}
|
DEPENDS ${all_output_files}
|
||||||
COMMENT "[clang-format] ${TARGET_NAME}")
|
COMMENT "[clang-format] ${TARGET_NAME}")
|
||||||
@@ -104,11 +99,11 @@ function(add_clang_format_target TARGET_NAME)
|
|||||||
COMMAND
|
COMMAND
|
||||||
"${CMAKE_COMMAND}"
|
"${CMAKE_COMMAND}"
|
||||||
-D "CLANG_FORMAT=${CLANG_FORMAT}"
|
-D "CLANG_FORMAT=${CLANG_FORMAT}"
|
||||||
-D "INPUT_FILES=${CLANG_FORMAT_FOR_SOURCES}"
|
-D "INPUT_FILES=${all_input_sources}"
|
||||||
-D "EXCLUDE_PATTERNS=${CLANG_FORMAT_EXCLUDE_PATTERNS}"
|
-D "EXCLUDE_PATTERNS=${CLANG_FORMAT_EXCLUDE_PATTERNS}"
|
||||||
-P "${IEDevScripts_DIR}/clang_format/clang_format_fix.cmake"
|
-P "${IEDevScripts_DIR}/clang_format/clang_format_fix.cmake"
|
||||||
DEPENDS
|
DEPENDS
|
||||||
"${CLANG_FORMAT_FOR_SOURCES}"
|
"${all_input_sources}"
|
||||||
"${IEDevScripts_DIR}/clang_format/clang_format_fix.cmake"
|
"${IEDevScripts_DIR}/clang_format/clang_format_fix.cmake"
|
||||||
COMMENT
|
COMMENT
|
||||||
"[clang-format] ${TARGET_NAME}_fix"
|
"[clang-format] ${TARGET_NAME}_fix"
|
||||||
|
|||||||
@@ -9,26 +9,46 @@ endif()
|
|||||||
set(ncc_style_dir "${IEDevScripts_DIR}/ncc_naming_style")
|
set(ncc_style_dir "${IEDevScripts_DIR}/ncc_naming_style")
|
||||||
set(ncc_style_bin_dir "${CMAKE_CURRENT_BINARY_DIR}/ncc_naming_style")
|
set(ncc_style_bin_dir "${CMAKE_CURRENT_BINARY_DIR}/ncc_naming_style")
|
||||||
|
|
||||||
# try to find_package(Clang QUIET)
|
# find python3
|
||||||
# ClangConfig.cmake contains bug that if libclang-XX-dev is not
|
|
||||||
# installed, then find_package fails with errors even in QUIET mode
|
|
||||||
configure_file("${ncc_style_dir}/try_find_clang.cmake"
|
|
||||||
"${ncc_style_bin_dir}/source/CMakeLists.txt" COPYONLY)
|
|
||||||
execute_process(
|
|
||||||
COMMAND
|
|
||||||
"${CMAKE_COMMAND}" -S "${ncc_style_bin_dir}/source"
|
|
||||||
-B "${ncc_style_bin_dir}/build"
|
|
||||||
RESULT_VARIABLE clang_find_result
|
|
||||||
OUTPUT_VARIABLE output_var
|
|
||||||
ERROR_VARIABLE error_var)
|
|
||||||
|
|
||||||
if(NOT clang_find_result EQUAL "0")
|
find_package(PythonInterp 3 QUIET)
|
||||||
message(WARNING "Please, install clang-[N] libclang-[N]-dev package (required for ncc naming style check)")
|
if(NOT PYTHONINTERP_FOUND)
|
||||||
message(WARNING "find_package(Clang) output: ${output_var}")
|
message(WARNING "Python3 interpreter was not found (required for ncc naming style check)")
|
||||||
message(WARNING "find_package(Clang) error: ${error_var}")
|
|
||||||
set(ENABLE_NCC_STYLE OFF)
|
set(ENABLE_NCC_STYLE OFF)
|
||||||
endif()
|
endif()
|
||||||
|
|
||||||
|
if(PYTHON_VERSION_MINOR EQUAL 6)
|
||||||
|
set(clang_version 10)
|
||||||
|
elseif(PYTHON_VERSION_MINOR EQUAL 8)
|
||||||
|
set(clang_version 12)
|
||||||
|
elseif(PYTHON_VERSION_MINOR EQUAL 9)
|
||||||
|
set(clang_version 12)
|
||||||
|
elseif(PYTHON_VERSION_MINOR EQUAL 10)
|
||||||
|
set(clang_version 14)
|
||||||
|
endif()
|
||||||
|
|
||||||
|
|
||||||
|
if(ENABLE_NCC_STYLE)
|
||||||
|
# try to find_package(Clang QUIET)
|
||||||
|
# ClangConfig.cmake contains bug that if libclang-XX-dev is not
|
||||||
|
# installed, then find_package fails with errors even in QUIET mode
|
||||||
|
configure_file("${ncc_style_dir}/try_find_clang.cmake"
|
||||||
|
"${ncc_style_bin_dir}/source/CMakeLists.txt" COPYONLY)
|
||||||
|
execute_process(
|
||||||
|
COMMAND "${CMAKE_COMMAND}" -S "${ncc_style_bin_dir}/source"
|
||||||
|
-B "${ncc_style_bin_dir}/build"
|
||||||
|
RESULT_VARIABLE clang_find_result
|
||||||
|
OUTPUT_VARIABLE output_var
|
||||||
|
ERROR_VARIABLE error_var)
|
||||||
|
|
||||||
|
if(NOT clang_find_result EQUAL "0")
|
||||||
|
message(WARNING "Please, install `apt-get install clang-${clang_version} libclang-${clang_version}-dev` package (required for ncc naming style check)")
|
||||||
|
message(TRACE "find_package(Clang) output: ${output_var}")
|
||||||
|
message(TRACE "find_package(Clang) error: ${error_var}")
|
||||||
|
set(ENABLE_NCC_STYLE OFF)
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
# Since we were able to find_package(Clang) in a separate process
|
# Since we were able to find_package(Clang) in a separate process
|
||||||
# let's try to find in current process
|
# let's try to find in current process
|
||||||
if(ENABLE_NCC_STYLE)
|
if(ENABLE_NCC_STYLE)
|
||||||
@@ -37,19 +57,11 @@ if(ENABLE_NCC_STYLE)
|
|||||||
get_target_property(libclang_location libclang LOCATION)
|
get_target_property(libclang_location libclang LOCATION)
|
||||||
message(STATUS "Found libclang: ${libclang_location}")
|
message(STATUS "Found libclang: ${libclang_location}")
|
||||||
else()
|
else()
|
||||||
message(WARNING "libclang is not found (required for ncc naming style check)")
|
message(WARNING "libclang-${clang_version} is not found (required for ncc naming style check)")
|
||||||
set(ENABLE_NCC_STYLE OFF)
|
set(ENABLE_NCC_STYLE OFF)
|
||||||
endif()
|
endif()
|
||||||
endif()
|
endif()
|
||||||
|
|
||||||
# find python3
|
|
||||||
|
|
||||||
find_package(PythonInterp 3 QUIET)
|
|
||||||
if(NOT PYTHONINTERP_FOUND)
|
|
||||||
message(WARNING "Python3 interpreter was not found (required for ncc naming style check)")
|
|
||||||
set(ENABLE_NCC_STYLE OFF)
|
|
||||||
endif()
|
|
||||||
|
|
||||||
# check python requirements_dev.txt
|
# check python requirements_dev.txt
|
||||||
|
|
||||||
set(ncc_script_py "${ncc_style_dir}/ncc/ncc.py")
|
set(ncc_script_py "${ncc_style_dir}/ncc/ncc.py")
|
||||||
|
|||||||
@@ -1,2 +1,5 @@
|
|||||||
clang==11.0
|
clang==10.0.1; python_version == '3.6'
|
||||||
|
clang==12.0.1; python_version == '3.8'
|
||||||
|
clang==12.0.1; python_version == '3.9'
|
||||||
|
clang==14.0; python_version == '3.10'
|
||||||
pyyaml
|
pyyaml
|
||||||
@@ -6,6 +6,17 @@ include(CMakeParseArguments)
|
|||||||
|
|
||||||
find_host_program(shellcheck_PROGRAM NAMES shellcheck DOC "Path to shellcheck tool")
|
find_host_program(shellcheck_PROGRAM NAMES shellcheck DOC "Path to shellcheck tool")
|
||||||
|
|
||||||
|
if(shellcheck_PROGRAM)
|
||||||
|
execute_process(COMMAND "${shellcheck_PROGRAM}" --version
|
||||||
|
RESULT_VARIABLE shellcheck_EXIT_CODE
|
||||||
|
OUTPUT_VARIABLE shellcheck_VERSION_STRING)
|
||||||
|
if(shellcheck_EXIT_CODE EQUAL 0)
|
||||||
|
if(shellcheck_VERSION_STRING MATCHES "version: ([0-9]+)\.([0-9]+).([0-9]+)")
|
||||||
|
set(shellcheck_VERSION "${CMAKE_MATCH_1}.${CMAKE_MATCH_2}.${CMAKE_MATCH_3}" CACHE STRING "shellcheck version")
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
endif()
|
||||||
|
|
||||||
function(ie_shellcheck_process)
|
function(ie_shellcheck_process)
|
||||||
if(NOT shellcheck_PROGRAM)
|
if(NOT shellcheck_PROGRAM)
|
||||||
message(WARNING "shellcheck tool is not found")
|
message(WARNING "shellcheck tool is not found")
|
||||||
@@ -33,7 +44,7 @@ function(ie_shellcheck_process)
|
|||||||
set(output_file "${output_file}.txt")
|
set(output_file "${output_file}.txt")
|
||||||
get_filename_component(script_name "${script}" NAME)
|
get_filename_component(script_name "${script}" NAME)
|
||||||
|
|
||||||
add_custom_command(OUTPUT ${output_file}
|
add_custom_command(OUTPUT ${output_file}
|
||||||
COMMAND ${CMAKE_COMMAND}
|
COMMAND ${CMAKE_COMMAND}
|
||||||
-D IE_SHELLCHECK_PROGRAM=${shellcheck_PROGRAM}
|
-D IE_SHELLCHECK_PROGRAM=${shellcheck_PROGRAM}
|
||||||
-D IE_SHELL_SCRIPT=${script}
|
-D IE_SHELL_SCRIPT=${script}
|
||||||
|
|||||||
@@ -19,7 +19,7 @@ function (commitHash VAR)
|
|||||||
message(FATAL_ERROR "repo_root is not defined")
|
message(FATAL_ERROR "repo_root is not defined")
|
||||||
endif()
|
endif()
|
||||||
execute_process(
|
execute_process(
|
||||||
COMMAND git rev-parse HEAD
|
COMMAND git rev-parse --short=11 HEAD
|
||||||
WORKING_DIRECTORY ${repo_root}
|
WORKING_DIRECTORY ${repo_root}
|
||||||
OUTPUT_VARIABLE GIT_COMMIT_HASH
|
OUTPUT_VARIABLE GIT_COMMIT_HASH
|
||||||
OUTPUT_STRIP_TRAILING_WHITESPACE)
|
OUTPUT_STRIP_TRAILING_WHITESPACE)
|
||||||
@@ -28,13 +28,19 @@ endfunction()
|
|||||||
|
|
||||||
macro(ov_parse_ci_build_number)
|
macro(ov_parse_ci_build_number)
|
||||||
set(OpenVINO_VERSION_BUILD 000)
|
set(OpenVINO_VERSION_BUILD 000)
|
||||||
set(IE_VERSION_BUILD ${OpenVINO_VERSION_BUILD})
|
|
||||||
|
|
||||||
if(CI_BUILD_NUMBER MATCHES "^([0-9]+)\.([0-9]+)\.([0-9]+)\-([0-9]+)\-.*")
|
if(CI_BUILD_NUMBER MATCHES "^([0-9]+)\.([0-9]+)\.([0-9]+)\-([0-9]+)\-.*")
|
||||||
set(OpenVINO_VERSION_MAJOR ${CMAKE_MATCH_1})
|
set(OpenVINO_VERSION_MAJOR ${CMAKE_MATCH_1})
|
||||||
set(OpenVINO_VERSION_MINOR ${CMAKE_MATCH_2})
|
set(OpenVINO_VERSION_MINOR ${CMAKE_MATCH_2})
|
||||||
set(OpenVINO_VERSION_PATCH ${CMAKE_MATCH_3})
|
set(OpenVINO_VERSION_PATCH ${CMAKE_MATCH_3})
|
||||||
set(OpenVINO_VERSION_BUILD ${CMAKE_MATCH_4})
|
set(OpenVINO_VERSION_BUILD ${CMAKE_MATCH_4})
|
||||||
|
set(the_whole_version_is_defined_by_ci ON)
|
||||||
|
elseif(CI_BUILD_NUMBER MATCHES "^[0-9]+$")
|
||||||
|
set(OpenVINO_VERSION_BUILD ${CI_BUILD_NUMBER})
|
||||||
|
# only build number is defined by CI
|
||||||
|
set(the_whole_version_is_defined_by_ci OFF)
|
||||||
|
elseif(CI_BUILD_NUMBER)
|
||||||
|
message(FATAL_ERROR "Failed to parse CI_BUILD_NUMBER which is ${CI_BUILD_NUMBER}")
|
||||||
endif()
|
endif()
|
||||||
|
|
||||||
if(NOT DEFINED repo_root)
|
if(NOT DEFINED repo_root)
|
||||||
@@ -95,21 +101,33 @@ macro(ov_parse_ci_build_number)
|
|||||||
|
|
||||||
set(OpenVINO_VERSION "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}")
|
set(OpenVINO_VERSION "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}")
|
||||||
message(STATUS "OpenVINO version is ${OpenVINO_VERSION} (Build ${OpenVINO_VERSION_BUILD})")
|
message(STATUS "OpenVINO version is ${OpenVINO_VERSION} (Build ${OpenVINO_VERSION_BUILD})")
|
||||||
|
|
||||||
|
if(NOT the_whole_version_is_defined_by_ci)
|
||||||
|
# create CI_BUILD_NUMBER
|
||||||
|
|
||||||
|
branchName(GIT_BRANCH)
|
||||||
|
commitHash(GIT_COMMIT_HASH)
|
||||||
|
|
||||||
|
if(NOT GIT_BRANCH STREQUAL "master")
|
||||||
|
set(GIT_BRANCH_POSTFIX "-${GIT_BRANCH}")
|
||||||
|
endif()
|
||||||
|
|
||||||
|
set(CI_BUILD_NUMBER "${OpenVINO_VERSION}-${OpenVINO_VERSION_BUILD}-${GIT_COMMIT_HASH}${GIT_BRANCH_POSTFIX}")
|
||||||
|
|
||||||
|
unset(GIT_BRANCH_POSTFIX)
|
||||||
|
unset(GIT_BRANCH)
|
||||||
|
unset(GIT_COMMIT_HASH)
|
||||||
|
else()
|
||||||
|
unset(the_whole_version_is_defined_by_ci)
|
||||||
|
endif()
|
||||||
endmacro()
|
endmacro()
|
||||||
|
|
||||||
if (DEFINED ENV{CI_BUILD_NUMBER})
|
|
||||||
set(CI_BUILD_NUMBER $ENV{CI_BUILD_NUMBER})
|
|
||||||
else()
|
|
||||||
branchName(GIT_BRANCH)
|
|
||||||
commitHash(GIT_COMMIT_HASH)
|
|
||||||
|
|
||||||
set(custom_build "custom_${GIT_BRANCH}_${GIT_COMMIT_HASH}")
|
|
||||||
set(CI_BUILD_NUMBER "${custom_build}")
|
|
||||||
endif()
|
|
||||||
|
|
||||||
# provides OpenVINO version
|
# provides OpenVINO version
|
||||||
# 1. If CI_BUILD_NUMBER is defined, parses this information
|
# 1. If CI_BUILD_NUMBER is defined, parses this information
|
||||||
# 2. Otherwise, parses openvino/core/version.hpp
|
# 2. Otherwise, parses openvino/core/version.hpp
|
||||||
|
if (DEFINED ENV{CI_BUILD_NUMBER})
|
||||||
|
set(CI_BUILD_NUMBER $ENV{CI_BUILD_NUMBER})
|
||||||
|
endif()
|
||||||
ov_parse_ci_build_number()
|
ov_parse_ci_build_number()
|
||||||
|
|
||||||
macro (addVersionDefines FILE)
|
macro (addVersionDefines FILE)
|
||||||
|
|||||||
@@ -126,7 +126,7 @@ ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS
|
|||||||
|
|
||||||
ie_dependent_option (ENABLE_SAMPLES "console samples are part of inference engine package" ON "NOT MINGW" OFF)
|
ie_dependent_option (ENABLE_SAMPLES "console samples are part of inference engine package" ON "NOT MINGW" OFF)
|
||||||
|
|
||||||
ie_option (ENABLE_OPENCV "enables OpenCV" ON)
|
ie_option (ENABLE_OPENCV "enables OpenCV" OFF)
|
||||||
|
|
||||||
ie_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF)
|
ie_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF)
|
||||||
|
|
||||||
@@ -136,16 +136,7 @@ ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are link
|
|||||||
|
|
||||||
ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" OFF "BUILD_SHARED_LIBS" OFF)
|
ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" OFF "BUILD_SHARED_LIBS" OFF)
|
||||||
|
|
||||||
get_linux_name(LINUX_OS_NAME)
|
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" OFF "THREADING MATCHES TBB;LINUX" OFF)
|
||||||
if(LINUX_OS_NAME MATCHES "^Ubuntu [0-9]+\.[0-9]+$" AND NOT DEFINED ENV{TBBROOT})
|
|
||||||
# Debian packages are enabled on Ubuntu systems
|
|
||||||
# so, system TBB can be tried for usage
|
|
||||||
set(ENABLE_SYSTEM_TBB_DEFAULT ON)
|
|
||||||
else()
|
|
||||||
set(ENABLE_SYSTEM_TBB_DEFAULT OFF)
|
|
||||||
endif()
|
|
||||||
|
|
||||||
ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" ${ENABLE_SYSTEM_TBB_DEFAULT} "THREADING MATCHES TBB;LINUX" OFF)
|
|
||||||
|
|
||||||
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
|
ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
|
||||||
|
|
||||||
|
|||||||
@@ -150,13 +150,23 @@ if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND
|
|||||||
set(enable_system_tbb "@ENABLE_SYSTEM_TBB@")
|
set(enable_system_tbb "@ENABLE_SYSTEM_TBB@")
|
||||||
if(NOT enable_system_tbb)
|
if(NOT enable_system_tbb)
|
||||||
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
|
set_and_check(_tbb_dir "@PACKAGE_IE_TBB_DIR@")
|
||||||
|
|
||||||
|
# see https://stackoverflow.com/questions/28070810/cmake-generate-error-on-windows-as-it-uses-as-escape-seq
|
||||||
|
if(DEFINED ENV{TBBROOT})
|
||||||
|
file(TO_CMAKE_PATH $ENV{TBBROOT} ENV_TBBROOT)
|
||||||
|
endif()
|
||||||
|
if(DEFINED ENV{TBB_DIR})
|
||||||
|
file(TO_CMAKE_PATH $ENV{TBB_DIR} ENV_TBB_DIR)
|
||||||
|
endif()
|
||||||
|
|
||||||
set(find_package_tbb_extra_args
|
set(find_package_tbb_extra_args
|
||||||
CONFIG
|
CONFIG
|
||||||
PATHS
|
PATHS
|
||||||
# oneTBB case exposed via export TBBROOT=<custom TBB root>
|
# oneTBB case exposed via export TBBROOT=<custom TBB root>
|
||||||
"$ENV{TBBROOT}/lib64/cmake/TBB"
|
"${ENV_TBBROOT}/lib64/cmake/TBB"
|
||||||
"$ENV{TBBROOT}/lib/cmake/TBB"
|
"${ENV_TBBROOT}/lib/cmake/TBB"
|
||||||
# "$ENV{TBB_DIR}"
|
"${ENV_TBBROOT}/lib/cmake/tbb"
|
||||||
|
"${ENV_TBB_DIR}"
|
||||||
# for custom TBB exposed via cmake -DTBBROOT=<custom TBB root>
|
# for custom TBB exposed via cmake -DTBBROOT=<custom TBB root>
|
||||||
"${TBBROOT}/cmake"
|
"${TBBROOT}/cmake"
|
||||||
# _tbb_dir points to TBB_DIR (custom | temp | system) used to build OpenVINO
|
# _tbb_dir points to TBB_DIR (custom | temp | system) used to build OpenVINO
|
||||||
|
|||||||
@@ -2,7 +2,10 @@
|
|||||||
|
|
||||||
|
|
||||||
Once you have a model that meets the requirements of both OpenVINO™ and your project, you can choose among several ways of deploying it with your application:
|
Once you have a model that meets the requirements of both OpenVINO™ and your project, you can choose among several ways of deploying it with your application:
|
||||||
* [Run inference and develop your app with OpenVINO™ Runtime](../OV_Runtime_UG/openvino_intro.md).
|
|
||||||
* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
|
|
||||||
* [Deploy your model online with the OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).
|
|
||||||
|
|
||||||
|
* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
|
||||||
|
* [Deploy your model with OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).
|
||||||
|
* [Deploy your application for the TensorFlow framework with OpenVINO Integration](./openvino_ecosystem_ovtf.md).
|
||||||
|
|
||||||
|
|
||||||
|
> **NOTE**: [Running inference in OpenVINO Runtime](../OV_Runtime_UG/openvino_intro.md) is the most basic form of deployment. Before moving forward, make sure you know how to create a proper inference configuration.
|
||||||
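A minimal C++ sketch of that most basic deployment path, local inference with OpenVINO Runtime; the model path and tensor handling are illustrative placeholders, not part of the original article:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // read a model; an ONNX or PaddlePaddle file can be passed here as well
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");
    // compile it for one device; see the inference modes overview for alternatives
    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    ov::InferRequest request = compiled.create_infer_request();
    ov::Tensor input = request.get_input_tensor();
    // ... fill input.data() with application data ...
    request.infer();
    ov::Tensor output = request.get_output_tensor();
    // ... read results from output.data() ...
    return 0;
}
```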
@@ -13,99 +13,3 @@
|
|||||||
|
|
||||||
@endsphinxdirective
|
@endsphinxdirective
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
Deep Learning Workbench (DL Workbench) is an official OpenVINO™ graphical interface designed to make the production of pretrained deep learning Computer Vision and Natural Language Processing models significantly easier.
|
|
||||||
|
|
||||||
Minimize the inference-to-deployment workflow timing for neural models right in your browser: import a model, analyze its performance and accuracy, visualize the outputs, optimize and make the final model deployment-ready in a matter of minutes. DL Workbench takes you through the full OpenVINO™ workflow, providing the opportunity to learn about various toolkit components.
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
|
|
||||||
@sphinxdirective
|
|
||||||
|
|
||||||
.. link-button:: workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud
|
|
||||||
:type: ref
|
|
||||||
:text: Run DL Workbench in Intel® DevCloud
|
|
||||||
:classes: btn-primary btn-block
|
|
||||||
|
|
||||||
@endsphinxdirective
|
|
||||||
|
|
||||||
DL Workbench enables you to get a detailed performance assessment, explore inference configurations, and obtain an optimized model ready to be deployed on various Intel® configurations, such as client and server CPU, Intel® Processor Graphics (GPU), Intel® Movidius™ Neural Compute Stick 2 (NCS 2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
|
|
||||||
|
|
||||||
DL Workbench also provides the [JupyterLab environment](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Jupyter_Notebooks.html#doxid-workbench-docs-workbench-d-g-jupyter-notebooks) that helps you quick start with OpenVINO™ API and command-line interface (CLI). Follow the full OpenVINO workflow created for your model and learn about different toolkit components.
|
|
||||||
|
|
||||||
|
|
||||||
## Video
|
|
||||||
|
|
||||||
@sphinxdirective
|
|
||||||
|
|
||||||
.. list-table::
|
|
||||||
|
|
||||||
* - .. raw:: html
|
|
||||||
|
|
||||||
<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="560"
|
|
||||||
src="https://www.youtube.com/embed/on8xSSTKCt8">
|
|
||||||
</iframe>
|
|
||||||
* - **DL Workbench Introduction**. Duration: 1:31
|
|
||||||
|
|
||||||
@endsphinxdirective
|
|
||||||
|
|
||||||
|
|
||||||
## User Goals
|
|
||||||
|
|
||||||
DL Workbench helps achieve your goals depending on the stage of your deep learning journey.
|
|
||||||
|
|
||||||
If you are a beginner in the deep learning field, the DL Workbench provides you with
|
|
||||||
learning opportunities:
|
|
||||||
* Learn what neural networks are, how they work, and how to examine their architectures.
|
|
||||||
* Learn the basics of neural network analysis and optimization before production.
|
|
||||||
* Get familiar with the OpenVINO™ ecosystem and its main components without installing it on your system.
|
|
||||||
|
|
||||||
If you have enough experience with neural networks, DL Workbench provides you with a
|
|
||||||
convenient web interface to optimize your model and prepare it for production:
|
|
||||||
* Measure and interpret model performance.
|
|
||||||
* Tune the model for enhanced performance.
|
|
||||||
* Analyze the quality of your model and visualize output.
|
|
||||||
|
|
||||||
## General Workflow
|
|
||||||
|
|
||||||
The diagram below illustrates the typical DL Workbench workflow. Click to see the full-size image:
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
Get a quick overview of the workflow in the DL Workbench User Interface:
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
## OpenVINO™ Toolkit Components
|
|
||||||
|
|
||||||
The intuitive web-based interface of the DL Workbench enables you to easily use various
|
|
||||||
OpenVINO™ toolkit components:
|
|
||||||
|
|
||||||
Component | Description
|
|
||||||
|------------------|------------------|
|
|
||||||
| [Open Model Zoo](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| Get access to the collection of high-quality pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [Intel-trained](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html) models trained to resolve a variety of different tasks.
|
|
||||||
| [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) |Optimize and transform models trained in supported frameworks to the IR format. <br>Supported frameworks include TensorFlow\*, Caffe\*, Kaldi\*, MXNet\*, and ONNX\* format.
|
|
||||||
| [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| Estimate deep learning model inference performance on supported devices.
|
|
||||||
| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html)| Evaluate the accuracy of a model by collecting one or several metric values.
|
|
||||||
| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| Optimize pretrained models with lowering the precision of a model from floating-point precision(FP32 or FP16) to integer precision (INT8), without the need to retrain or fine-tune models. |
|
|
||||||
|
|
||||||
|
|
||||||
@sphinxdirective
|
|
||||||
|
|
||||||
.. link-button:: workbench_docs_Workbench_DG_Start_DL_Workbench_in_DevCloud
|
|
||||||
:type: ref
|
|
||||||
:text: Run DL Workbench in Intel® DevCloud
|
|
||||||
:classes: btn-outline-primary
|
|
||||||
|
|
||||||
@endsphinxdirective
|
|
||||||
|
|
||||||
## Contact Us
|
|
||||||
|
|
||||||
* [DL Workbench GitHub Repository](https://github.com/openvinotoolkit/workbench)
|
|
||||||
|
|
||||||
* [DL Workbench on Intel Community Forum](https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit)
|
|
||||||
|
|
||||||
* [DL Workbench Gitter Chat](https://gitter.im/dl-workbench/general?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&content=body)
|
|
||||||
docs/Documentation/inference_modes_overview.md (new file, 24 lines)
@@ -0,0 +1,24 @@
|
|||||||
|
# Inference Modes {#openvino_docs_Runtime_Inference_Modes_Overview}
|
||||||
|
|
||||||
|
@sphinxdirective
|
||||||
|
|
||||||
|
.. toctree::
|
||||||
|
:maxdepth: 1
|
||||||
|
:hidden:
|
||||||
|
|
||||||
|
openvino_docs_OV_UG_supported_plugins_AUTO
|
||||||
|
openvino_docs_OV_UG_Running_on_multiple_devices
|
||||||
|
openvino_docs_OV_UG_Hetero_execution
|
||||||
|
openvino_docs_OV_UG_Automatic_Batching
|
||||||
|
|
||||||
|
@endsphinxdirective
|
||||||
|
|
||||||
|
OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the [guide on inference devices](../OV_Runtime_UG/supported_plugins/Device_Plugins.md).
|
||||||
|
|
||||||
|
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes, illustrated in the code sketch after this list, are:
|
||||||
|
|
||||||
|
* [Automatic Device Selection (AUTO)](../OV_Runtime_UG/auto_device_selection.md)
|
||||||
|
* [Multi-Device Execution (MULTI)](../OV_Runtime_UG/multi_device.md)
|
||||||
|
* [Heterogeneous Execution (HETERO)](../OV_Runtime_UG/hetero_execution.md)
|
||||||
|
* [Automatic Batching Execution (Auto-batching)](../OV_Runtime_UG/automatic_batching.md)
|
||||||
|
|
||||||
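A minimal sketch of selecting these modes through `ov::Core::compile_model` device strings; the model file name is a placeholder:

```cpp
#include <openvino/openvino.hpp>

void select_inference_mode() {
    ov::Core core;
    auto model = core.read_model("model.xml");

    // single-device mode: one explicit device runs the entire workload
    auto on_cpu    = core.compile_model(model, "CPU");
    // Automatic Device Selection (AUTO)
    auto on_auto   = core.compile_model(model, "AUTO");
    // Multi-Device Execution (MULTI) across the listed devices
    auto on_multi  = core.compile_model(model, "MULTI:CPU,GPU");
    // Heterogeneous Execution (HETERO): GPU first, CPU as fallback
    auto on_hetero = core.compile_model(model, "HETERO:GPU,CPU");
    // Automatic Batching on top of a single device
    auto on_batch  = core.compile_model(model, "BATCH:GPU");
}
```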
@@ -2,11 +2,21 @@
|
|||||||
|
|
||||||
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
|
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
|
||||||
|
|
||||||
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
|
[OpenVINO™ supports several model formats](../MO_DG/prepare_model/convert_model/supported_model_formats.md) and allows converting them to its own format, OpenVINO IR, providing a tool dedicated to this task.
|
||||||
* [Browse a database of models for use in your projects](../model_zoo.md).
|
|
||||||
|
[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by [altering input shapes](../MO_DG/prepare_model/convert_model/Converting_Model.md), [embedding preprocessing](../MO_DG/prepare_model/Additional_Optimizations.md) and [cutting training parts off](../MO_DG/prepare_model/convert_model/Cutting_Model.md).
|
||||||
|
|
||||||
|
The approach of fully converting a model is considered the default choice, as it gives access to the full range of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
|
||||||
|
|
||||||
|
Conversion is not required for ONNX and PaddlePaddle models, as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
|
||||||
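As a quick illustration of this direct path, a short C++ sketch with hypothetical file names:

```cpp
#include <openvino/openvino.hpp>

void read_models_directly() {
    ov::Core core;
    // ONNX and PaddlePaddle models are imported without an offline conversion step
    auto onnx_model   = core.read_model("model.onnx");
    auto paddle_model = core.read_model("model.pdmodel");
    // OpenVINO IR produced by Model Optimizer is read the same way
    auto ir_model     = core.read_model("model.xml");
}
```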
|
|
||||||
|
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
|
||||||
|
* [See the supported formats and how to use them in your project](../MO_DG/prepare_model/convert_model/supported_model_formats.md)
|
||||||
* [Convert different model formats to the OpenVINO IR format](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
|
* [Convert different model formats to the OpenVINO IR format](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
|
||||||
* [Automate model-related tasks with Model Downloader and additional OMZ Tools](https://docs.openvino.ai/latest/omz_tools_downloader.html).
|
* [Automate model-related tasks with Model Downloader and additional OMZ Tools](https://docs.openvino.ai/latest/omz_tools_downloader.html).
|
||||||
|
|
||||||
|
To begin with, you may want to [browse a database of models for use in your projects](../model_zoo.md).
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@@ -16,7 +16,7 @@ More resources:
|
|||||||
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
|
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
|
||||||
|
|
||||||
More resources:
|
More resources:
|
||||||
* [Documentation](@ref docs_nncf_introduction)
|
* [Documentation](@ref tmo_introduction)
|
||||||
* [GitHub](https://github.com/openvinotoolkit/nncf)
|
* [GitHub](https://github.com/openvinotoolkit/nncf)
|
||||||
* [PyPI](https://pypi.org/project/nncf/)
|
* [PyPI](https://pypi.org/project/nncf/)
|
||||||
|
|
||||||
@@ -25,7 +25,7 @@ A solution for Model Developers and Independent Software Vendors to use secure p
|
|||||||
|
|
||||||
More resources:
|
More resources:
|
||||||
* [documentation](https://docs.openvino.ai/latest/ovsa_get_started.html)
|
* [documentation](https://docs.openvino.ai/latest/ovsa_get_started.html)
|
||||||
* [GitHub]https://github.com/openvinotoolkit/security_addon)
|
* [GitHub](https://github.com/openvinotoolkit/security_addon)
|
||||||
|
|
||||||
|
|
||||||
### OpenVINO™ integration with TensorFlow (OVTF)
|
### OpenVINO™ integration with TensorFlow (OVTF)
|
||||||
@@ -40,7 +40,7 @@ More resources:
|
|||||||
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
|
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
|
||||||
|
|
||||||
More resources:
|
More resources:
|
||||||
* [documentation on GitHub](https://openvinotoolkit.github.io/dlstreamer_gst/)
|
* [documentation on GitHub](https://dlstreamer.github.io/index.html)
|
||||||
* [installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
|
* [installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
|
||||||
|
|
||||||
### DL Workbench
|
### DL Workbench
|
||||||
@@ -61,7 +61,7 @@ More resources:
|
|||||||
An online, interactive video and image annotation tool for computer vision purposes.
|
An online, interactive video and image annotation tool for computer vision purposes.
|
||||||
|
|
||||||
More resources:
|
More resources:
|
||||||
* [documentation on GitHub](https://openvinotoolkit.github.io/cvat/docs/)
|
* [documentation on GitHub](https://opencv.github.io/cvat/docs/)
|
||||||
* [web application](https://cvat.org/)
|
* [web application](https://cvat.org/)
|
||||||
* [Docker Hub](https://hub.docker.com/r/openvino/cvat_server)
|
* [Docker Hub](https://hub.docker.com/r/openvino/cvat_server)
|
||||||
* [GitHub](https://github.com/openvinotoolkit/cvat)
|
* [GitHub](https://github.com/openvinotoolkit/cvat)
|
||||||
|
|||||||
@@ -1,6 +1,6 @@
|
|||||||
# How to Implement Custom GPU Operations {#openvino_docs_Extensibility_UG_GPU}
|
# How to Implement Custom GPU Operations {#openvino_docs_Extensibility_UG_GPU}
|
||||||
|
|
||||||
To enable operations not supported by OpenVINO out of the box, you may need an extension for an OpenVINO operation set, and a custom kernel for the device you will target. This page describes custom kernel support for the GPU device.
|
To enable operations not supported by OpenVINO™ out of the box, you may need an extension for the OpenVINO operation set and a custom kernel for the device you will target. This article describes custom kernel support for the GPU device.
|
||||||
|
|
||||||
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
|
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
|
||||||
|
|
||||||
@@ -8,7 +8,6 @@ There are two options for using the custom operation configuration file:
|
|||||||
|
|
||||||
* Include a section with your kernels into the automatically-loaded `<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file.
|
* Include a section with your kernels into the automatically-loaded `<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file.
|
||||||
* Call the `ov::Core::set_property()` method from your application with the `"CONFIG_FILE"` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
|
* Call the `ov::Core::set_property()` method from your application with the `"CONFIG_FILE"` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:
|
||||||
|
|
||||||
@sphinxtabset
|
@sphinxtabset
|
||||||
|
|
||||||
@sphinxtab{C++}
|
@sphinxtab{C++}
|
||||||
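A minimal C++ sketch of this call, assuming a hypothetical `custom_kernels.xml` configuration file:

```cpp
#include <openvino/openvino.hpp>

void load_custom_gpu_kernels() {
    ov::Core core;
    // register the custom kernel configuration before loading the network to the plugin
    core.set_property("GPU", {{"CONFIG_FILE", "custom_kernels.xml"}});
    auto model = core.read_model("model.xml");
    auto compiled = core.compile_model(model, "GPU");
}
```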
@@ -31,7 +30,7 @@ $ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validati
|
|||||||
## Configuration File Format <a name="config-file-format"></a>
|
## Configuration File Format <a name="config-file-format"></a>
|
||||||
|
|
||||||
The configuration file is expected to follow the `.xml` file structure
|
The configuration file is expected to follow the `.xml` file structure
|
||||||
with a node of the `CustomLayer` type for every custom operation you provide.
|
with a node of the type `CustomLayer` for every custom operation you provide.
|
||||||
|
|
||||||
The definitions described in the sections below use the following notations:
|
The definitions described in the sections below use the following notations:
|
||||||
|
|
||||||
@@ -44,44 +43,44 @@ Notation | Description
|
|||||||
|
|
||||||
### CustomLayer Node and Sub-Node Structure
|
### CustomLayer Node and Sub-Node Structure
|
||||||
|
|
||||||
`CustomLayer` node contains the entire configuration for a single custom operation.
|
The `CustomLayer` node contains the entire configuration for a single custom operation.
|
||||||
|
|
||||||
| Attribute Name |\# | Description |
|
| Attribute Name |\# | Description |
|
||||||
|-----|-----|-----|
|
|-----|-----|-----|
|
||||||
| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the IR.|
|
| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the OpenVINO IR.|
|
||||||
| `type` | (1) | Must be `SimpleGPU`. |
|
| `type` | (1) | Must be `SimpleGPU`. |
|
||||||
| `version` | (1) | Must be `1`. |
|
| `version` | (1) | Must be `1`. |
|
||||||
|
|
||||||
**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+),
|
**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+),
|
||||||
`WorkSizes` (0/1)
|
`WorkSizes` (0/1)
|
||||||
|
|
||||||
### Kernel Node and Sub-Node Structure
|
### Kernel Node and Sub-Node Structure
|
||||||
|
|
||||||
`Kernel` node contains all kernel source code configuration.
|
The `Kernel` node contains all kernel source code configuration.
|
||||||
|
|
||||||
**Sub-nodes**: `Source` (1+), `Define` (0+)
|
**Sub-nodes**: `Source` (1+), `Define` (0+)
|
||||||
|
|
||||||
### Source Node and Sub-Node Structure
|
### Source Node and Sub-Node Structure
|
||||||
|
|
||||||
`Source` node points to a single OpenCL source file.
|
The `Source` node points to a single OpenCL source file.
|
||||||
|
|
||||||
| Attribute Name | \# |Description|
|
| Attribute Name | \# |Description|
|
||||||
|-----|-----|-----|
|
|-----|-----|-----|
|
||||||
| `filename` | (1) | Name of the file containing OpenCL source code. Note that the path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
|
| `filename` | (1) | Name of the file containing OpenCL source code. The path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
|
||||||
|
|
||||||
**Sub-nodes**: None
|
**Sub-nodes**: None
|
||||||
|
|
||||||
### Define Node and Sub-Node Structure
|
### Define Node and Sub-Node Structure
|
||||||
|
|
||||||
`Define` node configures a single `#‍define` instruction to be added to
|
The `Define` node configures a single `#‍define` instruction to be added to
|
||||||
the sources during compilation (JIT).
|
the sources during compilation (JIT).
|
||||||
|
|
||||||
| Attribute Name | \# | Description |
|
| Attribute Name | \# | Description |
|
||||||
|------|-------|------|
|
|------|-------|------|
|
||||||
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string. |
|
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string. |
|
||||||
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
|
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
|
||||||
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
|
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
|
||||||
| `default` | (0/1) | The default value to be used if the specified parameters are missing from the operation in the IR. |
|
| `default` | (0/1) | The default value to be used if the specified parameters are missing from the operation in the OpenVINO IR. |
|
||||||
|
|
||||||
**Sub-nodes:** None
|
**Sub-nodes:** None
|
||||||
|
|
||||||
@@ -90,37 +89,37 @@ The resulting JIT has the following form:
|
|||||||
|
|
||||||
### Buffers Node and Sub-Node Structure
|
### Buffers Node and Sub-Node Structure
|
||||||
|
|
||||||
`Buffers` node configures all input/output buffers for the OpenCL entry
|
The `Buffers` node configures all input/output buffers for the OpenCL entry
|
||||||
function. No buffers node structure exists.
|
function. No buffers node structure exists.
|
||||||
|
|
||||||
**Sub-nodes:** `Data` (0+), `Tensor` (1+)
|
**Sub-nodes:** `Data` (0+), `Tensor` (1+)
|
||||||
|
|
||||||
### Data Node and Sub-Node Structure
|
### Data Node and Sub-Node Structure
|
||||||
|
|
||||||
`Data` node configures a single input with static data, for example,
|
The `Data` node configures a single input with static data, for example,
|
||||||
weights or biases.
|
weights or biases.
|
||||||
|
|
||||||
| Attribute Name | \# | Description |
|
| Attribute Name | \# | Description |
|
||||||
|----|-----|------|
|
|----|-----|------|
|
||||||
| `name` | (1) | Name of a blob attached to an operation in the IR |
|
| `name` | (1) | Name of a blob attached to an operation in the OpenVINO IR. |
|
||||||
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to |
|
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
|
||||||
|
|
||||||
**Sub-nodes**: None
|
**Sub-nodes**: None
|
||||||
|
|
||||||
### Tensor Node and Sub-Node Structure
|
### Tensor Node and Sub-Node Structure
|
||||||
|
|
||||||
`Tensor` node configures a single input or output tensor.
|
The `Tensor` node configures a single input or output tensor.
|
||||||
|
|
||||||
| Attribute Name | \# | Description |
|
| Attribute Name | \# | Description |
|
||||||
|------|-------|-------|
|
|------|-------|-------|
|
||||||
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
|
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
|
||||||
| `type` | (1) | `input` or `output` |
|
| `type` | (1) | `input` or `output` |
|
||||||
| `port-index` | (1) | 0-based index in the operation input/output ports in the IR |
|
| `port-index` | (1) | 0-based index in the operation input/output ports in the OpenVINO IR |
|
||||||
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB`, and same values in all lowercase. Default value: `BFYX` |
|
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB` (also in lowercase). The default value: `BFYX` |
|
||||||
|
|
||||||
### CompilerOptions Node and Sub-Node Structure
|
### CompilerOptions Node and Sub-Node Structure
|
||||||
|
|
||||||
`CompilerOptions` node configures the compilation flags for the OpenCL
|
The `CompilerOptions` node configures the compilation flags for the OpenCL
|
||||||
sources.
|
sources.
|
||||||
|
|
||||||
| Attribute Name | \# | Description |
|
| Attribute Name | \# | Description |
|
||||||
@@ -131,20 +130,20 @@ sources.
|
|||||||
|
|
||||||
### WorkSizes Node and Sub-Node Structure
|
### WorkSizes Node and Sub-Node Structure
|
||||||
|
|
||||||
`WorkSizes` node configures the global/local work sizes to be used when
|
The `WorkSizes` node configures the global/local work sizes to be used when
|
||||||
queuing an OpenCL program for execution.
|
queuing an OpenCL program for execution.
|
||||||
|
|
||||||
| Attribute Name | \# | Description |
|
| Attribute Name | \# | Description |
|
||||||
|-----|------|-----|
|
|-----|------|-----|
|
||||||
| `global`<br>`local` | (0/1)<br>(0/1) | An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution.<br> The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. <br>Default value: `global=”B*F*Y*X” local=””` |
|
| `global`<br>`local` | (0/1)<br>(0/1) | An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution.<br> The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. <br>Default value: `global=”B*F*Y*X” local=””` |
|
||||||
| `dim` | (0/1) | A tensor to take the work-size from. Accepted values: `input N`, `output`, where `N` is an index of input tensor starting with 0. Default value: `output` |
|
| `dim` | (0/1) | A tensor to take the work-size from. Accepted values: `input N`, `output`, where `N` is an index of input tensor starting with 0. The default value: `output` |
|
||||||
|
|
||||||
**Sub-nodes**: None
|
**Sub-nodes**: None
|
||||||
|
|
||||||
## Example Configuration File
|
## Example Configuration File
|
||||||
|
|
||||||
The following code sample provides an example configuration file in XML
|
The following code sample provides an example configuration file in XML
|
||||||
format. For information on the configuration file structure, see
|
format. For information on the configuration file structure, see the
|
||||||
[Configuration File Format](#config-file-format).
|
[Configuration File Format](#config-file-format).
|
||||||
```xml
|
```xml
|
||||||
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
|
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
|
||||||
@@ -170,22 +169,22 @@ For an example, see [Example Kernel](#example-kernel).
|
|||||||
|
|
||||||
| Name | Value |
|
| Name | Value |
|
||||||
|---|---|
|
|---|---|
|
||||||
| `NUM_INPUTS` | Number of the input tensors bound to this kernel |
|
| `NUM_INPUTS` | Number of the input tensors bound to this kernel. |
|
||||||
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel |
|
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel. |
|
||||||
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array |
|
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array. |
|
||||||
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel |
|
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel. |
|
||||||
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array |
|
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array. |
|
||||||
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX` |
|
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX`. |
|
||||||
| `<TENSOR>_DIMS_SIZE`| The size of the `<TENSOR>_DIMS` array.|
|
| `<TENSOR>_DIMS_SIZE`| The size of the `<TENSOR>_DIMS` array.|
|
||||||
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`|
|
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`. |
|
||||||
| `<TENSOR>_FORMAT_<TENSOR_FORMAT>` | The format of the tensor, BFYX, BYXF, YXFB, FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with `#‍ifdef/#‍endif`. |
|
| `<TENSOR>_FORMAT_<TENSOR_FORMAT>` | The format of the tensor, BFYX, BYXF, YXFB, FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with `#‍ifdef/#‍endif`. |
|
||||||
| `<TENSOR>_LOWER_PADDING` | An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.|
|
| `<TENSOR>_LOWER_PADDING` | An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.|
|
||||||
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array |
|
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array. |
|
||||||
| `<TENSOR>_UPPER_PADDING` | An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX. |
|
| `<TENSOR>_UPPER_PADDING` | An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX. |
|
||||||
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array |
|
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array. |
|
||||||
| `<TENSOR>_PITCHES` | The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX.|
|
| `<TENSOR>_PITCHES` | The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX. |
|
||||||
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array |
|
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array. |
|
||||||
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element, bypassing the lower padding. |
|
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element, bypassing the lower padding. |
|
||||||
|
|
||||||
All `<TENSOR>` values are automatically defined for every tensor
|
All `<TENSOR>` values are automatically defined for every tensor
|
||||||
bound to this operation, such as `INPUT0`, `INPUT1`, and `OUTPUT0`, as shown
|
bound to this operation, such as `INPUT0`, `INPUT1`, and `OUTPUT0`, as shown
|
||||||
@@ -220,20 +219,19 @@ __kernel void example_relu_kernel(
|
|||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
> **NOTE**: As described in the previous section, all items like
|
> **NOTE**: As described in the previous section, all items such as the
|
||||||
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
|
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
|
||||||
> OpenVINO for efficiency reasons. See [Debugging
|
> OpenVINO for efficiency reasons. See the [Debugging
|
||||||
> Tips](#debugging-tips) for information on debugging the results.
|
> Tips](#debugging-tips) below for information on debugging the results.
|
||||||
|
|
||||||
## Debugging Tips<a name="debugging-tips"></a>
|
## Debugging Tips<a name="debugging-tips"></a>
|
||||||
|
|
||||||
* **Using `printf` in the OpenCL™ Kernels**.
|
**Using `printf` in the OpenCL™ Kernels**.
|
||||||
To debug the specific values, you can use `printf` in your kernels.
|
To debug the specific values, use `printf` in your kernels.
|
||||||
However, be careful not to output excessively, which
|
However, be careful not to output excessively, which
|
||||||
could generate too much data. The `printf` output buffer is typically limited, so
|
could generate too much data. The `printf` output buffer is typically limited, so
|
||||||
your output can be truncated to fit the buffer. Also, because of
|
your output can be truncated to fit the buffer. Also, because of
|
||||||
buffering, you actually get an entire buffer of output when the
|
buffering, you actually get an entire buffer of output when the
|
||||||
execution ends.<br>
|
execution ends.<br>
|
||||||
|
|
||||||
For more information, refer to the [printf
|
For more information, refer to the [printf Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).
|
||||||
Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).
|
|
||||||
|
|||||||
@@ -19,62 +19,61 @@ TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The lis
|
|||||||
each of the supported frameworks. To see the operations supported by your framework, refer to
|
each of the supported frameworks. To see the operations supported by your framework, refer to
|
||||||
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
|
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
|
||||||
|
|
||||||
Custom operations, that is those not included in the list, are not recognized by OpenVINO™ out-of-the-box. The need for a custom operation may appear in two main cases:
|
Custom operations, which are not included in the list, are not recognized by OpenVINO out-of-the-box. The need for a custom operation may appear in two cases:
|
||||||
|
|
||||||
1. A regular framework operation that is new or rarely used, which is why it hasn’t been implemented in OpenVINO yet.
|
1. A new or rarely used regular framework operation is not supported in OpenVINO yet.
|
||||||
|
|
||||||
2. A new user operation that was created for some specific model topology by a model author using framework extension capabilities.
|
2. A new user operation that was created for some specific model topology by the author of the model using framework extension capabilities.
|
||||||
|
|
||||||
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations, allowing you to plug in your own implementation for them. OpenVINO™ Extensibility API lets you add support for those custom operations and use one implementation for Model Optimizer and OpenVINO™ Runtime.
|
Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations. This allows plugging in your own implementation for them. OpenVINO Extensibility API enables adding support for those custom operations and using one implementation for Model Optimizer and OpenVINO Runtime.
|
||||||
|
|
||||||
Defining a new custom operation basically consist of two parts:
|
Defining a new custom operation basically consists of two parts:
|
||||||
|
|
||||||
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred consuming input tensor(s) and producing output tensor(s). How to implement execution kernels for [GPU](./GPU_Extensibility.md) and [VPU](./VPU_Extensibility.md) is described in separate guides.
|
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred consuming input tensor(s) and producing output tensor(s). The implementation of execution kernels for [GPU](./GPU_Extensibility.md) and [VPU](./VPU_Extensibility.md) is described in separate guides.
|
||||||
|
|
||||||
2. Mapping rule that facilitates conversion of framework operation representation to OpenVINO defined operation semantics.
|
2. Mapping rule that facilitates conversion of framework operation representation to OpenVINO defined operation semantics.
|
||||||
|
|
||||||
The first part is required for inference, the second part is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part, the next sections will describe them in detail.
|
The first part is required for inference. The second part is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part. The following sections will describe them in detail.
|
||||||
|
|
||||||
## Definition of Operation Semantics
|
## Definition of Operation Semantics
|
||||||
|
|
||||||
|
If the custom operation can be mathematically represented as a combination of existing OpenVINO operations and such decomposition gives the desired performance, then a low-level operation implementation is not required. Refer to the latest OpenVINO operation set when deciding the feasibility of such decomposition. You can use any valid combination of existing operations. The next section of this document describes the way to map a custom operation.
|
||||||
|
|
||||||
If the custom operation can be mathematically represented as a combination of exiting OpenVINO operations and such decomposition gives desired performance, then low-level operation implementation is not required. When deciding feasibility of such decomposition refer to the latest OpenVINO operation set. You can use any valid combination of exiting operations. How to map a custom operation is described in the next section of this document.
|
If such decomposition is not possible or appears too bulky with a large number of constituent operations that do not perform well, then a new class for the custom operation should be implemented, as described in the [Custom Operation Guide](add_openvino_ops.md).
|
||||||
|
|
||||||
If such decomposition is not possible or appears too bulky with lots of consisting operations that are not performing well, then a new class for the custom operation should be implemented as described in the [Custom Operation Guide](add_openvino_ops.md).
|
You might prefer implementing a custom operation class if you already have a generic C++ implementation of the operation kernel. Otherwise, try to decompose the operation first, as described above. Then, after verifying correctness of inference and resulting performance, you may move on to an optional bare-metal C++ implementation (a minimal sketch of such a class follows).
|
||||||
|
|
||||||
Prefer implementing a custom operation class if you already have a generic C++ implementation of operation kernel. Otherwise try to decompose the operation first as described above and then after verifying correctness of inference and resulting performance, optionally invest to implementing bare metal C++ implementation.
|
|
||||||
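As a rough illustration, a minimal sketch of such a custom operation class, modeled on the `Identity` placeholder from the Template extension; this is simplified and not the guide's full implementation:

```cpp
#include <cstring>
#include <openvino/op/op.hpp>

// minimal custom operation: copies its single input to its single output
class Identity : public ov::op::Op {
public:
    OPENVINO_OP("Identity");

    Identity() = default;
    explicit Identity(const ov::Output<ov::Node>& arg) : Op({arg}) {
        constructor_validate_and_infer_types();
    }

    void validate_and_infer_types() override {
        // the output mirrors the element type and shape of the input
        set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
    }

    std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector& inputs) const override {
        return std::make_shared<Identity>(inputs.at(0));
    }

    bool evaluate(ov::TensorVector& outputs, const ov::TensorVector& inputs) const override {
        outputs[0].set_shape(inputs[0].get_shape());
        std::memcpy(outputs[0].data(), inputs[0].data(), inputs[0].get_byte_size());
        return true;
    }
    bool has_evaluate() const override { return true; }
};
```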
|
|
||||||
## Mapping from Framework Operation
|
## Mapping from Framework Operation
|
||||||
|
|
||||||
Depending on model format used for import, mapping of custom operation is implemented differently, choose one of:
|
Mapping of a custom operation is implemented differently, depending on the model format used for import. You may choose one of the following:
|
||||||
|
|
||||||
1. If model is represented in ONNX (including models exported from Pytorch in ONNX) or PaddlePaddle formats, then one of the classes from [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with Model Optimizer `--extensions` option or when model is imported directly to OpenVINO run-time using read_model method. Python API is also available for run-time model importing.
|
1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX) or PaddlePaddle formats, then one of the classes from [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with the `--extensions` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the `read_model` method. Python API is also available for runtime model import. A registration sketch follows this list.
|
||||||
|
|
||||||
2. If model is represented in TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
|
2. If a model is represented in the TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
|
||||||
|
|
||||||
Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and Apache MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with `read_model` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
|
Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and Apache MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with `read_model` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
|
||||||
|
|
||||||
If you are implementing extensions for ONNX or PaddlePaddle new frontends and plan to use Model Optimizer `--extension` option for model conversion, then the extensions should be
|
If you are implementing extensions for new ONNX or PaddlePaddle frontends and plan to use the `--extensions` option in Model Optimizer for model conversion, then the extensions should be:
|
||||||
|
|
||||||
1. Implemented in C++ only
|
1. Implemented in C++ only.
|
||||||
|
|
||||||
2. Compiled as a separate shared library (see details how to do that later in this guide).
|
2. Compiled as a separate shared library (see details on how to do this further in this guide).
|
||||||
|
|
||||||
You cannot write new frontend extensions using Python API if you plan to use them with Model Optimizer.
|
Model Optimizer does not support new frontend extensions written in Python API.
|
||||||
|
|
||||||
Remaining part of this guide uses Frontend Extension API applicable for new frontends.
|
Remaining part of this guide describes application of Frontend Extension API for new frontends.
|
||||||
|
|
||||||
## Registering Extensions
|
## Registering Extensions
|
||||||
|
|
||||||
A custom operation class and a new mapping frontend extension class object should be registered to be usable in OpenVINO runtime.
|
A custom operation class and a new mapping frontend extension class object should be registered to be usable in OpenVINO runtime.
|
||||||
|
|
||||||
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/docs/template_extension/new), which demonstrates extension development details based on minimalistic `Identity` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compliable, to see how it works.
|
> **NOTE**: This documentation is derived from the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new), which demonstrates the details of extension development. It is based on minimalistic `Identity` operation that is a placeholder for your real custom operation. Review the complete, fully compilable code to see how it works.
|
||||||
|
|
||||||
To load the extensions to the `ov::Core` object, use the `ov::Core::add_extension` method, this method allows to load library with extensions or extensions from the code.
|
Use the `ov::Core::add_extension` method to load the extensions to the `ov::Core` object. This method allows loading library with extensions or extensions from the code.
|
||||||
|
|
||||||
### Load extensions to core
|
### Load Extensions to Core
|
||||||
|
|
||||||
Extensions can be loaded from code with `ov::Core::add_extension` method:
|
Extensions can be loaded from a code with the `ov::Core::add_extension` method:
|
||||||
|
|
||||||
@sphinxtabset
|
@sphinxtabset
|
||||||
|
|
||||||
@@ -92,7 +91,7 @@ Extensions can be loaded from code with `ov::Core::add_extension` method:

@endsphinxtabset

`Identity` is a custom operation class defined in the [Custom Operation Guide](add_openvino_ops.md). This is sufficient to enable reading OpenVINO IR which uses the `Identity` extension operation emitted by Model Optimizer. To load the original model directly to the runtime, you also need to add a mapping extension:

@sphinxdirective
@@ -110,32 +109,34 @@ Extensions can be loaded from code with `ov::Core::add_extension` method:

@endsphinxdirective
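
Putting both registrations together, the following is a minimal sketch of how this can look in application code. It assumes the `Identity` class from the Template extension, an illustrative header name, and an ONNX model file name; the exact frontend header path may differ by OpenVINO version:

```cpp
#include <openvino/openvino.hpp>
#include <openvino/frontend/extension.hpp>  // assumed aggregate frontend extension header

#include "identity.hpp"  // hypothetical header defining TemplateExtension::Identity

int main() {
    ov::Core core;

    // Register the custom operation class; this is enough to read
    // OpenVINO IR that contains the Identity extension operation.
    core.add_extension<TemplateExtension::Identity>();

    // Map the framework operation of the same name onto the custom class,
    // so the original (e.g., ONNX) model can be imported directly.
    core.add_extension(ov::frontend::OpExtension<TemplateExtension::Identity>("Identity"));

    auto model = core.read_model("model.onnx");
    return 0;
}
```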
When the Python API is used, there is no way to implement a custom OpenVINO operation. Even if a custom OpenVINO operation is implemented in C++ and loaded into the runtime through a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. In this case, use the C++ shared library approach to implement both the operation semantics and the framework mapping.

Python can still be used for operation mapping and decomposition when only operations from the standard OpenVINO operation set are used.

### Create a Library with Extensions

An extension library should be created in the following cases:

- Conversion of a model with custom operations in Model Optimizer.
- Loading a model with custom operations in a Python application. This applies to both a framework model and OpenVINO IR.
- Loading models with custom operations in tools that support loading extensions from a library, for example, `benchmark_app`.

To create an extension library, for example, to load the extensions into Model Optimizer, perform the following:

1. Create an entry point for the extension library. OpenVINO provides the `OPENVINO_CREATE_EXTENSIONS()` macro, which allows defining an entry point to a library with OpenVINO Extensions.

   This macro should have a vector of all OpenVINO Extensions as an argument.

   Based on that, the declaration of an extension class might look like the following:

@snippet template_extension/new/ov_extension.cpp ov_extension:entry_point
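
For reference, a minimal sketch of such an entry point is shown below. It follows the pattern of the Template extension; the `Identity` class and its header are assumptions carried over from the previous sections:

```cpp
#include <openvino/core/extension.hpp>
#include <openvino/core/op_extension.hpp>
#include <openvino/frontend/extension.hpp>  // assumed aggregate frontend extension header

#include "identity.hpp"  // hypothetical header defining TemplateExtension::Identity

// Defines the entry point that OpenVINO looks for when loading the shared library.
// The argument is a vector of all extensions exported by this library.
OPENVINO_CREATE_EXTENSIONS(
    std::vector<ov::Extension::Ptr>({
        // The custom operation itself.
        std::make_shared<ov::OpExtension<TemplateExtension::Identity>>(),
        // The mapping of the framework operation onto the custom class.
        std::make_shared<ov::frontend::OpExtension<TemplateExtension::Identity>>("Identity")
    }));
```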

2. Configure the build of your extension library, using the following CMake script:

@snippet template_extension/new/CMakeLists.txt cmake:extension

   This CMake script finds OpenVINO, using the `find_package` CMake command.

3. Build the extension library, running the commands below:

```sh
$ cd docs/template_extension/new
@@ -145,7 +146,7 @@ $ cmake -DOpenVINO_DIR=<OpenVINO_DIR> ../
$ cmake --build .
```

4. After the build, you may use the path to your extension library to load your extensions to OpenVINO Runtime:

@sphinxtabset
@@ -168,4 +169,3 @@ After the build you can use path to your extension library to load your extensio

* [OpenVINO Transformations](./ov_transformations.md)
* [Using OpenVINO Runtime Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Hello Shape Infer SSD sample](../../samples/cpp/hello_reshape_ssd/README.md)

@@ -2,9 +2,10 @@

To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for one of the VPUs, the Intel® Neural Compute Stick 2 device, which uses the MYRIAD device plugin.

> **NOTE:**
> * OpenCL custom layer support is available in preview mode.
> * This section assumes you are familiar with developing kernels using OpenCL.

To customize your topology with an OpenCL layer, carry out the tasks described on this page:

1. Write and compile your OpenCL code with the standalone offline OpenCL compiler (`clc`).
@@ -13,9 +14,9 @@ To customize your topology with an OpenCL layer, carry out the tasks described o

## Compile OpenCL code for VPU (Intel® Neural Compute Stick 2)

> **NOTE**: The OpenCL compiler, targeting Intel® Neural Compute Stick 2 for the SHAVE processor only, is redistributed with OpenVINO.
OpenCL support is provided by ComputeAorta and is distributed under a license agreement between Intel® and Codeplay Software Ltd.

The OpenCL toolchain for the Intel® Neural Compute Stick 2 supports offline compilation only. Start with compiling OpenCL C code, using the standalone `clc` compiler. You can find the compiler binary at `<INSTALL_DIR>/tools/cl_compiler`.

> **NOTE**: By design, custom OpenCL layers support any OpenCL kernels written with the assumption of OpenCL version 1.2. They also support the half float extension and are optimized for this type, because it is a native type for Intel® Movidius™ VPUs.

1. Prior to running a compilation, make sure that the following variables are set:
@@ -63,7 +64,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following

- Node `Source` must contain the following attributes:
  - `filename` – The path to a compiled binary relative to the XML configuration file.
- Sub-node `Parameters` – Describes parameter bindings. For more information, see the description below.
- Sub-node `WorkSizes` – Describes local and global work group sizes and the source for dimension deduction as a pair `direction,port`. In the example above, the work group is described relative to the dimension of the input tensor that comes through port 0 in the OpenVINO IR. The `global` and `local` work group configurations support any simple math expressions with +,-,\*,/, and () from `B`(batch), `Y`(height), `X`(width) and `F`(channels).
- Sub-node `Where` – Allows customizing bindings with the `key="value"` attribute. For example, to substitute only 3x3 convolutions, write `<Where kernel="3,3"/>` in the binding xml.

Parameter description supports `Tensor` of one of the tensor types such as `input`, `output`, `input_buffer`, `output_buffer` or `data`, `Scalar`, or `Data` nodes and has the following format:
@@ -77,7 +78,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following

- `type` – Node type: `input_buffer` or `output_buffer`. Use the appropriate type to bind multiple kernels that correspond to different stages of the same layer.
- `port-index` – The unique identifier to bind by.
- `dim` – The dim source with the same `direction,port` format used for `WorkSizes` bindings.
- `size` – The amount of bytes needed. The current expression syntax supports only expressions over dimensions of the selected input/output tensor or constants, and might be extended in the future.

Here is an example of multi-stage MVN layer binding:
```xml
@@ -107,7 +108,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following
    <WorkSizes dim="output,0" global="((Y+7)/8)*8,F,1" local="8,1,1"/>
</CustomLayer>
```
- Each `Tensor` node that has the `data` type must contain the following attributes:
  - `source` – A name of the blob as it is in the IR. A typical example is `weights` for convolution.
  - `format` – Specifies the channel order in the tensor. Optional conversion layers are generated if the custom layer format is not.
```xml
@@ -133,7 +134,7 @@ Each custom layer is described with the `CustomLayer` node. It has the following

- Each `Data` node must contain the following attributes:
  - `arg-name` – The name of a kernel parameter in the kernel signature.
  - `type` – Node type. Currently, `local_data` is the only supported value, which defines a buffer allocated in fast local on-chip memory. It is limited to 100KB for all `__local` and `__private` arrays defined inside the kernel, as well as all `__local` parameters passed to the kernel. A manual-DMA extension requires double buffering. If the custom layer is detected to run out of local memory, the inference fails.
  - `dim` – The dim source with the same `direction,port` format used for `WorkSizes` bindings.
  - `size` – The amount of bytes needed. The current expression syntax supports only expressions over dimensions of the selected input/output tensor or constants, and may be extended in the future.
@@ -158,14 +159,13 @@ Each custom layer is described with the `CustomLayer` node. It has the following

## Pass Configuration File to OpenVINO™ Runtime

> **NOTE**: If both native and custom layer implementations are present, the custom kernel has priority over the native one.

Before loading the network that features the custom layers, provide a separate configuration file. Load it using the `ov::Core::set_property()` method with the "CONFIG_KEY" key and the configuration file name as the value, before loading the network that uses custom operations to the plugin:

@snippet docs/snippets/vpu/custom_op.cpp part0

## Optimizing Kernels with OpenCL for VPU (Intel® Neural Compute Stick 2)

This section provides optimization guidelines on writing custom layers with OpenCL for VPU devices. Knowledge of the general OpenCL programming model and the OpenCL kernel language is assumed and is not a subject of this section. The OpenCL model mapping to VPU is described in the table below.

| OpenCL Model | VPU Mapping|
|-----|----|
@@ -175,41 +175,33 @@ programming model and OpenCL kernel language is assumed and not a subject of thi
| Global memory | Mapped to DDR, used to pass execution preserved parameters for inputs, outputs, and blobs |
| Work group | Executed on a single SHAVE core iterating over multiple work items |

By the OpenCL specification, the work group execution order is not defined. This means it is your responsibility to ensure that race conditions among work groups are not introduced. The custom layer runtime distributes the work grid evenly among available compute resources and executes them in an arbitrary order. This static scheduling approach works best if the load is evenly spread out across work groups, which is a typical case for deep learning kernels. The following guidelines are recommended for work group partitioning:

1. Distribute work evenly across work groups.
2. Adjust work group granularity to maintain an equal workload for all compute cores.
3. Set the maximum number of cores using the `max-shaves` attribute for the `CustomLayer` node. This keeps more resources for the rest of the topology. It is also useful if the kernel scalability has reached its limits, which may happen while optimizing memory-bound kernels or kernels with poor parallelization.
4. Try an alternate data layout (`BFXY`/`BYXF`) for the kernel to see if it improves work group partitioning or data access patterns. Consider not just a specific layer boost, but also full topology performance, because data conversion layers will be automatically inserted as appropriate.

The offline OpenCL compiler (`clc`) features automatic vectorization over `get_global_id(0)` usage, if uniform access is detected.
For example, the kernel below could be automatically vectorized:
```cpp
__kernel void cvtf32f16(__global float* restrict inImage, __global half* restrict outImage,
                        float scale, float bias)
{
    int idx = get_global_id(0) + get_global_id(1) * get_global_size(0) + get_global_id(2) * get_global_size(0) * get_global_size(1);
    outImage[idx] = convert_half(inImage[idx]*scale+bias);
}
```
However, this work-group based vectorizer (WGV) conflicts with the default LLVM vectorizer based on superword level parallelism (SLP) for the current compiler version. Manual vectorization is recommended to provide the best performance for non-uniform code patterns. WGV works if and only if vector types are not used in the code.

Here is a short list of optimization tips:

1. Help the auto-vectorizer ensure non-aliasing pointers for kernel parameters by putting `restrict` markers where possible.
   - This can give a performance boost, especially for kernels with unrolling, like the `ocl_grn` from the example below.
   - Place `restrict` markers for kernels with manually vectorized code. In the `ocl_grn` kernel below, the unrolled version without `restrict` is up to 20% slower than the most optimal one, which combines both unrolling and `restrict`.
2. Put `#pragma unroll N` in your loop header. The compiler does not trigger unrolling by default, so it is your responsibility to annotate the code with pragmas as appropriate. The `ocl_grn` version with `#pragma unroll 4` is up to 50% faster, most of which comes from unrolling the first loop, because LLVM, in general, is better at scheduling 3-stage loops (load-compute-store), while the first loop `variance += (float)(src_data[c*H*W + y*W + x] * src_data[c*H*W + y*W + x]);` is only 2-stage (load-compute). Pay attention to unrolling such cases first. The unrolling factor is loop-dependent: choose the smallest number that still improves performance, as an optimum between kernel size and execution speed. For this specific kernel, changing the unroll factor from `4` to `6` results in the same performance, so an unrolling factor of 4 is the optimum. For the Intel® Neural Compute Stick 2, unrolling is conjugated with the automatic software pipelining for load, store, and compute stages:
```cpp
__kernel void ocl_grn(__global const half* restrict src_data, __global half* restrict dst_data, int C, float bias)
{
@@ -227,7 +219,7 @@ __kernel void ocl_grn(__global const half* restrict src_data, __global half* res
    dst_data[c*H*W + y*W + x] = (half)((float)src_data[c*H*W + y*W + x] * variance);
}
```
To check the efficiency of WGV, compare the performance of the kernel above with the kernel below, which is manually vectorized over width:
```cpp
__kernel void ocl_grn_line(__global const half* restrict src_data, __global half* restrict dst_data, int C, int W, float bias)
{
@@ -267,19 +259,14 @@ __kernel void ocl_grn_line(__global const half* restrict src_data, __global hal
```
Both versions perform the same, but the second one has more complex code.

3. If it is easy to predict the work group size, use the `reqd_work_group_size` kernel attribute to ask the compiler to unroll the code up to the local size of the work group. Note that if the kernel is actually executed with a different work group configuration, the result is undefined.
4. Prefer using `half` compute if it keeps reasonable accuracy. A 16-bit float is a native type for the Intel® Neural Compute Stick 2, and most of the `half_*` functions are mapped to a single hardware instruction. Use the standard `native_*` functions for the rest of the types.

5. Prefer using the `convert_half` function over `vstore_half` if conversion to a 32-bit float is required. The `convert_half` function is mapped to a single hardware instruction. For the `cvtf32f16` kernel above, the line `outImage[idx] = convert_half(inImage[idx]*scale+bias);` is eight times slower than the code with `vstore_half`.

6. Mind early exits, as they can be extremely costly for the current version of the `clc` compiler due to conflicts with the auto-vectorizer. It is recommended to set the local size by the `x` dimension equal to the input or/and output width. If it is impossible to define a work grid that exactly matches the inputs or/and outputs in order to eliminate checks, for example, `if (get_global_id(0) >= width) return`, use the line-wise kernel variant with manual vectorization.
The kernel example below demonstrates the impact of early exits on kernel performance.
```cpp
// Initial version
@@ -302,8 +289,8 @@ The kernel example below demonstrates the impact of early exits on kernel perfor
}
```
This `reorg` kernel is auto-vectorizable, but an input for the YOLO v2 topology is `NCHW=<1,64,26,26>`, which is not a multiple of the vector width (`8` for the `half` data type). As a result, the Inference Engine does not select the auto-vectorized kernel.
To compare the performance of the auto-vectorized and scalar versions of the kernel, change the input size to `NCHW=<1,64,26,32>`. This enables the auto-vectorized version to be selected by the Inference Engine and can give you about a 30% uplift.
Since the auto-vectorized version is faster, it makes sense to enable it for the YOLO v2 topology input size by setting the local size to a multiple of the vector width, for example, `32`, and adjusting the global sizes accordingly. As a result, the execution work grid exceeds the actual input dimensions, so out-of-bound checks should be inserted. See the updated kernel version below:
```cpp
// Version with out-of-bound checks added
__kernel void reorg(const __global half* restrict src, __global half* restrict out, int W, int stride)
@@ -324,7 +311,7 @@ Since the auto-vectorized version is faster, it makes sense to enable it for the
    out[W*H*c + W*h + w] = src[W2*H2*c2 + W2*h2 + w2];
}
```
This code performs the same as the initial kernel above (scalar) due to branching overhead. If you replace the `w = min(w, W-1);` min/max expression with `if (w >= W) return;`, runtime increases up to 2x compared to the code without branching (initial version).<br>
If branching is inevitable for your element-based kernel, it is recommended to change the scheme to line-based. See the kernel variant below:
```cpp
// Line-wise version
@@ -347,8 +334,8 @@ __kernel void reorg(const __global half* restrict src, __global half* restrict o
}
```
This decreases the execution time up to 40% compared to the best performing vectorized kernel without early exits (initial version).

7. Reuse computations among work items by using line-based kernels or sharing values through `__local` memory (see the sketch below).
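As an illustration, here is a minimal, hypothetical sketch of the sharing idea (not from the OpenVINO samples): one work item computes a per-row value in `__local` memory once, and every work item of the group reuses it. The local size is assumed to be (width, 1, 1) with the global size (width, height, 1):
```cpp
// Hypothetical kernel: normalize each element by the sum of its row.
// The row sum is computed once per work group and shared via __local memory.
__kernel void scale_by_row_sum(__global const half* restrict src,
                               __global half* restrict dst, int W)
{
    __local float row_sum;
    const int x = get_local_id(0);
    const int y = get_group_id(1);

    if (x == 0) {                     // one work item computes the shared value
        float sum = 0.0f;
        for (int i = 0; i < W; i++)
            sum += (float)src[y*W + i];
        row_sum = sum;
    }
    barrier(CLK_LOCAL_MEM_FENCE);     // make row_sum visible to the whole group

    dst[y*W + x] = (half)((float)src[y*W + x] / row_sum);
}
```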
8. Improve data access locality. Most custom kernels are memory-bound, while convolution and fully connected layers are hardware-implemented. The code below demonstrates a further optimized version of the `reorg` kernel, unrolled by `stride`:
```cpp
// Unrolled line-wise version
__kernel void reorg_unrolled_by_stride(const __global half* restrict src, __global half* restrict dst,
@@ -366,14 +353,11 @@ This decreases the execution time up to 40% against the best performing vectoriz
    dst[W*H*C2*(stride_y*stride+stride_x) + W*H*c2 + W*h + w] = src[W2*H2*c2 + W2*h*stride + W2*stride_y + w2 + stride_x];
}
```
The `src` data in this case is loaded only once. As a result, the cycle count drops up to 45% compared to the line-wise version.

9. Copy data from `__global` to `__local` or `__private` memory if the data is accessed more than once. Access to `__global` memory is orders of magnitude slower than access to `__local`/`__private` memory, due to the statically scheduled pipeline, which stalls completely on memory access without any prefetch. The same recommendation applies to scalar load/store from/to a `__global` pointer, since work-group copying can be done in a vector fashion.
10. Use a manual DMA extension. Local (on-chip) memory throughput is up to 24x higher than DDR throughput. Starting with the OpenVINO 2020.1 release, VPU OpenCL features a manual-DMA kernel extension to copy a sub-tensor used by a work group into local memory and perform the computation without DDR involved. Here is a simple GRN kernel implementation that runs over DDR. The local size has the form (width of the input tensor, 1, 1), to define a large enough work group to get the code automatically vectorized and unrolled, while the global size is (width of the input tensor, height of the input tensor, 1):
```cpp
__kernel void grn_NCHW(
    __global const half* restrict src_data,
@@ -398,7 +382,7 @@ from/to a `__blobal` pointer since work-group copying could be done in a vector
}
```

This kernel can be rewritten to introduce the special data binding `__dma_preload` and `__dma_postwrite` intrinsics. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName`, and `__dma_postwrite_kernelName`. The `__dma_preload_kernelName` kernel for a particular work group `n` is guaranteed to be executed before the `n`-th work group itself, while `__dma_postwrite_kernelName` is guaranteed to be executed after the corresponding work group. You can define either of these functions to copy data from-to `__global` and `__local` memory. The syntax requires an exact functional signature match. The example below illustrates how to prepare your kernel for manual DMA.

```cpp
__kernel void __dma_preload_grn_NCHW(
@@ -557,9 +541,9 @@ __kernel void grn_NCHW(
}
```

Note the `get_local_size` and `get_local_id` usage inside the kernel. A 21x speedup is expected for this kernel on the enet-curbs setup, because it was completely limited by memory usage.

An alternative method to using DMA is the work item copy extension. These functions are executed inside a kernel and require work groups equal to a single work item.

Here is the list of supported work item functions:
```cpp
@@ -70,7 +70,7 @@ To eliminate operation, OpenVINO™ has special method that considers all limita

In case of successful replacement, `ov::replace_output_update_name()` automatically preserves the friendly name and runtime info.

## Transformations types <a name="transformations-types"></a>

OpenVINO™ Runtime has three main transformation types:

@@ -91,7 +91,7 @@ Transformation library has two internal macros to support conditional compilatio

When developing a transformation, you need to follow these transformation rules:

### 1. Friendly Names

Each `ov::Node` has a unique name and a friendly name. In transformations, we care only about the friendly name, because it represents the name from the model.
To avoid losing the friendly name when replacing a node with another node or subgraph, set the original friendly name on the last node of the replacing subgraph. See the example below.
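
For instance, a minimal sketch (assuming `node` is replaced by a subgraph whose last node is `new_node`; the names are illustrative):

```cpp
#include <openvino/core/graph_util.hpp>
#include <openvino/core/node.hpp>

void replace_and_keep_name(const std::shared_ptr<ov::Node>& node,
                           const std::shared_ptr<ov::Node>& new_node) {
    // Preserve the model-level name on the last node of the replacing subgraph.
    new_node->set_friendly_name(node->get_friendly_name());
    ov::replace_node(node, new_node);
}
```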
@@ -100,7 +100,7 @@ To avoid losing friendly name when replacing node with other node or subgraph, s

In more advanced cases, when a replaced operation has several outputs and we add additional consumers to its outputs, we decide how to set the friendly name by arrangement.

### 2. Runtime Info

Runtime info is a map `std::map<std::string, ov::Any>` located inside the `ov::Node` class. It represents additional attributes of the `ov::Node`.
These attributes can be set by users or by plugins, and when executing a transformation that changes `ov::Model`, we need to preserve them, as they will not be automatically propagated.
@@ -111,9 +111,9 @@ Currently, there is no mechanism that automatically detects transformation types

When a transformation has multiple fusions or decompositions, `ov::copy_runtime_info` must be called multiple times for each case.

> **NOTE**: `copy_runtime_info` removes `rt_info` from destination nodes. If you want to keep it, specify the destination nodes among the source nodes, like this: `copy_runtime_info({a, b, c}, {a, b})`
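
For example, a fusion replacing two nodes with one might preserve the attributes as follows (`mul`, `add`, and `fused` are illustrative node pointers):

```cpp
#include <openvino/core/node.hpp>
#include <openvino/core/rt_info.hpp>

void preserve_attributes(const std::shared_ptr<ov::Node>& mul,
                         const std::shared_ptr<ov::Node>& add,
                         const std::shared_ptr<ov::Node>& fused) {
    // Propagate rt_info of both fused nodes to the node that replaces them.
    ov::copy_runtime_info({mul, add}, fused);
    // To also keep rt_info on a node that appears among the destinations,
    // list it among the sources as well, e.g.:
    // ov::copy_runtime_info({mul, add, fused}, {mul, fused});
}
```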

### 3. Constant Folding

If your transformation inserts constant sub-graphs that need to be folded, do not forget to use `ov::pass::ConstantFolding()` after your transformation, or call constant folding directly for the operation.
The example below shows how a constant subgraph can be constructed.
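
For illustration, a minimal sketch of building a constant subgraph and folding it afterwards (shapes and values are illustrative):

```cpp
#include <openvino/opsets/opset8.hpp>
#include <openvino/pass/constant_folding.hpp>

void fold_after_insertion(const std::shared_ptr<ov::Model>& model) {
    // A constant subgraph: Constant -> Convert. After it is inserted into the
    // model, constant folding collapses it into a single precomputed Constant.
    auto constant = ov::opset8::Constant::create(ov::element::f32, ov::Shape{2}, {1.0f, 2.0f});
    auto convert = std::make_shared<ov::opset8::Convert>(constant, ov::element::f16);
    // ... connect `convert` to consumers in the model, then fold:
    ov::pass::ConstantFolding().run_on_model(model);
}
```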
@@ -140,8 +140,8 @@ In transformation development process:

## Using pass manager <a name="using_pass_manager"></a>

`ov::pass::Manager` is a container class that can store a list of transformations and execute them. The main idea of this class is to have a high-level representation for a grouped list of transformations.
It can register and apply any [transformation pass](#transformations-types) on a model.
In addition, `ov::pass::Manager` has extended debug capabilities (find more information in the [how to debug transformations](#how-to-debug-transformations) section).

The example below shows basic usage of `ov::pass::Manager`:
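
A minimal sketch of such usage (the registered passes are illustrative):

```cpp
#include <openvino/pass/constant_folding.hpp>
#include <openvino/pass/manager.hpp>
#include <openvino/pass/visualize_tree.hpp>

void run_common_passes(const std::shared_ptr<ov::Model>& model) {
    ov::pass::Manager manager;
    // Passes run in registration order when run_passes is called.
    manager.register_pass<ov::pass::ConstantFolding>();
    manager.register_pass<ov::pass::VisualizeTree>("model_after_folding.svg");
    manager.run_passes(model);
}
```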
@@ -151,7 +151,7 @@ Another example shows how multiple matcher passes can be united into single Grap

@snippet src/transformations/template_pattern_transformation.cpp matcher_pass:manager2

## How to debug transformations <a name="how-to-debug-transformations"></a>

If you are using `ngraph::pass::Manager` to run a sequence of transformations, you can get additional debug capabilities by using the following environment variables:

@@ -160,7 +160,7 @@ OV_PROFILE_PASS_ENABLE=1 - enables performance measurement for each transformati
OV_ENABLE_VISUALIZE_TRACING=1 - enables visualization after each transformation. By default, it saves dot and svg files.
```

> **NOTE**: Make sure that you have dot installed on your machine; otherwise, it will silently save only a dot file without an svg file.

## See Also

@@ -1,4 +1,4 @@

# Build Plugin Using CMake {#openvino_docs_ie_plugin_dg_plugin_build}

Inference Engine build infrastructure provides the Inference Engine Developer Package for plugin development.

@@ -57,7 +57,6 @@ A common plugin consists of the following components:

To build a plugin and its tests, run the following CMake scripts:

- Root `CMakeLists.txt`, which finds the Inference Engine Developer Package using the `find_package` CMake command and adds the `src` and `tests` subdirectories with plugin sources and their tests, respectively:

```cmake
cmake_minimum_required(VERSION 3.13)

@@ -82,21 +81,15 @@ if(ENABLE_TESTS)
endif()
endif()
```

> **NOTE**: The default values of the `ENABLE_TESTS` and `ENABLE_FUNCTIONAL_TESTS` options are shared via the Inference Engine Developer Package and are the same as for the main DLDT build tree. You can override them during the plugin build, using the command below:

```bash
$ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DInferenceEngineDeveloperPackage_DIR=../dldt-release-build ../template-plugin
```

- `src/CMakeLists.txt` to build a plugin shared library from sources:

@snippet template_plugin/src/CMakeLists.txt cmake:plugin

> **NOTE**: The `IE::inference_engine` target is imported from the Inference Engine Developer Package.

- `tests/functional/CMakeLists.txt` to build a set of functional plugin tests:

@snippet template_plugin/tests/functional/CMakeLists.txt cmake:functional_tests

> **NOTE**: The `IE::funcSharedTests` static library with common functional Inference Engine Plugin tests is imported via the Inference Engine Developer Package.

@@ -95,6 +95,6 @@ Returns a current value for a configuration key with the name `name`. The method
|
|||||||
|
|
||||||
@snippet src/template_executable_network.cpp executable_network:get_config
|
@snippet src/template_executable_network.cpp executable_network:get_config
This function is the only way to get configuration values when a network is imported and compiled by other developers and tools (for example, the [Compile tool](@ref openvino_inference_engine_tools_compile_tool_README)).
The next step in plugin library implementation is the [Synchronous Inference Request](@ref openvino_docs_ie_plugin_dg_infer_request) class.
@@ -47,13 +47,13 @@ Inference Engine plugin dynamic library consists of several main components:

on several task executors based on a device-specific pipeline structure.

> **NOTE**: This documentation is written based on the `Template` plugin, which demonstrates plugin
> development details. Find the complete code of the `Template` plugin, which is fully compilable and up-to-date,
> at `<dldt source dir>/docs/template_plugin`.
Detailed guides
-----------------------

* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake
* Plugin and its components [testing](@ref openvino_docs_ie_plugin_dg_plugin_testing)
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
@@ -81,7 +81,7 @@ The function accepts a const shared pointer to `ov::Model` object and performs t

1. Deep copies a const object to a local object, which can later be modified.
2. Applies common and plugin-specific transformations on the copied graph to make it more friendly to hardware operations. For details on how to write custom plugin-specific transformations, refer to the [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide. See detailed topics about network representation:
   * [Intermediate Representation and Operation Sets](@ref openvino_docs_MO_DG_IR_and_opsets)
   * [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks).

@snippet template_plugin/src/template_plugin.cpp plugin:transform_network
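As a complement to the snippet, here is a hedged sketch of the two steps above; the helper name `transform_model` and the specific pass are illustrative stand-ins, not the Template plugin's exact pipeline:

```cpp
#include <memory>

#include <openvino/core/model.hpp>
#include <openvino/pass/constant_folding.hpp>
#include <openvino/pass/manager.hpp>

// Hedged sketch of the transformation flow described above.
std::shared_ptr<ov::Model> transform_model(const std::shared_ptr<const ov::Model>& model) {
    // 1. Deep copy the const model into a local, modifiable object.
    std::shared_ptr<ov::Model> transformed = model->clone();

    // 2. Run common and plugin-specific passes on the copy; ConstantFolding
    //    stands in here for a device-specific pass pipeline.
    ov::pass::Manager manager;
    manager.register_pass<ov::pass::ConstantFolding>();
    manager.run_passes(transformed);

    return transformed;
}
```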
@@ -14,15 +14,12 @@ Engine concepts: plugin creation, multiple executable networks support, multiple

2. **Single layer tests** (`single_layer_tests` sub-folder). This group of tests checks that a particular single layer can be inferred on a device. An example of test instantiation based on the test definition from the `IE::funcSharedTests` library:

- From the declaration of the convolution test class, you can see that it is a parameterized GoogleTest-based class with the `convLayerTestParamsSet` tuple of parameters:

@snippet single_layer/convolution.hpp test_convolution:definition

- Based on that, define a set of parameters for `Template` plugin functional test instantiation:

@snippet single_layer_tests/convolution.cpp test_convolution:declare_parameters

- Instantiate the test itself using the standard GoogleTest macro `INSTANTIATE_TEST_SUITE_P` (a generic sketch of this mechanism follows the list below):

@snippet single_layer_tests/convolution.cpp test_convolution:instantiate

3. **Sub-graph tests** (`subgraph_tests` sub-folder). This group of tests is designed to test small patterns or combinations of layers. For example, when a particular topology is enabled in a plugin (e.g., TF ResNet-50), there is no need to add the whole topology to the tests. Instead, a particular repetitive subgraph or pattern can be extracted from ResNet-50 and added to the tests. The instantiation of the sub-graph tests is done in the same way as for single layer tests.
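The generic GoogleTest mechanism behind such instantiations, sketched with hypothetical names (this is not the `IE::funcSharedTests` code; `DemoParams` merely stands in for a tuple like `convLayerTestParamsSet`):

```cpp
#include <gtest/gtest.h>

#include <tuple>

// Hedged sketch: a value-parameterized test class analogous in shape to the
// shared layer test definitions.
using DemoParams = std::tuple<int, int>;  // e.g., kernel size and stride (assumed)

class DemoLayerTest : public ::testing::TestWithParam<DemoParams> {};

TEST_P(DemoLayerTest, CanRunOnDevice) {
    const auto& [kernel, stride] = GetParam();
    EXPECT_GT(kernel, 0);
    EXPECT_GT(stride, 0);
}

// Instantiate the test over the cross-product of parameter values.
INSTANTIATE_TEST_SUITE_P(smoke_Demo, DemoLayerTest,
                         ::testing::Combine(::testing::Values(1, 3),
                                            ::testing::Values(1, 2)));
```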
@@ -32,7 +32,7 @@ Thus we can define:

- **Scale** as `(output_high - output_low) / (levels-1)`
- **Zero-point** as `-output_low / (output_high - output_low) * (levels-1)`

> **NOTE**: During the quantization process, the values `input_low`, `input_high`, `output_low`, and `output_high` are selected so as to map a floating-point zero exactly to an integer value (the zero-point) and vice versa.
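A hedged numeric illustration of these formulas (the range values below are chosen for the example, not taken from a real model):

```cpp
// Worked example of the scale/zero-point formulas above.
const int levels = 256;             // 8-bit quantization
const float output_low = -1.28f;    // assumed output range
const float output_high = 1.27f;

const float scale = (output_high - output_low) / (levels - 1);
// scale = 2.55 / 255 = 0.01

const float zero_point = -output_low / (output_high - output_low) * (levels - 1);
// zero_point = 1.28 / 2.55 * 255 = 128, so float 0.0 maps exactly to integer 128
```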
## Quantization specifics and restrictions

In general, OpenVINO can represent and execute quantized models from different sources. However, the Post-training Optimization Tool (POT)
@@ -1,4 +1,4 @@

# AvgPoolPrecisionPreserved Attribute {#openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved}

The `ngraph::AvgPoolPrecisionPreservedAttribute` class represents the `AvgPoolPrecisionPreserved` attribute.

@@ -1,4 +1,4 @@

# IntervalsAlignment Attribute {#openvino_docs_OV_UG_lpt_IntervalsAlignment}

The `ngraph::IntervalsAlignmentAttribute` class represents the `IntervalsAlignment` attribute.

@@ -1,4 +1,4 @@

# PrecisionPreserved Attribute {#openvino_docs_OV_UG_lpt_PrecisionPreserved}

The `ngraph::PrecisionPreservedAttribute` class represents the `PrecisionPreserved` attribute.

@@ -1,4 +1,4 @@

# Precisions Attribute {#openvino_docs_OV_UG_lpt_Precisions}

The `ngraph::PrecisionsAttribute` class represents the `Precisions` attribute.

@@ -1,4 +1,4 @@

# QuantizationAlignment Attribute {#openvino_docs_OV_UG_lpt_QuantizationAlignment}

The `ngraph::QuantizationAlignmentAttribute` class represents the `QuantizationAlignment` attribute.

@@ -1,4 +1,4 @@

# QuantizationGranularity Attribute {#openvino_docs_OV_UG_lpt_QuantizationGranularity}

The `ngraph::QuantizationAttribute` class represents the `QuantizationGranularity` attribute.
(Image diffs: five raster images replaced by 130 B Git LFS pointers.)
@@ -54,4 +54,4 @@ Attributes usage by transformations:

| IntervalsAlignment | AlignQuantizationIntervals | FakeQuantizeDecompositionTransformation |
| QuantizationAlignment | AlignQuantizationParameters | FakeQuantizeDecompositionTransformation |

> **NOTE**: The same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in the `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different.
(Image diffs: ten raster images replaced by 130 B Git LFS pointers.)
@@ -22,7 +22,7 @@ The table of transformations and used attributes:

| AlignQuantizationIntervals | IntervalsAlignment | PrecisionPreserved |
| AlignQuantizationParameters | QuantizationAlignment | PrecisionPreserved, PerTensorQuantization |

> **NOTE**: The same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, `Precision` attribute instances are created in the `MarkupCanBeQuantized` and `MarkupPrecisions` transformations, but the reasons for their creation are different.

Common markup transformations can be decomposed into simpler utility markup transformations. The order of the markup utility transformations is not important:
* [CreateAttribute](@ref openvino_docs_OV_UG_lpt_CreateAttribute)
@@ -46,4 +46,4 @@ Changes in the example model after main transformation:

- dequantization operations.
* Dequantization operations were moved through the precision-preserved (`concat1` and `concat2`) and quantized (`convolution2`) operations.

> **NOTE**: The left branch (branch #1) does not require per-tensor quantization. As a result, the `fakeQuantize1` output interval is [0, 255]. But the quantized `convolution2` requires per-tensor quantization on the right branch (branch #2). Then all connected `FakeQuantize` interval operations (`fakeQuantize1` and `fakeQuantize2`) are aligned to have per-tensor quantization after the concatenation (`concat2`) operation.
(Image diffs: seven raster images replaced by 130 B Git LFS pointers.)
@@ -1,4 +1,4 @@

# Model Optimizer Usage {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}

@sphinxdirective

@@ -8,19 +8,12 @@

:maxdepth: 1
:hidden:

openvino_docs_model_inputs_outputs
openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques
openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model
openvino_docs_MO_DG_Additional_Optimization_Use_Cases
openvino_docs_MO_DG_FP16_Compression
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ

@endsphinxdirective

@@ -41,7 +34,7 @@ where IR is a pair of files describing the model:

* <code>.bin</code> - Contains the weights and biases binary data.

The OpenVINO IR can be additionally optimized for inference by [Post-training optimization](../../tools/pot/docs/Introduction.md) that applies post-training quantization methods.

> **TIP**: You can also work with Model Optimizer in OpenVINO™ [Deep Learning Workbench (DL Workbench)](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Introduction.html), which is a web-based tool with a GUI for optimizing, fine-tuning, analyzing, visualizing, and comparing the performance of deep learning models.
(Image diff: one raster image replaced by a 130 B Git LFS pointer.)

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11579795c778b28d57cbf080dedc10149500d78cc8b16a74fe2b113c76a94f6b
size 26152

docs/MO_DG/img/FaceNet.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2720b6d3b5e680978a91379c8c37366285299aab31aa139ad9abea8334aae34
size 57687

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a570510808fb2997ee0d51af6f92c5a4a8f8a59dbd275000489f856e89124d5
size 120211

docs/MO_DG/img/NCF_start.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c0389fe34562993b1285f1994dbc878e9547a841c903bf204074ed2219b6bc7
size 323210

(Image diff: one raster image replaced by a 130 B Git LFS pointer.)

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:344b2fcb9b7a180a8d8047e65b4aad3ca2651cfc7d5e1e408710a5a3730fed09
size 20851

docs/MO_DG/img/inception_v1_first_block.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1cc5ead5513c641763b994bea5a08ccaa4a694b3f5239ddd2fe58424b90e5289
size 33741

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78a73487434f4178f111595eb34b344b35af14bd4ccb03e6a5b00509f86e19c5
size 5348

docs/MO_DG/img/inception_v1_std_input.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:14dd247a2b498dfa570e643656e6fd5ba9f7eb6e6fd14f4ada0dda2d4426c943
size 7832

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:939e1aa0d2ba28dab1c930c6271a9f4063fd9f8c539d4713c0bd0f87c34f66c3
size 15020

docs/MO_DG/img/inception_v1_std_output.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e42abc494dce9f04edb6424ff6828b074879869c68d1fbe08f3980b657fecdf8
size 30634

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9859464a5c3ec91e4d6316109f523f48ad8972d2213a6797330e665d45b35c54
size 44117

docs/MO_DG/img/lm_1b.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:062fa64afa0cc43c4a2c2c0442e499b6176c837857222af30bad2fa7c9515420
size 95508

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3812efef32bd7f1bf40b130d5d522bc3df6aebd406bd1186699d214bca856722
size 43721

docs/MO_DG/img/optimizations/groups.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc41098cd8ca3c72f930beab155c981cc6e4e898729bd76438650ba31ebe351a
size 142111

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e232c47e8500f42bd0e1f2b93f94f58e2d59caee149c687be3cdc3e8a5be59a
size 18417

docs/MO_DG/img/optimizations/inception_v4.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a67a86a656c81bc69e024c4911c535cf0937496bdbe69f31b7fee20ee14e474
size 173854

docs/MO_DG/img/optimizations/resnet_269.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f13f2a0424aa53a52d32ad692f574d331bf31c1f1a9e09499df9729912b45f4
size 351773

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2adeca1e3512b9fe7b088a5412ce21592977a1f352a013735537ec92e895dc94
size 15653

docs/MO_DG/img/optimizations/resnet_optimization.svg (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85172ae61de4f592245d0a89605d66ea0b425696868636f9e40276a097a2ba81
size 498110
@@ -2,11 +2,11 @@

Input data for inference can be different from the training dataset and requires additional preprocessing before inference.
To accelerate the whole pipeline, including preprocessing and inference, Model Optimizer provides special parameters such as `--mean_values`,
`--scale_values`, `--reverse_input_channels`, and `--layout`. Based on these parameters, Model Optimizer generates OpenVINO IR with additionally
inserted sub-graphs that perform the defined preprocessing. This preprocessing block can perform mean-scale normalization of input data,
reverting data along the channel dimension, and changing the data layout.
See the following sections for details on the parameters, or the [Overview of Preprocessing API](../../OV_Runtime_UG/preprocessing_overview.md) for the same functionality in OpenVINO Runtime.

## Specifying Layout
@@ -58,10 +58,12 @@ for example, `[0, 1]` or `[-1, 1]`. Sometimes, the mean values (mean images) are

There are two cases of how the input data preprocessing is implemented.
* The input preprocessing operations are a part of a model.

  In this case, the application does not perform a separate preprocessing step: everything is embedded into the model itself. Model Optimizer will generate the OpenVINO IR with the required preprocessing operations, and no `mean` and `scale` parameters are required.
* The input preprocessing operations are not a part of a model and the preprocessing is performed within the application which feeds the model with input data.

  In this case, information about mean/scale values should be provided to Model Optimizer to embed it into the generated OpenVINO IR.

Model Optimizer provides command-line parameters to specify the values: `--mean_values`, `--scale_values`, `--scale`.
Using these parameters, Model Optimizer embeds the corresponding preprocessing block for mean-value normalization of the input data
and optimizes this block so that the preprocessing takes negligible time for inference.
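For example, a hedged command with illustrative ImageNet-style statistics; the model file name and the specific values are assumptions, not requirements of any particular model:

```sh
mo --input_model model.onnx --mean_values [123.675,116.28,103.53] --scale_values [58.395,57.12,57.375]
```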
@@ -75,7 +77,8 @@ mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255

## Reversing Input Channels <a name="when_to_reverse_input_channels"></a>

Sometimes, input images for your application can be of the RGB (or BGR) format while the model is trained on images of the BGR (or RGB) format,
which is the opposite order of color channels. In this case, it is important to preprocess the input images by reverting the color channels before inference.
To embed this preprocessing step into OpenVINO IR, Model Optimizer provides the `--reverse_input_channels` command-line parameter to shuffle the color channels.

The `--reverse_input_channels` parameter can be used to preprocess the model input in the following cases:
* Only one dimension in the input shape has a size equal to 3.

@@ -84,7 +87,7 @@ The `--reverse_input_channels` parameter can be used to preprocess the model inp

Using the `--reverse_input_channels` parameter, Model Optimizer embeds the corresponding preprocessing block for reverting
the input data along the channel dimension and optimizes this block so that the preprocessing takes only negligible time for inference.

For example, the following command launches Model Optimizer for the TensorFlow AlexNet model and embeds the `reverse_input_channel` preprocessing block into OpenVINO IR:

```sh
mo --input_model alexnet.pb --reverse_input_channels
@@ -9,7 +9,7 @@ When evaluating the performance of a model with OpenVINO Runtime, it is required

- Track operations that occur outside OpenVINO Runtime (such as video decoding) separately.

> **NOTE**: Some image pre-processing can be baked into OpenVINO IR and accelerated accordingly. For more information, refer to [Embedding the Pre-processing](Additional_Optimizations.md) and [General Runtime Optimizations](../../optimization_guide/dldt_deployment_optimization_common.md).

## Tip 2: Try to Get Credible Data