Compare commits

...

3 Commits

Author SHA1 Message Date
Eddy Kim
8e464e992e using calloc instead of malloc for deterministic hashing (#16326) 2023-03-20 19:50:14 +04:00
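A hedged illustration of the class of bug this commit title refers to (toy code, not the actual patch): when a whole buffer is fed to a hash but only part of it is written, `malloc` leaves the remaining bytes indeterminate, so the hash varies between runs; `calloc` zero-fills the buffer, making the hash reproducible.

```cpp
#include <cstdlib>
#include <cstring>
#include <cstdint>

// Toy FNV-1a hash over a raw buffer; every byte, including any the
// allocator left uninitialized, feeds into the result.
static uint64_t fnv1a(const unsigned char* data, size_t n) {
    uint64_t h = 1469598103934665603ull;
    for (size_t i = 0; i < n; ++i) {
        h = (h ^ data[i]) * 1099511628211ull;
    }
    return h;
}

int main() {
    const size_t n = 64;
    unsigned char* a = static_cast<unsigned char*>(std::malloc(n));
    std::memcpy(a, "payload", 7);   // bytes 7..63 stay indeterminate
    uint64_t bad = fnv1a(a, n);     // nondeterministic (reading them is UB)

    unsigned char* b = static_cast<unsigned char*>(std::calloc(n, 1));
    std::memcpy(b, "payload", 7);   // bytes 7..63 are guaranteed zero
    uint64_t good = fnv1a(b, n);    // stable across runs

    std::free(a);
    std::free(b);
    return bad == good;             // illustrative only
}
```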
Ilya Lavrenov
d1a7b0e3c0 Releases/2022/3 (#16409)
* Docs: Update the doc on default hint and execution devices property (#14836)

* Docs: Update to LATENCY as default hint
* Docs: Update the doc on execution devices property
* Update auto_device_selection.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* 22.3: remove tbb version check for using tbbbind static library (#15700)

* update symbolic link on uninstall page (#15720)

* Update deployment_simplified.svg (#15681)

* [NormalizeL2] normalization of reduction axes (#15841) (#15879)

* Add test for negative axes, preliminary solution to solve incorrect results

* Normalize axes in operation NormalizeL2

* Add test for negative axes

* Add EOF

* [67541] - face-detection-0205, 0206 issues fixed (incorrect dimensions error) (#14687)

* [CVS-67541] - face-detection-0205, 0206 issues fixed (incorrect dimensions error)
* [CVS-67541] - face-detection-0205, 0206 issues fixed

* Conversion fails for ov::hint::performance_mode with UNDEFINED value (#15903)

* Update ov::hint::performance_hint UNDEFINED value from empty string to "UNDEFINED".
Update benchmark Python version.
Update the description about hint setting within benchmark APP README and help message.

* Drop the redundant changes.

* Supported OpenSUSE 15.3 (#15897) (#15907)

* [DOCS] Structure change for 'AUTO Device Selection' article - post merge fix (#15752)

* aligning with 14750

* Fixed samples build on Debian 10 with cmake 3.13 (#15939)

* Fixed samples build on Debian 10 with cmake 3.13

* Use 2022/3 branches

* Limit setuptools version

* Fixed issues in setupvars.sh (#15884) (#15952)

* Fixed issues with setupvars.sh

* Fixes setupvars realpath error

---------

Co-authored-by: Otoka, Tomasz <tomasz.otoka@intel.com>

* Apivalidator (#15951)

* Improved API validator logic (#15942)

* Fix for apiValidator when more than 1 target needs to be checked (#15950)

* Prevent infinite recursion

* [Snippets] Added matcher_name in ConvertConstantsToScalars pass (#15977)

* Install libtbb2 instead of libtbb12 on U22.04 (#15993)

* Apply Apivalidator to extra TBB libs (#15998)

* [GNA] Changed max layer limit tests to avoid SEH exceptions (#15015) (#15460)

* split test model

* Changed test config

* Set SF for all inputs

* [Transformations] Enable missing runtime info check (#15796) (#15972)

* Add rt info propagation to StridesOptimization

* Enable rt info check for pruning tests

* Fixed clang-format for C API (#16025)

* Port to 2022.3 from master (#16049)

* notebooks update (#16091)

20230302220806

* Update Customize_Model_Optimizer.md (#15687)

Recreating #14062

* fix benchmark_app python to support YES and NO values for -pin parameter (#16042)

* support YES and NO for -pin

* add if property_name == 'AFFINITY'

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>

* [Docs] nv12 changes port to 22.3 (#16115)

Port:
#15370
#16004

add single-plane input information
create single-plane cpp snippet
menu fix
update formatting for sphinx directives
additional snippet fixes
---------
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>

* [DOCS] Port Frontend Extensions and OTX page (#16135)

* [DOCS] Add OTX page to Ecosystem (#16118)

* add otx page

* change ecosystem page

* add ote img

* move ote page to rst

* fix path

* add path

* img test

* otx page

* add docs to ecosystem page

* [DOCS] Fix Frontend Extensions snippets (#16120)

* move fe to rst

* fix code snippets

* add more line breaks

* fix tabsets

* fix link

* fix anchor

* test

* fixing link

* change tab directive

* fix tabs

* align code tabs

* fix link

* fix snippets

* add dlwb to ecosystem

* change ecosystem menu

* exclude fe page

* Port to 2022.3 (#16174)

* Remove setuptools upperbound (#16054)

* Added missed licenses to openvino-dev (#16057)

* Fixed OpenMP + debian package code-path (#16058)

* [CPU] Prevent out of bounds read inside Graph::InferDynamic (#16067)

* Fixed compilation on Debian 11 with gcc 12.2 (#16096)

* Fix for OpenCL

---------

Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Maksim Kutakov <maksim.kutakov@intel.com>

* Docs benchmarks page update port 22.3 (#16187)

changes to benchmarks page to align with theme

* Andreib/2022.3 myriad plugin obs (#16079)

* Changed to OBS firmware

* Changed dependencies settings for new FW

---------

Co-authored-by: Daria Mityagina <daria.mityagina@intel.com>

* port-16085 (#16210)

* 234 update (#16212)

Adding notebook 234-encodec-audio-compression

* [DOCS] Adding 'Scrollbox' - new sphinx directive (#15307)

port https://github.com/openvinotoolkit/openvino/pull/15305

* [DOCS] Updating 'Prerequisites' section in `Configurations for GNA` article - for 22.3 (#16237)

* issue-15090
Add command for installation of prerequisites on Linux.

* DOCS-image-fix port22.3 (#16341)

(#16324)
(#16308)

* Clearing of CustomReplacementRegistry.registry in convert_model() (#15893) (#16347)

* Clearing of CustomReplacementRegistry.registry.

* Added test.

* Fixed clearing of pipeline config params and TF session in convert_model() (#16191) (#16346)

* Fixed pipeline config params clearing.

* Added clearing of TF session. Added tests.

---------

Co-authored-by: Wang Wangwang <wangwang.wang@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Fang Xu <fang.xu@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>
Co-authored-by: Daria Mityagina <daria.mityagina@intel.com>
Co-authored-by: Wang, Yang <yang4.wang@intel.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
Co-authored-by: Otoka, Tomasz <tomasz.otoka@intel.com>
Co-authored-by: Alexandra Sidorova <alexandra.sidorova@intel.com>
Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
Co-authored-by: Haiqi Pan <haiqi.pan@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Maksim Kutakov <maksim.kutakov@intel.com>
Co-authored-by: Andrei-George Boji <andrei-george.boji@intel.com>
Co-authored-by: Anastasiia Pnevskaia <anastasia.popova@intel.com>
2023-03-20 19:47:11 +04:00
Xuejun Zhai
b692afc764 Xuejun/port cache model api (#15637)
* Add new compile model api to support hash model memory (#14543)

* Add new compile_model api for ONNX RUNTIME OV EP

Allow compile_model() accept model/weight data.

* Update minor place

* Cache model if possible

* Compute hash based on model_xml and model_weight

* Update typo

* Change hash key computation for model's weights

* Resolve test case issue

* Use tensor replace blob for hash computation

* Fix hash computation issue and add more test cases

* Fix a build issue caused by data format
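Taken together, the bullets above describe compiling a model from in-memory XML and weights, with the cache keyed by a hash of both buffers. A minimal sketch of how that API is exercised (the cache directory and device name are illustrative; the buffers are placeholders):

```cpp
#include <openvino/openvino.hpp>
#include <string>

int main() {
    ov::Core core;
    // With a cache dir set, the cache key for in-memory models is
    // computed from the model XML string and the weight tensor
    // (empty tensors are ignored for hashing, per #15282).
    core.set_property(ov::cache_dir("model_cache"));

    std::string model_xml = "...";  // IR XML held in memory (placeholder)
    ov::Tensor weights;             // weight blob; may be empty

    // Overload added by #14543: compile directly from memory.
    ov::CompiledModel compiled = core.compile_model(model_xml, weights, "CPU");
    return 0;
}
```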

* Add ov::loaded_from_cache checking for CompileModelLoadFromMemoryTest (#15030)

* Add ov::loaded_from_cache checking for CompileModelLoadFromMemoryTestBase

* Skip gna in skip_tests_config

* Ignore empty tensor for hash calculation (#15282)

* Ignore empty tensor for hash calculation

* Added test

* Fix conflict

* Trigger ci run test for customer_A branch

---------

Co-authored-by: River Li <river.li@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-02-13 15:29:30 +04:00
147 changed files with 2272 additions and 1083 deletions

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -30,6 +32,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/3
jobs:
- job: LinCC

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -37,11 +39,13 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: releases/2022/3
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/3
jobs:
- job: CUDAPlugin_Lin

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -30,6 +32,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: releases/2022/3
jobs:
- job: WinCC

View File

@@ -27,22 +27,19 @@ jobs:
submodules: recursive
lfs: true
- name: Check cmake
run: |
which cmake
cmake --version
- name: Install OpenCL
uses: awalsh128/cache-apt-pkgs-action@v1.2.4
if: runner.os == 'Linux'
with:
packages: ocl-icd-opencl-dev opencl-headers
version: 3.0
- name: CMake
run: |
mkdir build
cd build
cmake -DENABLE_INTEL_MYRIAD_COMMON=OFF -DCMAKE_BUILD_TYPE=Release ..
- name: CMake configure
run: cmake -DENABLE_INTEL_MYRIAD_COMMON=OFF -DCMAKE_BUILD_TYPE=Release -B build
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
id: cpu-cores
- name: Build snippets
run: |
cmake --build . --target ie_docs_snippets -j${{ steps.cpu-cores.outputs.count }}
working-directory: build
run: cmake --build build --target ie_docs_snippets -j${{ steps.cpu-cores.outputs.count }}

View File

@@ -30,7 +30,7 @@ jobs:
python-version: '3.10'
- name: Cache pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('tools/mo/requirements*.txt') }}

View File

@@ -24,7 +24,6 @@ function(set_ci_build_number)
endfunction()
include(features)
include(message)
set_ci_build_number()

View File

@@ -5,60 +5,77 @@
if(WIN32)
set(PROGRAMFILES_ENV "ProgramFiles(X86)")
file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)
set(UWP_SDK_PATH "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64")
message(STATUS "Trying to find apivalidator in: ${UWP_SDK_PATH}")
find_host_program(UWP_API_VALIDATOR
set(WDK_PATHS "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64"
"${PROGRAMFILES}/Windows Kits/10/bin/x64")
message(STATUS "Trying to find apivalidator in: ")
foreach(wdk_path IN LISTS WDK_PATHS)
message(" * ${wdk_path}")
endforeach()
find_host_program(ONECORE_API_VALIDATOR
NAMES apivalidator
PATHS "${UWP_SDK_PATH}"
DOC "ApiValidator for UWP compliance")
PATHS ${WDK_PATHS}
DOC "ApiValidator for OneCore compliance")
if(UWP_API_VALIDATOR)
message(STATUS "Found apivalidator: ${UWP_API_VALIDATOR}")
if(ONECORE_API_VALIDATOR)
message(STATUS "Found apivalidator: ${ONECORE_API_VALIDATOR}")
endif()
endif()
function(_ie_add_api_validator_post_build_step_recursive)
cmake_parse_arguments(API_VALIDATOR "" "TARGET" "" ${ARGN})
list(APPEND API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGET})
set(API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGETS} PARENT_SCOPE)
get_target_property(IS_IMPORTED ${API_VALIDATOR_TARGET} IMPORTED)
if(IS_IMPORTED)
return()
endif()
get_target_property(LIBRARY_TYPE ${API_VALIDATOR_TARGET} TYPE)
if(LIBRARY_TYPE STREQUAL "EXECUTABLE" OR LIBRARY_TYPE STREQUAL "SHARED_LIBRARY")
get_target_property(LINKED_LIBRARIES ${API_VALIDATOR_TARGET} LINK_LIBRARIES)
if(LINKED_LIBRARIES)
foreach(ITEM IN LISTS LINKED_LIBRARIES)
if(NOT TARGET ${ITEM})
continue()
endif()
get_target_property(LIBRARY_TYPE_DEPENDENCY ${ITEM} TYPE)
if(LIBRARY_TYPE_DEPENDENCY STREQUAL "SHARED_LIBRARY")
_ie_add_api_validator_post_build_step_recursive(TARGET ${ITEM})
endif()
endforeach()
endif()
if(LIBRARY_TYPE MATCHES "^(SHARED_LIBRARY|MODULE_LIBRARY|EXECUTABLE)$" AND
NOT ${API_VALIDATOR_TARGET} IN_LIST API_VALIDATOR_TARGETS)
list(APPEND API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGET})
endif()
# keep a list of checked targets to detect cyclic dependencies, which lead to infinite recursion
list(APPEND checked_targets ${API_VALIDATOR_TARGET})
if(NOT LIBRARY_TYPE STREQUAL "INTERFACE_LIBRARY")
get_target_property(LINKED_LIBRARIES ${API_VALIDATOR_TARGET} LINK_LIBRARIES)
else()
set(LINKED_LIBRARIES)
endif()
get_target_property(INTERFACE_LINKED_LIBRARIES ${API_VALIDATOR_TARGET} INTERFACE_LINK_LIBRARIES)
foreach(library IN LISTS LINKED_LIBRARIES INTERFACE_LINKED_LIBRARIES)
if(TARGET "${library}")
get_target_property(orig_library ${library} ALIASED_TARGET)
if(orig_library IN_LIST checked_targets OR library IN_LIST checked_targets)
# in case of cyclic dependencies, we need to skip current target
continue()
endif()
if(TARGET "${orig_library}")
_ie_add_api_validator_post_build_step_recursive(TARGET ${orig_library})
else()
_ie_add_api_validator_post_build_step_recursive(TARGET ${library})
endif()
endif()
endforeach()
set(API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGETS} PARENT_SCOPE)
endfunction()
set(VALIDATED_LIBRARIES "" CACHE INTERNAL "")
set(VALIDATED_TARGETS "" CACHE INTERNAL "")
function(_ov_add_api_validator_post_build_step)
set(UWP_API_VALIDATOR_APIS "${PROGRAMFILES}/Windows Kits/10/build/universalDDIs/x64/UniversalDDIs.xml")
set(UWP_API_VALIDATOR_EXCLUSION "${UWP_SDK_PATH}/BinaryExclusionlist.xml")
find_file(ONECORE_API_VALIDATOR_APIS NAMES UniversalDDIs.xml
PATHS "${PROGRAMFILES}/Windows Kits/10/build/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/universalDDIs/x64"
"${PROGRAMFILES}/Windows Kits/10/build/universalDDIs/x64"
DOC "Path to UniversalDDIs.xml file")
find_file(ONECORE_API_VALIDATOR_EXCLUSION NAMES BinaryExclusionlist.xml
PATHS ${WDK_PATHS}
DOC "Path to BinaryExclusionlist.xml file")
if((NOT UWP_API_VALIDATOR) OR (WINDOWS_STORE OR WINDOWS_PHONE))
if((NOT ONECORE_API_VALIDATOR) OR (WINDOWS_STORE OR WINDOWS_PHONE))
return()
endif()
cmake_parse_arguments(API_VALIDATOR "" "TARGET" "" ${ARGN})
cmake_parse_arguments(API_VALIDATOR "" "TARGET" "EXTRA" ${ARGN})
if(NOT API_VALIDATOR_TARGET)
message(FATAL_ERROR "RunApiValidator requires TARGET to validate!")
@@ -69,74 +86,81 @@ function(_ov_add_api_validator_post_build_step)
endif()
# collect targets
_ie_add_api_validator_post_build_step_recursive(TARGET ${API_VALIDATOR_TARGET})
if (API_VALIDATOR_EXTRA)
foreach(target IN LISTS API_VALIDATOR_EXTRA)
_ie_add_api_validator_post_build_step_recursive(TARGET ${target})
endforeach()
endif()
# remove targets which were tested before
foreach(target IN LISTS API_VALIDATOR_TARGETS)
list(FIND VALIDATED_LIBRARIES ${target} index)
if (NOT index EQUAL -1)
list(APPEND VALIDATED_TARGETS ${target})
endif()
if(TARGET "${target}")
get_target_property(orig_target ${target} ALIASED_TARGET)
list(FIND VALIDATED_LIBRARIES ${orig_target} index)
if (NOT index EQUAL -1)
list(APPEND VALIDATED_TARGETS ${target})
endif()
endif()
endforeach()
foreach(item IN LISTS VALIDATED_TARGETS)
list(REMOVE_ITEM API_VALIDATOR_TARGETS ${item})
endforeach()
list(REMOVE_DUPLICATES API_VALIDATOR_TARGETS)
if(NOT API_VALIDATOR_TARGETS)
return()
endif()
# apply check
macro(api_validator_get_target_name)
get_target_property(IS_IMPORTED ${target} IMPORTED)
get_target_property(is_imported ${target} IMPORTED)
get_target_property(orig_target ${target} ALIASED_TARGET)
if(IS_IMPORTED)
get_target_property(target_location ${target} LOCATION)
get_filename_component(target_name "${target_location}" NAME_WE)
if(is_imported)
get_target_property(imported_configs ${target} IMPORTED_CONFIGURATIONS)
foreach(imported_config RELEASE RELWITHDEBINFO DEBUG)
if(imported_config IN_LIST imported_configs)
get_target_property(target_location ${target} IMPORTED_LOCATION_${imported_config})
get_filename_component(target_name "${target_location}" NAME_WE)
break()
endif()
endforeach()
unset(imported_configs)
elseif(TARGET "${orig_target}")
set(target_name ${orig_target})
set(target_location $<TARGET_FILE:${orig_target}>)
else()
set(target_name ${target})
set(target_location $<TARGET_FILE:${target}>)
endif()
unset(orig_target)
unset(is_imported)
endmacro()
foreach(target IN LISTS API_VALIDATOR_TARGETS)
api_validator_get_target_name()
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.21 AND OV_GENERATOR_MULTI_CONFIG)
set(output_file "${CMAKE_BINARY_DIR}/api_validator/$<CONFIG>/${target_name}.txt")
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.20 AND OV_GENERATOR_MULTI_CONFIG)
set(output_file "${OpenVINO_BINARY_DIR}/api_validator/$<CONFIG>/${target_name}.txt")
else()
set(output_file "${CMAKE_BINARY_DIR}/api_validator/${target_name}.txt")
set(output_file "${OpenVINO_BINARY_DIR}/api_validator/${target_name}.txt")
endif()
add_custom_command(TARGET ${API_VALIDATOR_TARGET} POST_BUILD
COMMAND ${CMAKE_COMMAND} --config $<CONFIG>
-D UWP_API_VALIDATOR=${UWP_API_VALIDATOR}
-D UWP_API_VALIDATOR_TARGET=$<TARGET_FILE:${target}>
-D UWP_API_VALIDATOR_APIS=${UWP_API_VALIDATOR_APIS}
-D UWP_API_VALIDATOR_EXCLUSION=${UWP_API_VALIDATOR_EXCLUSION}
-D UWP_API_VALIDATOR_OUTPUT=${output_file}
list(APPEND post_build_commands
${CMAKE_COMMAND} --config $<CONFIG>
-D ONECORE_API_VALIDATOR=${ONECORE_API_VALIDATOR}
-D ONECORE_API_VALIDATOR_TARGET=${target_location}
-D ONECORE_API_VALIDATOR_APIS=${ONECORE_API_VALIDATOR_APIS}
-D ONECORE_API_VALIDATOR_EXCLUSION=${ONECORE_API_VALIDATOR_EXCLUSION}
-D ONECORE_API_VALIDATOR_OUTPUT=${output_file}
-D CMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}
-P "${IEDevScripts_DIR}/api_validator/api_validator_run.cmake"
BYPRODUCTS ${output_file}
COMMENT "[apiValidator] Check ${target_name} for OneCore compliance"
VERBATIM)
-P "${IEDevScripts_DIR}/api_validator/api_validator_run.cmake")
list(APPEND byproducts_files ${output_file})
unset(target_name)
unset(target_location)
endforeach()
add_custom_command(TARGET ${API_VALIDATOR_TARGET} POST_BUILD
COMMAND ${post_build_commands}
BYPRODUCTS ${byproducts_files}
COMMENT "[apiValidator] Check ${API_VALIDATOR_TARGET} and dependencies for OneCore compliance"
VERBATIM)
# update list of validated libraries
list(APPEND VALIDATED_LIBRARIES ${API_VALIDATOR_TARGETS})
set(VALIDATED_LIBRARIES "${VALIDATED_LIBRARIES}" CACHE INTERNAL "" FORCE)
list(APPEND VALIDATED_TARGETS ${API_VALIDATOR_TARGETS})
set(VALIDATED_TARGETS "${VALIDATED_TARGETS}" CACHE INTERNAL "" FORCE)
endfunction()
#

View File

@@ -4,9 +4,9 @@
cmake_policy(SET CMP0012 NEW)
foreach(var UWP_API_VALIDATOR UWP_API_VALIDATOR_TARGET
UWP_API_VALIDATOR_APIS UWP_API_VALIDATOR_EXCLUSION
UWP_API_VALIDATOR_OUTPUT CMAKE_TOOLCHAIN_FILE)
foreach(var ONECORE_API_VALIDATOR ONECORE_API_VALIDATOR_TARGET
ONECORE_API_VALIDATOR_APIS ONECORE_API_VALIDATOR_EXCLUSION
ONECORE_API_VALIDATOR_OUTPUT CMAKE_TOOLCHAIN_FILE)
if(NOT DEFINED ${var})
message(FATAL_ERROR "Variable ${var} is not defined")
endif()
@@ -14,18 +14,18 @@ endforeach()
# create command
if(NOT EXISTS "${UWP_API_VALIDATOR_APIS}")
message(FATAL_ERROR "${UWP_API_VALIDATOR_APIS} does not exist")
if(NOT EXISTS "${ONECORE_API_VALIDATOR_APIS}")
message(FATAL_ERROR "${ONECORE_API_VALIDATOR_APIS} does not exist")
endif()
set(command "${UWP_API_VALIDATOR}"
-SupportedApiXmlFiles:${UWP_API_VALIDATOR_APIS}
-DriverPackagePath:${UWP_API_VALIDATOR_TARGET})
if(EXISTS "${UWP_API_VALIDATOR_EXCLUSION}")
set(command "${ONECORE_API_VALIDATOR}"
-SupportedApiXmlFiles:${ONECORE_API_VALIDATOR_APIS}
-DriverPackagePath:${ONECORE_API_VALIDATOR_TARGET})
if(EXISTS "${ONECORE_API_VALIDATOR_EXCLUSION}")
list(APPEND command
-BinaryExclusionListXmlFile:${UWP_API_VALIDATOR_EXCLUSION}
-BinaryExclusionListXmlFile:${ONECORE_API_VALIDATOR_EXCLUSION}
-StrictCompliance:TRUE)
set(UWP_HAS_BINARY_EXCLUSION ON)
set(ONECORE_HAS_BINARY_EXCLUSION ON)
endif()
# execute
@@ -36,13 +36,13 @@ execute_process(COMMAND ${command}
RESULT_VARIABLE exit_code
OUTPUT_STRIP_TRAILING_WHITESPACE)
file(WRITE "${UWP_API_VALIDATOR_OUTPUT}" "${output_message}\n\n\n${error_message}")
file(WRITE "${ONECORE_API_VALIDATOR_OUTPUT}" "CMAKE COMMAND: ${command}\n\n\n${output_message}\n\n\n${error_message}")
# post-process output
get_filename_component(name "${UWP_API_VALIDATOR_TARGET}" NAME)
get_filename_component(name "${ONECORE_API_VALIDATOR_TARGET}" NAME)
if(NOT UWP_HAS_BINARY_EXCLUSION)
if(NOT ONECORE_HAS_BINARY_EXCLUSION)
if(CMAKE_TOOLCHAIN_FILE MATCHES "onecoreuap.toolchain.cmake$")
# empty since we compile with static MSVC runtime
else()
@@ -66,7 +66,7 @@ endif()
# write output
if(UWP_HAS_BINARY_EXCLUSION AND NOT exit_code EQUAL 0)
if(ONECORE_HAS_BINARY_EXCLUSION AND NOT exit_code EQUAL 0)
message(FATAL_ERROR "${error_message}")
endif()

View File

@@ -66,6 +66,10 @@ function(add_clang_format_target TARGET_NAME)
continue()
endif()
if(IS_DIRECTORY "${source_file}")
message(FATAL_ERROR "Directory ${source_file} cannot be passed to clang-format")
endif()
file(RELATIVE_PATH source_file_relative "${CMAKE_CURRENT_SOURCE_DIR}" "${source_file}")
set(output_file "${CMAKE_CURRENT_BINARY_DIR}/clang_format/${source_file_relative}.clang")
string(REPLACE ".." "__" output_file "${output_file}")

View File

@@ -166,7 +166,7 @@ macro(ov_add_frontend)
add_library(openvino::frontend::${OV_FRONTEND_NAME} ALIAS ${TARGET_NAME})
endif()
# Shutdown protobuf when unloading the front dynamic library
# Shutdown protobuf when unloading the frontend dynamic library
if(proto_files AND BUILD_SHARED_LIBS)
target_link_libraries(${TARGET_NAME} PRIVATE ov_protobuf_shutdown)
endif()
@@ -201,8 +201,6 @@ macro(ov_add_frontend)
ie_add_vs_version_file(NAME ${TARGET_NAME}
FILEDESCRIPTION ${OV_FRONTEND_FILEDESCRIPTION})
ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})
target_link_libraries(${TARGET_NAME} PUBLIC openvino::runtime)
target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES})
ov_add_library_version(${TARGET_NAME})
@@ -235,10 +233,15 @@ macro(ov_add_frontend)
endif()
add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME}
EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS})
EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS} ${proto_files})
add_dependencies(ov_frontends ${TARGET_NAME})
# must be called after all target_link_libraries
ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})
# installation
if(NOT OV_FRONTEND_SKIP_INSTALL)
if(BUILD_SHARED_LIBS)
# Note:

View File

@@ -1,27 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(UNIX AND ENABLE_ERROR_HIGHLIGHT)
function(message)
string(ASCII 27 ESC)
set(RESET "${ESC}[m")
set(RED "${ESC}[31;1m")
set(YELLOW "${ESC}[33;1m")
list(GET ARGV 0 MessageType)
list(REMOVE_AT ARGV 0)
foreach(arg IN LISTS ARGV)
set(_msg "${_msg}${arg}")
endforeach()
if(MessageType STREQUAL FATAL_ERROR OR MessageType STREQUAL SEND_ERROR)
_message(${MessageType} "${RED}${_msg}${RESET}")
elseif(MessageType STREQUAL WARNING)
_message(${MessageType} "${YELLOW}${_msg}${RESET}")
else()
_message(${MessageType} "${_msg}")
endif()
endfunction()
endif()

View File

@@ -41,8 +41,6 @@ In case SELECTIVE_BUILD is enabled, the SELECTIVE_BUILD_STAT variable should con
Usage: -DSELECTIVE_BUILD=ON -DSELECTIVE_BUILD_STAT=/path/*.csv" OFF
ALLOWED_VALUES ON OFF COLLECT)
ie_option(ENABLE_ERROR_HIGHLIGHT "Highlight errors and warnings during compile time" ON)
ie_option (ENABLE_DOCS "Build docs using Doxygen" OFF)
find_package(PkgConfig QUIET)

View File

@@ -54,6 +54,8 @@ macro(ov_cpack_settings)
NOT item STREQUAL "gna" AND
# myriad is EOL in 2023.0
NOT item STREQUAL "myriad" AND
# don't install Intel OpenMP during debian
NOT item STREQUAL "omp" AND
# even for case of system TBB we have installation rules for wheels packages
# so, need to skip this explicitly
NOT item MATCHES "^tbb(_dev)?$" AND

View File

@@ -40,6 +40,8 @@ macro(ov_cpack_settings)
NOT item STREQUAL "gna" AND
# myriad is EOL in 2023.0
NOT item STREQUAL "myriad" AND
# don't install Intel OpenMP during rpm
NOT item STREQUAL "omp" AND
# even for case of system TBB we have installation rules for wheels packages
# so, need to skip this explicitly
NOT item MATCHES "^tbb(_dev)?$" AND

View File

@@ -6,8 +6,8 @@
:maxdepth: 1
:hidden:
ovtf_integration
ote_documentation
ovtf_integration
ovsa_get_started
openvino_inference_engine_tools_compile_tool_README
openvino_docs_tuning_utilities
@@ -27,6 +27,15 @@ More resources:
* [GitHub](https://github.com/openvinotoolkit/nncf)
* [PyPI](https://pypi.org/project/nncf/)
### OpenVINO™ Training Extensions
A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
More resources:
* [Overview](@ref ote_documentation)
* [GitHub](https://github.com/openvinotoolkit/training_extensions)
* [Documentation](https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html)
### OpenVINO™ Security Add-on
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
@@ -50,6 +59,7 @@ More resources:
* [documentation on GitHub](https://dlstreamer.github.io/index.html)
* [installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
### DL Workbench
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphics user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting [Intel® DevCloud for the Edge](https://software.intel.com/content/www/us/en/develop/tools/devcloud.html) and launching DL Workbench on-line.
@@ -58,12 +68,6 @@ More resources:
* [Docker Hub](https://hub.docker.com/r/openvino/workbench)
* [PyPI](https://pypi.org/project/openvino-workbench/)
### OpenVINO™ Training Extensions (OTE)
A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
More resources:
* [GitHub](https://github.com/openvinotoolkit/training_extensions)
### Computer Vision Annotation Tool (CVAT)
An online, interactive video and image annotation tool for computer vision purposes.

View File

@@ -0,0 +1,40 @@
# OpenVINO™ Training Extensions {#ote_documentation}
@sphinxdirective
OpenVINO™ Training Extensions provide a suite of advanced algorithms to train
Deep Learning models and convert them using the `OpenVINO™
toolkit <https://software.intel.com/en-us/openvino-toolkit>`__ for optimized
inference. It allows you to export and convert the models to the needed format. OpenVINO Training Extensions independently create and train the model. It is open-sourced and available on `GitHub <https://github.com/openvinotoolkit/training_extensions>`__. Read the OpenVINO Training Extensions `documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__ to learn more.
Detailed Workflow
#################
.. image:: ./_static/images/training_extensions_framework.png
1. To start working with OpenVINO Training Extensions, prepare and annotate your dataset, for example, in CVAT.
2. OpenVINO Training Extensions trains the model using the training interface and evaluates the model quality on your dataset using the evaluation and inference interfaces.
.. note::
Prepare a separate dataset, or split the one you have, for more accurate quality evaluation.
3. Once the evaluation results are satisfactory, you can deploy your model or continue optimizing it using NNCF and POT. For more information about these frameworks, go to :doc:`Optimization Guide <openvino_docs_model_optimization_guide>`.
If the results are unsatisfactory, add datasets and repeat the same steps, starting with dataset annotation.
OpenVINO Training Extensions Components
#######################################
- `OpenVINO Training Extensions SDK <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_sdk>`__
- `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_cli>`__
- `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/master/external>`__
Tutorials
#########
`Object Detection <https://github.com/openvinotoolkit/training_extensions/blob/master/ote_cli/notebooks/train.ipynb>`__
@endsphinxdirective

View File

@@ -10,6 +10,11 @@
@endsphinxdirective
This article describes Model Optimizer internals. Altering them may result in application instability, and in case of future changes to the API, lack of backward compatibility.
> **NOTE**: If you want to add support for ONNX or PaddlePaddle operations, or you are not familiar with other extension alternatives in OpenVINO, read [this guide](../../../Extensibility_UG/Intro.md) instead.
<a name="model-optimizer-extensibility"></a>Model Optimizer extensibility mechanism enables support of new operations and custom transformations to generate the optimized intermediate representation (IR) as described in the
[Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../../IR_and_opsets.md). This
mechanism is a core part of Model Optimizer, with a large set of examples showing how to add custom logic to support your model.

View File

@@ -8,11 +8,15 @@
Debugging Auto-Device Plugin <openvino_docs_OV_UG_supported_plugins_AUTO_debugging>
@endsphinxdirective
This article introduces how Automatic Device Selection works and how to use it for inference.
## <a name="how-auto-works"></a> How AUTO Works
.. _how-auto-works:
How AUTO Works
####################
The Automatic Device Selection mode, or AUTO for short, uses a "virtual" or a "proxy" device,
which does not bind to a specific type of hardware, but rather selects the processing unit for inference automatically.
@@ -21,13 +25,14 @@ This way, you can write the application once and deploy it anywhere.
The selection also depends on your performance requirements, defined by the “hints” configuration API, as well as device priority list limitations, if you choose to exclude some hardware from the process.
The logic behind the choice is as follows:
1. Check what supported devices are available.
2. Check precisions of the input model (for detailed information on precisions read more on the `ov::device::capabilities`)
3. Select the highest-priority device capable of supporting the given model, as listed in the table below.
4. If the model's precision is FP32 but there is no device capable of supporting it, offload the model to a device supporting FP16.
The logic behind the choice is as follows:
1. Check what supported devices are available.
2. Check precisions of the input model (for detailed information on precisions read more on the ``ov::device::capabilities``).
3. Select the highest-priority device capable of supporting the given model, as listed in the table below.
4. If the model's precision is FP32 but there is no device capable of supporting it, offload the model to a device supporting FP16.
@sphinxdirective
+----------+------------------------------------------------------+-------------------------------------+
| Device || Supported || Supported |
| Priority || Device || model precision |
@@ -44,120 +49,140 @@ The logic behind the choice is as follows:
| 4 || Intel® CPU | FP32, FP16, INT8, BIN |
| || (e.g. Intel® Core™ i7-1165G7) | |
+----------+------------------------------------------------------+-------------------------------------+
@endsphinxdirective
To put it simply, when loading the model to the first device on the list fails, AUTO will try to load it to the next device in line, until one of them succeeds.
What is important, **AUTO always starts inference with the CPU of the system**, as it provides very low latency and can start inference with no additional delays.
To put it simply, when loading the model to the first device on the list fails, AUTO will try to load it to the next device in line, until one of them succeeds.
What is important, **AUTO starts inference with the CPU of the system by default**, as it provides very low latency and can start inference with no additional delays.
While the CPU is performing inference, AUTO continues to load the model to the device best suited for the purpose and transfers the task to it when ready.
This way, the devices which are much slower in compiling models, GPU being the best example, do not impede inference at its initial stages.
For example, if you use a CPU and a GPU, the first-inference latency of AUTO will be better than that of using GPU alone.
Note that if you choose to exclude CPU from the priority list, it will be unable to support the initial model compilation stage.
![](../img/autoplugin_accelerate.svg)
This mechanism can be easily observed in the [Using AUTO with Benchmark app sample](#using-auto-with-openvino-samples-and-benchmark-app) section, showing how the first-inference latency (the time it takes to compile the model and perform the first inference) is reduced when using AUTO. For example:
```sh
benchmark_app -m ../public/alexnet/FP32/alexnet.xml -d GPU -niter 128
```
```sh
benchmark_app -m ../public/alexnet/FP32/alexnet.xml -d AUTO -niter 128
```
Note that if you choose to exclude CPU from the priority list or disable the initial CPU acceleration feature via ``ov::intel_auto::enable_startup_fallback``, it will be unable to support the initial model compilation stage.
.. image:: _static/images/autoplugin_accelerate.svg
This mechanism can be easily observed in the :ref:`Using AUTO with Benchmark app sample <using-auto-with-openvino-samples-and-benchmark-app>` section, showing how the first-inference latency (the time it takes to compile the model and perform the first inference) is reduced when using AUTO. For example:
.. code-block:: sh
benchmark_app -m ../public/alexnet/FP32/alexnet.xml -d GPU -niter 128
.. code-block:: sh
benchmark_app -m ../public/alexnet/FP32/alexnet.xml -d AUTO -niter 128
@sphinxdirective
.. note::
The longer the process runs, the closer realtime performance will be to that of the best-suited device.
@endsphinxdirective
## Using AUTO
Following the OpenVINO™ naming convention, the Automatic Device Selection mode is assigned the label of “AUTO.” It may be defined with no additional parameters, resulting in defaults being used, or configured further with the following setup options:
Using AUTO
####################
@sphinxdirective
Following the OpenVINO™ naming convention, the Automatic Device Selection mode is assigned the label of "AUTO". It may be defined with no additional parameters, resulting in defaults being used, or configured further with the following setup options:
+--------------------------------+----------------------------------------------------------------------+
| | Property | | Values and Description |
+================================+======================================================================+
| | <device candidate list> | | **Values**: |
| | | | empty |
| | | | `AUTO` |
| | | | `AUTO: <device names>` (comma-separated, no spaces) |
| | | | |
| | | | Lists the devices available for selection. |
| | | | The device sequence will be taken as priority from high to low. |
| | | | If not specified, `AUTO` will be used as default, |
| | | | and all devices will be "viewed" as candidates. |
+--------------------------------+----------------------------------------------------------------------+
| | `ov::device:priorities` | | **Values**: |
| | | | `<device names>` (comma-separated, no spaces) |
| | | | |
| | | | Specifies the devices for AUTO to select. |
| | | | The device sequence will be taken as priority from high to low. |
| | | | This configuration is optional. |
+--------------------------------+----------------------------------------------------------------------+
| | `ov::hint::performance_mode` | | **Values**: |
| | | | `ov::hint::PerformanceMode::LATENCY` |
| | | | `ov::hint::PerformanceMode::THROUGHPUT` |
| | | | `ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT` |
| | | | |
| | | | Specifies the performance option preferred by the application. |
+--------------------------------+----------------------------------------------------------------------+
| | `ov::hint::model_priority` | | **Values**: |
| | | | `ov::hint::Priority::HIGH` |
| | | | `ov::hint::Priority::MEDIUM` |
| | | | `ov::hint::Priority::LOW` |
| | | | |
| | | | Indicates the priority for a model. |
| | | | IMPORTANT: This property is not fully supported yet. |
+--------------------------------+----------------------------------------------------------------------+
@endsphinxdirective
+-----------------------------------------------+----------------------------------------------------------------------+
| | Property | | Values and Description |
+===============================================+======================================================================+
| | <device candidate list> | | **Values**: |
| | | | empty |
| | | | ``AUTO`` |
| | | | ``AUTO: <device names>`` (comma-separated, no spaces) |
| | | | |
| | | | Lists the devices available for selection. |
| | | | The device sequence will be taken as priority from high to low. |
| | | | If not specified, ``AUTO`` will be used as default, |
| | | | and all devices will be "viewed" as candidates. |
+-----------------------------------------------+----------------------------------------------------------------------+
| | ``ov::device::priorities`` | | **Values**: |
| | | | ``<device names>`` (comma-separated, no spaces) |
| | | | |
| | | | Specifies the devices for AUTO to select. |
| | | | The device sequence will be taken as priority from high to low. |
| | | | This configuration is optional. |
+-----------------------------------------------+----------------------------------------------------------------------+
| | ``ov::hint::performance_mode`` | | **Values**: |
| | | | ``ov::hint::PerformanceMode::LATENCY`` |
| | | | ``ov::hint::PerformanceMode::THROUGHPUT`` |
| | | | ``ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT`` |
| | | | |
| | | | Specifies the performance option preferred by the application. |
+-----------------------------------------------+----------------------------------------------------------------------+
| | ``ov::hint::model_priority`` | | **Values**: |
| | | | ``ov::hint::Priority::HIGH`` |
| | | | ``ov::hint::Priority::MEDIUM`` |
| | | | ``ov::hint::Priority::LOW`` |
| | | | |
| | | | Indicates the priority for a model. |
| | | | IMPORTANT: This property is not fully supported yet. |
+-----------------------------------------------+----------------------------------------------------------------------+
| | ``ov::execution_devices`` | | Lists the runtime target devices on which the inferences are being |
| | | | executed. |
| | | | Examples of returning results could be ``(CPU)``(``CPU`` is a |
| | | | temporary device, indicating that CPU is used for acceleration at |
| | | | the model compilation stage), ``CPU``, ``GPU``, ``CPU GPU``, |
| | | | ``GPU.0``, etc. |
+-----------------------------------------------+----------------------------------------------------------------------+
| | ``ov::intel_auto::enable_startup_fallback`` | | **Values**: |
| | | | ``true`` |
| | | | ``false`` |
| | | | |
| | | | Enables/disables CPU as acceleration (or the helper device) in the |
| | | | beginning. The default value is ``true``, indicating that CPU is |
| | | | used as acceleration by default. |
+-----------------------------------------------+----------------------------------------------------------------------+
Inference with AUTO is configured similarly to when device plugins are used:
you compile the model on the plugin with configuration and execute inference.
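As a hedged illustration of that flow (assuming ``#include <openvino/openvino.hpp>``; the model path and hint value are examples, not requirements):

```cpp
ov::Core core;
std::shared_ptr<ov::Model> model = core.read_model("model.xml");
// Compile on the AUTO "virtual" device; CPU startup fallback is on by
// default, so first inferences run on CPU while the target compiles.
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO",
    ov::hint::performance_mode(ov::hint::PerformanceMode::LATENCY));
ov::InferRequest request = compiled_model.create_infer_request();
```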
### Device Candidates and Priority
The device candidate list enables you to customize the priority and limit the choice of devices available to AUTO.
- If <device candidate list> is not specified, AUTO assumes all the devices present in the system can be used.
- If `AUTO` without any device names is specified, AUTO assumes all the devices present in the system can be used, and will load the network to all devices and run inference based on their default priorities, from high to low.
To specify the priority of devices, enter the device names in the priority order (from high to low) in `AUTO: <device names>`, or use the `ov::device:priorities` property.
Device Candidates and Priority
++++++++++++++++++++++++++++++
See the following code for using AUTO and specifying devices:
@sphinxdirective
The device candidate list enables you to customize the priority and limit the choice of devices available to AUTO.
* If <device candidate list> is not specified, AUTO assumes all the devices present in the system can be used.
* If ``AUTO`` without any device names is specified, AUTO assumes all the devices present in the system can be used, and will load the network to all devices and run inference based on their default priorities, from high to low.
To specify the priority of devices, enter the device names in the priority order (from high to low) in ``AUTO: <device names>``, or use the ``ov::device::priorities`` property.
See the following code for using AUTO and specifying devices:
.. tab:: C++
.. doxygensnippet:: docs/snippets/AUTO0.cpp
:language: cpp
:fragment: [part0]
.. doxygensnippet:: docs/snippets/AUTO0.cpp
:language: cpp
:fragment: [part0]
.. tab:: Python
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part0]
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part0]
@endsphinxdirective
Note that OpenVINO Runtime lets you use GPU as an alias for GPU.0 in function calls. More details on enumerating devices can be found in [Working with devices](supported_plugins/Device_Plugins.md).
Note that OpenVINO Runtime lets you use "GPU" as an alias for "GPU.0" in function calls. More details on enumerating devices can be found in :doc:`Working with devices <openvino_docs_OV_UG_Working_with_devices>`.
#### Checking Available Devices
Checking Available Devices
--------------------------
To check what devices are present in the system, you can use Device API, as listed below. For information on how to use it, see [Query device properties and configuration](supported_plugins/config_properties.md).
To check what devices are present in the system, you can use Device API, as listed below. For information on how to use it, see :doc:`Query device properties and configuration <openvino_docs_OV_UG_query_api>`.
@sphinxdirective
.. tab:: C++
.. tab:: C++
.. code-block:: sh
ov::runtime::Core::get_available_devices()
ov::runtime::Core::get_available_devices()
See the Hello Query Device C++ Sample for reference.
@@ -169,19 +194,18 @@ To check what devices are present in the system, you can use Device API, as list
See the Hello Query Device Python Sample for reference.
@endsphinxdirective
#### Excluding Devices from Device Candidate List
Excluding Devices from Device Candidate List
--------------------------------------------
You can also exclude hardware devices from AUTO, for example, to reserve CPU for other jobs. AUTO will not use the device for inference then. To do that, add a minus sign (-) before CPU in `AUTO: <device names>`, as in the following example:
You can also exclude hardware devices from AUTO, for example, to reserve CPU for other jobs. AUTO will not use the device for inference then. To do that, add a minus sign ``(-)`` before CPU in ``AUTO: <device names>``, as in the following example:
@sphinxdirective
.. tab:: C++
.. code-block:: sh
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO:-CPU");
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO:-CPU");
.. tab:: Python
@@ -189,126 +213,156 @@ You can also exclude hardware devices from AUTO, for example, to reserve CPU for
compiled_model = core.compile_model(model=model, device_name="AUTO:-CPU")
@endsphinxdirective
AUTO will then query all available devices and remove CPU from the candidate list.
AUTO will then query all available devices and remove CPU from the candidate list.
Note that if you choose to exclude CPU from device candidate list, CPU will not be able to support the initial model compilation stage. See more information in [How AUTO Works](#how-auto-works).
Note that if you choose to exclude CPU from device candidate list, CPU will not be able to support the initial model compilation stage. See more information in :ref:`How AUTO Works <how-auto-works>`.
### Performance Hints for AUTO
The `ov::hint::performance_mode` property enables you to specify a performance option for AUTO to be more efficient for particular use cases.
> **NOTE**: Currently, the `ov::hint` property is supported by CPU and GPU devices only.
Performance Hints for AUTO
++++++++++++++++++++++++++
#### THROUGHPUT
This option prioritizes high throughput, balancing between latency and power. It is best suited for tasks involving multiple jobs, such as inference of video feeds or large numbers of images.
The ``ov::hint::performance_mode`` property enables you to specify a performance option for AUTO to be more efficient for particular use cases. The default hint for AUTO is ``LATENCY``.
> **NOTE**: If no performance hint is set explicitly, AUTO will set THROUGHPUT for devices that have not set `ov::device::properties`. For example, if you have both a CPU and a GPU in the system, this command `core.compile_model("AUTO", ov::device::properties("CPU", ov::enable_profiling(true)))` will set THROUGHPUT for the GPU only. No hint will be set for the CPU although it's the selected device.
#### LATENCY
LATENCY
^^^^^^^
This option prioritizes low latency, providing short response time for each inference job. It performs best for tasks where inference is required for a single input image, e.g. a medical analysis of an ultrasound scan image. It also fits the tasks of real-time or nearly real-time applications, such as an industrial robot's response to actions in its environment or obstacle avoidance for autonomous vehicles.
@sphinxdirective
.. note::
If no performance hint is set explicitly, AUTO will set LATENCY for devices that have not set ``ov::device::properties``, for example, ``ov::device::properties(<DEVICE_NAME>, ov::hint::performance_mode(ov::hint::LATENCY))``.
.. _cumulative throughput:
@endsphinxdirective
#### CUMULATIVE_THROUGHPUT
While `LATENCY` and `THROUGHPUT` can select one target device with your preferred performance option, the `CUMULATIVE_THROUGHPUT` option enables running inference on multiple devices for higher throughput. With `CUMULATIVE_THROUGHPUT`, AUTO loads the network model to all available devices in the candidate list, and then runs inference on them based on the default or specified priority.
THROUGHPUT
--------------------
CUMULATIVE_THROUGHPUT has similar behavior as [the Multi-Device execution mode (MULTI)](./multi_device.md). The only difference is that CUMULATIVE_THROUGHPUT uses the devices specified by AUTO, which means that it's not mandatory to add devices manually, while with MULTI, you need to specify the devices before inference.
This option prioritizes high throughput, balancing between latency and power. It is best suited for tasks involving multiple jobs, such as inference of video feeds or large numbers of images.
CUMULATIVE_THROUGHPUT
---------------------
While ``LATENCY`` and ``THROUGHPUT`` can select one target device with your preferred performance option, the ``CUMULATIVE_THROUGHPUT`` option enables running inference on multiple devices for higher throughput. With ``CUMULATIVE_THROUGHPUT``, AUTO loads the network model to all available devices in the candidate list, and then runs inference on them based on the default or specified priority.
CUMULATIVE_THROUGHPUT has similar behavior as :doc:`the Multi-Device execution mode (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`. The only difference is that CUMULATIVE_THROUGHPUT uses the devices specified by AUTO, which means that it's not mandatory to add devices manually, while with MULTI, you need to specify the devices before inference.
With the CUMULATIVE_THROUGHPUT option:
- If `AUTO` without any device names is specified, and the system has more than two GPU devices, AUTO will remove CPU from the device candidate list to keep GPU running at full capacity.
- If device priority is specified, AUTO will run inference requests on devices based on the priority. In the following example, AUTO will always try to use GPU first, and then use CPU if GPU is busy:
```sh
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO:GPU,CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT));
```
#### Code Examples
* If ``AUTO`` without any device names is specified, and the system has more than two GPU devices, AUTO will remove CPU from the device candidate list to keep GPU running at full capacity.
* If device priority is specified, AUTO will run inference requests on devices based on the priority. In the following example, AUTO will always try to use GPU first, and then use CPU if GPU is busy:
.. code-block:: sh
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO:GPU,CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT));
Code Examples
--------------------
To enable performance hints for your application, use the following code:
To enable performance hints for your application, use the following code:
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/AUTO3.cpp
:language: cpp
:fragment: [part3]
.. doxygensnippet:: docs/snippets/AUTO3.cpp
:language: cpp
:fragment: [part3]
.. tab:: Python
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part3]
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part3]
@endsphinxdirective
#### Disabling Auto-Batching for THROUGHPUT and CUMULATIVE_THROUGHPUT
Disabling Auto-Batching for THROUGHPUT and CUMULATIVE_THROUGHPUT
----------------------------------------------------------------
The `ov::hint::PerformanceMode::THROUGHPUT` mode and the `ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT` mode will trigger Auto-Batching (for example, for the GPU device) by default. You can disable it by setting `ov::hint::allow_auto_batching(false)`, or change the default timeout value to a large number, e.g. `ov::auto_batch_timeout(1000)`. See [Automatic Batching](./automatic_batching.md) for more details.
The ``ov::hint::PerformanceMode::THROUGHPUT`` mode and the ``ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT`` mode will trigger Auto-Batching (for example, for the GPU device) by default. You can disable it by setting ``ov::hint::allow_auto_batching(false)``, or change the default timeout value to a large number, e.g. ``ov::auto_batch_timeout(1000)``. See :doc:`Automatic Batching <openvino_docs_OV_UG_Automatic_Batching>` for more details.
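A minimal sketch of the two opt-outs described above (assuming ``core`` and ``model`` from the earlier examples; device name and timeout value are illustrative):

```cpp
// Keep the THROUGHPUT hint but disable implicit batching entirely...
ov::CompiledModel no_batching = core.compile_model(model, "AUTO",
    ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT),
    ov::hint::allow_auto_batching(false));

// ...or keep batching but stretch the collection timeout (in ms).
ov::CompiledModel long_timeout = core.compile_model(model, "AUTO",
    ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT),
    ov::auto_batch_timeout(1000));
```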
### Configuring Model Priority
The `ov::hint::model_priority` property enables you to control the priorities of models in the Auto-Device plugin. A high-priority model will be loaded to a supported high-priority device. A lower-priority model will not be loaded to a device that is occupied by a higher-priority model.
Configuring Model Priority
++++++++++++++++++++++++++
The ``ov::hint::model_priority`` property enables you to control the priorities of models in the Auto-Device plugin. A high-priority model will be loaded to a supported high-priority device. A lower-priority model will not be loaded to a device that is occupied by a higher-priority model.
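For instance, a hedged sketch with two hypothetical models, ``model_a`` and ``model_b`` (see also the snippets below):

```cpp
// The HIGH-priority model may claim the best-suited device; the
// LOW-priority model will not be loaded to a device it occupies.
ov::CompiledModel critical = core.compile_model(model_a, "AUTO",
    ov::hint::model_priority(ov::hint::Priority::HIGH));
ov::CompiledModel background = core.compile_model(model_b, "AUTO",
    ov::hint::model_priority(ov::hint::Priority::LOW));
```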
@sphinxdirective
.. tab:: C++
.. doxygensnippet:: docs/snippets/AUTO4.cpp
:language: cpp
:fragment: [part4]
.. doxygensnippet:: docs/snippets/AUTO4.cpp
:language: cpp
:fragment: [part4]
.. tab:: Python
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part4]
@endsphinxdirective
## Configuring Individual Devices and Creating the Auto-Device plugin on Top
Although the methods described above are currently the preferred way to execute inference with AUTO, the following steps can be also used as an alternative. It is currently available as a legacy feature and used if the device candidate list includes Myriad devices, uncapable of utilizing the Performance Hints option.
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part4]
@sphinxdirective
Checking Target Runtime Devices
+++++++++++++++++++++++++++++++
To query the runtime target devices on which the inferences are being executed using AUTO, you can use the ``ov::execution_devices`` property. It must be used with ``get_property``, for example:
.. tab:: C++
.. doxygensnippet:: docs/snippets/AUTO5.cpp
:language: cpp
:fragment: [part5]
.. doxygensnippet:: docs/snippets/AUTO7.cpp
:language: cpp
:fragment: [part7]
.. tab:: Python
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part5]
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part7]
@endsphinxdirective
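A minimal sketch of that query, assuming a ``compiled_model`` obtained from AUTO as in the earlier examples:

```cpp
// Returns e.g. "(CPU)" while the CPU helper covers startup, then the
// real target (e.g. "GPU.0") once its model compilation finishes.
std::vector<std::string> devices =
    compiled_model.get_property(ov::execution_devices);
for (const std::string& device : devices) {
    std::cout << device << std::endl;
}
```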
## <a name="using-auto-with-openvino-samples-and-benchmark-app"></a> Using AUTO with OpenVINO Samples and Benchmark app
Configuring Individual Devices and Creating the Auto-Device plugin on Top
#########################################################################
Although the methods described above are currently the preferred way to execute inference with AUTO, the following steps can be also used as an alternative. It is currently available as a legacy feature and used if the device candidate list includes Myriad devices, incapable of utilizing the Performance Hints option.
.. tab:: C++
.. doxygensnippet:: docs/snippets/AUTO5.cpp
:language: cpp
:fragment: [part5]
.. tab:: Python
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part5]
.. _using-auto-with-openvino-samples-and-benchmark-app:
Using AUTO with OpenVINO Samples and Benchmark app
##################################################
To see how the Auto-Device plugin is used in practice and test its performance, take a look at OpenVINO™ samples. All samples supporting the "-d" command-line option (which stands for "device") will accept the plugin out-of-the-box. The Benchmark Application will be a perfect place to start: it presents the optimal performance of the plugin without the need for additional settings, like the number of requests or CPU threads. To evaluate the AUTO performance, you can use the following commands:
For unlimited device choice:
```sh
benchmark_app -d AUTO -m <model> -i <input> -niter 1000
```
.. code-block:: sh
benchmark_app -d AUTO -m <model> -i <input> -niter 1000
For limited device choice:
```sh
benchmark_app -d AUTO:CPU,GPU,MYRIAD -m <model> -i <input> -niter 1000
```
.. code-block:: sh
For more information, refer to the [C++](../../samples/cpp/benchmark_app/README.md) or [Python](../../tools/benchmark_tool/README.md) version instructions.
benchmark_app -d AUTO:CPU,GPU,MYRIAD -m <model> -i <input> -niter 1000
For more information, refer to the :doc:`C++ <openvino_inference_engine_samples_benchmark_app_README>` or :doc:`Python <openvino_inference_engine_tools_benchmark_tool_README>` version instructions.
@sphinxdirective
.. note::
The default CPU stream is 1 if using “-d AUTO”.
@@ -316,11 +370,13 @@ For more information, refer to the [C++](../../samples/cpp/benchmark_app/README.
   You can use the FP16 IR to work with auto-device.

   No demos are yet fully optimized for AUTO, by means of selecting the most suitable device, using the GPU streams/throttling, and so on.
Additional Resources
####################
- :doc:`Debugging AUTO <openvino_docs_OV_UG_supported_plugins_AUTO_debugging>`
- :doc:`Running on Multiple Devices Simultaneously <openvino_docs_OV_UG_Running_on_multiple_devices>`
- :doc:`Supported Devices <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`
@endsphinxdirective

View File

@@ -222,11 +222,11 @@ The GPU plugin has the following additional preprocessing options:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/gpu/preprocessing_nv12_two_planes.cpp init_preproc
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/gpu/preprocessing_nv12_two_planes.py init_preproc
@endsphinxtab
@endsphinxtabset

View File

@@ -3,8 +3,11 @@
The `ov::RemoteContext` and `ov::RemoteTensor` interface implementation targets the need for memory sharing and
interoperability with existing native APIs, such as OpenCL, Microsoft DirectX, and VAAPI.
They allow you to avoid any memory copy overhead when plugging OpenVINO™ inference
into an existing GPU pipeline. They also enable OpenCL kernels to participate in the pipeline to become
native buffer consumers or producers of the OpenVINO™ inference.
There are two interoperability scenarios supported by the Remote Tensor API:
@@ -23,7 +26,7 @@ and functions that consume or produce native handles directly.
## Context Sharing Between Application and GPU Plugin
GPU plugin classes that implement the `ov::RemoteContext` interface are responsible for context sharing.
Obtaining a context object is the first step in sharing pipeline objects.
The context object of the GPU plugin directly wraps OpenCL context, setting a scope for sharing the
`ov::CompiledModel` and `ov::RemoteTensor` objects. The `ov::RemoteContext` object can be either created on top of
an existing handle from a native API or retrieved from the GPU plugin.
@@ -37,60 +40,49 @@ additional parameter.
To create the `ov::RemoteContext` object for user context, explicitly provide the context to the plugin using the constructor of one
of the `ov::RemoteContext` derived classes.
@sphinxdirective

.. tab:: Linux

   .. tab:: Create from cl_context

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: context_from_cl_context

   .. tab:: Create from cl_queue

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: context_from_cl_queue

   .. tab:: Create from VADisplay

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: context_from_va_display

.. tab:: Windows

   .. tab:: Create from cl_context

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: context_from_cl_context

   .. tab:: Create from cl_queue

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: context_from_cl_queue

   .. tab:: Create from ID3D11Device

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: context_from_d3d_device

@endsphinxdirective
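For orientation, below is a minimal C++ sketch of the `cl_context` case. It is not one of the snippets referenced above; `get_native_cl_context()` and "model.xml" are placeholders for whatever the host application provides.

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>

// Hypothetical helper: returns the OpenCL context already owned by the application.
cl_context get_native_cl_context();

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");

    // Wrap the user-provided cl_context; shared tensors created later
    // must come from this same context.
    ov::intel_gpu::ocl::ClContext gpu_context(core, get_native_cl_context());

    // Compile the model against the shared context so inference runs
    // inside the application's OpenCL environment.
    ov::CompiledModel compiled_model = core.compile_model(model, gpu_context);
    return 0;
}
```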
### Getting RemoteContext from the Plugin
If you do not provide any user context, the plugin uses its default internal context.
@@ -100,21 +92,21 @@ Once the plugin options have been changed, the internal context is replaced by t
To request the current default context of the plugin, use one of the following methods:
@sphinxdirective

.. tab:: Get context from Core

   .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
      :language: cpp
      :fragment: default_context_from_core

.. tab:: Get context from compiled model

   .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
      :language: cpp
      :fragment: default_context_from_model

@endsphinxdirective
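Put together, a minimal sketch of both retrieval paths might look as follows (assuming a single-GPU setup and a placeholder "model.xml"):

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");

    // Option 1: ask the plugin for its current default context.
    auto ctx_from_core = core.get_default_context("GPU").as<ov::intel_gpu::ocl::ClContext>();

    // Option 2: take the context a compiled model was built on.
    auto compiled_model = core.compile_model(model, "GPU");
    auto ctx_from_model = compiled_model.get_context().as<ov::intel_gpu::ocl::ClContext>();

    // The wrapped cl_context handle can be extracted for native code.
    cl_context native_handle = ctx_from_core.get();
    return 0;
}
```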
## Memory Sharing Between Application and GPU Plugin
@@ -126,108 +118,153 @@ of the `ov::RemoteContext` sub-classes.
`ov::intel_gpu::ocl::ClContext` has multiple overloads of `create_tensor` methods, which allow wrapping pre-allocated native handles with the `ov::RemoteTensor`
object or requesting the plugin to allocate specific device memory. For more details, see the code snippets below:
@sphinxdirective

.. tab:: Wrap native handles

   .. tab:: USM pointer

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: wrap_usm_pointer

   .. tab:: cl_mem

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: wrap_cl_mem

   .. tab:: cl::Buffer

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: wrap_cl_buffer

   .. tab:: cl::Image2D

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: wrap_cl_image

   .. tab:: biplanar NV12 surface

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: wrap_nv12_surface

.. tab:: Allocate device memory

   .. tab:: USM host memory

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: allocate_usm_host

   .. tab:: USM device memory

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: allocate_usm_device

   .. tab:: cl::Buffer

      .. doxygensnippet:: docs/snippets/gpu/remote_objects_creation.cpp
         :language: cpp
         :fragment: allocate_cl_buffer

@endsphinxdirective
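As a rough illustration of both approaches, a sketch along the following lines could be used; `get_native_buffer()` and "model.xml" are hypothetical stand-ins for the application's own resources, and a single-input model is assumed.

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>

// Hypothetical helper: a cl_mem buffer pre-allocated by the application.
cl_mem get_native_buffer();

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");
    auto compiled_model = core.compile_model(model, "GPU");
    auto context = compiled_model.get_context().as<ov::intel_gpu::ocl::ClContext>();

    auto input = model->input();

    // Wrap an existing native handle - no memory copy is made.
    auto shared_tensor = context.create_tensor(input.get_element_type(), input.get_shape(), get_native_buffer());

    // Or request the plugin to allocate USM device memory itself.
    auto usm_tensor = context.create_usm_device_tensor(input.get_element_type(), input.get_shape());

    auto infer_request = compiled_model.create_infer_request();
    infer_request.set_tensor(input.get_any_name(), shared_tensor);
    infer_request.infer();
    return 0;
}
```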
The `ov::intel_gpu::ocl::D3DContext` and `ov::intel_gpu::ocl::VAContext` classes are derived from `ov::intel_gpu::ocl::ClContext`.
Therefore, they provide the functionality described above and extend it
to allow creation of `ov::RemoteTensor` objects from `ID3D11Buffer`, `ID3D11Texture2D` pointers or the `VASurfaceID` handle respectively.
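For the VAAPI case, a minimal Linux-only sketch might look like this; `get_va_display()` and `get_decoded_surface()` are hypothetical application-side helpers, and the 480x640 dimensions are example values only.

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/intel_gpu/ocl/va.hpp>

VADisplay get_va_display();        // assumed: provided by the application
VASurfaceID get_decoded_surface(); // assumed: hardware decoder output

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");

    // Wrap the application's VADisplay in a GPU plugin context.
    ov::intel_gpu::ocl::VAContext va_context(core, get_va_display());
    auto compiled_model = core.compile_model(model, va_context);

    // Create a pair of Y/UV remote tensors directly from the VAAPI surface.
    auto nv12_pair = va_context.create_tensor_nv12(480, 640, get_decoded_surface());
    return 0;
}
```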
## Direct NV12 Video Surface Input
To support the direct consumption of a hardware video decoder output, the GPU plugin accepts:

* Two-plane NV12 video surface input - calling the `create_tensor_nv12()` function creates
  a pair of `ov::RemoteTensor` objects, representing the Y and UV planes.
* Single-plane NV12 video surface input - calling the `create_tensor()` function creates one
  `ov::RemoteTensor` object, representing the Y and UV planes at once (Y elements before UV elements).
* NV12 to Grey video surface input conversion - calling the `create_tensor()` function creates one
  `ov::RemoteTensor` object, representing only the Y plane.

To ensure that the plugin generates a correct execution graph, static preprocessing
should be added before model compilation:

@sphinxdirective
.. tab:: two-plane

   .. doxygensnippet:: docs/snippets/gpu/preprocessing_nv12_two_planes.cpp
      :language: cpp
      :fragment: [init_preproc]

.. tab:: single-plane

   .. doxygensnippet:: docs/snippets/gpu/preprocessing_nv12_single_plane.cpp
      :language: cpp
      :fragment: [init_preproc]

.. tab:: NV12 to Grey

   .. doxygensnippet:: docs/snippets/gpu/preprocessing_nv12_to_gray.cpp
      :language: cpp
      :fragment: [init_preproc]

@endsphinxdirective
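With two-plane preprocessing in place, the inference side could then look roughly as follows. This is a sketch, not one of the snippets above: `get_nv12_image()` is a hypothetical helper, the 480x640 dimensions are example values, and the Y-before-UV input ordering is assumed.

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>

// Hypothetical helper: a shared NV12 surface coming from the decoder.
cl::Image2D get_nv12_image();

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml"); // assumed: two-plane NV12 preprocessing applied
    auto compiled_model = core.compile_model(model, "GPU");
    auto context = compiled_model.get_context().as<ov::intel_gpu::ocl::ClContext>();

    // Split the shared surface into Y and UV remote tensors.
    auto nv12_planes = context.create_tensor_nv12(480, 640, get_nv12_image());

    auto input_y  = compiled_model.input(0);  // Y plane input (order assumed)
    auto input_uv = compiled_model.input(1);  // UV plane input

    auto infer_request = compiled_model.create_infer_request();
    infer_request.set_tensor(input_y.get_any_name(), nv12_planes.first);
    infer_request.set_tensor(input_uv.get_any_name(), nv12_planes.second);
    infer_request.infer();
    return 0;
}
```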
Since the `ov::intel_gpu::ocl::ClImage2DTensor` and its derived classes do not support batched surfaces,
if batching and surface sharing are required at the same time,
inputs need to be set via the `ov::InferRequest::set_tensors` method with a vector of shared surfaces for each plane:
@sphinxdirective

.. tab:: Single Batch

   .. tab:: two-plane

      .. doxygensnippet:: docs/snippets/gpu/preprocessing_nv12_two_planes.cpp
         :language: cpp
         :fragment: single_batch

   .. tab:: single-plane

      .. doxygensnippet:: docs/snippets/gpu/preprocessing_nv12_single_plane.cpp
         :language: cpp
         :fragment: single_batch

   .. tab:: NV12 to Grey

      .. doxygensnippet:: docs/snippets/gpu/preprocessing_nv12_to_gray.cpp
         :language: cpp
         :fragment: single_batch

.. tab:: Multiple Batches

   .. tab:: two-plane

      .. doxygensnippet:: docs/snippets/gpu/preprocessing_nv12_two_planes.cpp
         :language: cpp
         :fragment: batched_case

   .. tab:: single-plane

      .. doxygensnippet:: docs/snippets/gpu/preprocessing_nv12_single_plane.cpp
         :language: cpp
         :fragment: batched_case

   .. tab:: NV12 to Grey

      .. doxygensnippet:: docs/snippets/gpu/preprocessing_nv12_to_gray.cpp
         :language: cpp
         :fragment: batched_case

@endsphinxdirective
The I420 color format can be processed in a similar way.
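For the two-plane batched case, the shape of such a call is sketched below; `get_y_tensor()` and `get_uv_tensor()` are hypothetical helpers, and a batch size of 2 with Y-before-UV input ordering is assumed.

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>

// Hypothetical helpers: one shared Y/UV surface pair per batch element.
ov::intel_gpu::ocl::ClImage2DTensor get_y_tensor(size_t batch_idx);
ov::intel_gpu::ocl::ClImage2DTensor get_uv_tensor(size_t batch_idx);

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml"); // assumed: batch size 2, two-plane NV12 preprocessing
    auto compiled_model = core.compile_model(model, "GPU");
    auto infer_request = compiled_model.create_infer_request();

    // One vector of shared surfaces per plane, one entry per batch element.
    std::vector<ov::Tensor> y_tensors  = {get_y_tensor(0), get_y_tensor(1)};
    std::vector<ov::Tensor> uv_tensors = {get_uv_tensor(0), get_uv_tensor(1)};

    auto input_y  = compiled_model.input(0);  // Y plane input (order assumed)
    auto input_uv = compiled_model.input(1);  // UV plane input
    infer_request.set_tensors(input_y.get_any_name(), y_tensors);
    infer_request.set_tensors(input_uv.get_any_name(), uv_tensors);
    infer_request.infer();
    return 0;
}
```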
## Context & Queue Sharing
@@ -242,18 +279,12 @@ This sharing mechanism allows performing pipeline synchronization on the app sid
on waiting for the completion of inference. The pseudo-code may look as follows:
@sphinxdirective

.. dropdown:: Queue and context sharing example

   .. doxygensnippet:: docs/snippets/gpu/queue_sharing.cpp
      :language: cpp
      :fragment: queue_sharing

@endsphinxdirective
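The essential setup for this scenario can be sketched as follows; `get_app_queue()` is a hypothetical helper standing in for the application's own in-order command queue.

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>

// Hypothetical helper: an in-order queue owned by the application (assumed).
cl_command_queue get_app_queue();

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");

    // Wrapping a queue (rather than a bare context) makes the plugin
    // submit its kernels to the application's own command queue.
    ov::intel_gpu::ocl::ClContext shared_queue_context(core, get_app_queue());
    auto compiled_model = core.compile_model(model, shared_queue_context);

    // start_async() only enqueues the work; the application could instead
    // enqueue its own post-processing kernels on the same queue.
    auto infer_request = compiled_model.create_infer_request();
    infer_request.start_async();
    infer_request.wait(); // here we simply wait for completion
    return 0;
}
```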
@@ -282,60 +313,34 @@ For possible low-level properties and their description, refer to the `openvino/
To see pseudo-code of usage examples, refer to the sections below.
@sphinxdirective

.. NOTE::

   For low-level parameter usage examples, see the source code of user-side wrappers from the include files mentioned above.

.. dropdown:: OpenCL Kernel Execution on a Shared Buffer

   This example uses the OpenCL context obtained from a compiled model object.

   .. doxygensnippet:: docs/snippets/gpu/context_sharing.cpp
      :language: cpp
      :fragment: context_sharing_get_from_ov

.. dropdown:: Running GPU Plugin Inference within User-Supplied Shared Context

   .. doxygensnippet:: docs/snippets/gpu/context_sharing.cpp
      :language: cpp
      :fragment: context_sharing_user_handle

.. dropdown:: Direct Consuming of the NV12 VAAPI Video Decoder Surface on Linux

   .. doxygensnippet:: docs/snippets/gpu/context_sharing_va.cpp
      :language: cpp
      :fragment: context_sharing_va

@endsphinxdirective
## See Also

Binary file not shown.

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c4dfe9d6f15c7aa3b1d09461ceb331434985f00431a973b1eb435d4a320e62c
size 1369
oid sha256:c3a3c1a07f533b99e203a674020136b37f08b0c41fb04f09706f929ff1982156
size 1908

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:918ba4cc259c550ca728cf48aa27742a1c4de8801a45875407543b86ad5f5667
size 55795
oid sha256:b91de53f2140ffde8be29292d74d332c25d128f7263a9d218c241c8b0cd1b3d2
size 57178

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3bfae52ffcc1c7eb763786fd3c42c3f473d6636206eef6b66a17756a42aefdd3
size 6773
oid sha256:ac0af3d1c3cbac3d50db778a3a00f360e59489fe33be35d317d9174aefce2183
size 57576

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76f6f176677e7c8612ed6070e4188d64a5f3b456ca67b3bca96d1d80e3356648
size 55939
oid sha256:0fe77cae8fe064d05f1242529dfbaba4f4bdfe8cbb4a0fadb4a4c58d1acd919d
size 57242

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2575daaf490040f5495dab69c836c077591e6f26883bdda46c147becf3e2ac70
size 56306
oid sha256:88e5b333acdc302f98ea452e68ce182046476a6886658e6924523669c95d240f
size 57637

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4cf2d63ae8381dfd328ff1d31a29f49225b49ba772eb344df62c11dde053e5f6
size 44685
oid sha256:5e8d1293354e4842a5cb53686ea85b193fff75f997f0ff265099ea28dcadaf8f
size 45937

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e431d2f5ec6f998a612b11cc0f3b6c6bccf440dfb860aa9bd952e85a968cef7
size 45634
oid sha256:31fdf6441144bad648633b59485a91c19ccfa00fee12b144b28db49cc5e67d10
size 46349

View File

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:64e64059e7416353cfd2ad836a36c12071804addf4fb165f0cf5150aa7658fa4
size 123996

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b3932d0cf0071c629e1013f3e17a9f8abda800eb01c50b3e826a42127e42da7
size 48770

View File

@@ -310,7 +310,7 @@ class Graph {
$(document).ready(function () {
$('#build-graphs-btn').on('click', showModal);
$('.ov-toolkit-benchmark-results').on('click', showModal);
function clickBuildGraphs(graph, networkModels, ietype, platforms, kpis, precisions) {
renderData(graph, networkModels, ietype, platforms, kpis, precisions);

View File

@@ -21,7 +21,6 @@ Benchmarks are available for:
* [Intel® Distribution of OpenVINO™ toolkit](performance_benchmarks_openvino.md).
You can also test performance for your system yourself, following the guide on [getting performance numbers](../MO_DG/prepare_model/Getting_performance_numbers.md).
Performance of a particular application can also be evaluated virtually using [Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/). It is a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. To learn more about it, visit [the website](https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/overview.html) or [create an account](https://www.intel.com/content/www/us/en/forms/idz/devcloud-registration.html?tgt=https://www.intel.com/content/www/us/en/secure/forms/devcloud-enrollment/account-provisioning.html).

View File

@@ -9,89 +9,81 @@
openvino_docs_performance_int8_vs_fp32
Performance Data Spreadsheet (download xlsx) <https://docs.openvino.ai/2022.3/_static/benchmarks_files/OV-2022.3-Performance-Data.xlsx>
@endsphinxdirective
Click the "Benchmark Graphs" button to see the OpenVINO™ benchmark graphs. Select the models, the hardware platforms (CPU SKUs),
the precision, and the performance index from the lists, and click the "Build Graphs" button.

@sphinxdirective

.. button-link:: #
   :class: ov-toolkit-benchmark-results
   :color: primary
   :outline:

   :material-regular:`bar_chart;1.4em` Benchmark Graphs

@endsphinxdirective

Measuring inference performance involves many variables and is extremely use-case and application dependent.
Below are four parameters for measurements, which are key elements to consider for a successful deep learning inference application:

@sphinxdirective

.. tab:: :material-regular:`keyboard_double_arrow_right;1.4em` Throughput

   Measures the number of inferences delivered within a latency threshold (for example, the number of Frames Per Second - FPS). When deploying a system with deep learning inference, select the throughput that delivers the best trade-off between latency and power for the price and performance that meets your requirements.

.. tab:: :material-regular:`attach_money;1.4em` Value

   While throughput is important, what is more critical in edge AI deployments is the performance efficiency or performance-per-cost. Application performance in throughput per dollar of system cost is the best measure of value. The value KPI is calculated as "Throughput measured as inferences per second / price of inference engine". This means that for a 2-socket system, 2x the price of a CPU is used. Prices are as per date of benchmarking and sources can be found as links in the Hardware Platforms (PDF) description below.

.. tab:: :material-regular:`flash_on;1.4em` Efficiency

   System power is a key consideration from the edge to the data center. When selecting deep learning solutions, power efficiency (throughput/watt) is a critical factor to consider. Intel designs provide excellent power efficiency for running deep learning workloads. The efficiency KPI is calculated as "Throughput measured as inferences per second / TDP of inference engine". This means that for a 2-socket system, 2x the power dissipation (TDP) of a CPU is used. TDP values are as per date of benchmarking and sources can be found as links in the Hardware Platforms (PDF) description below.

.. tab:: :material-regular:`hourglass_empty;1.4em` Latency

   This measures the synchronous execution of inference requests and is reported in milliseconds. Each inference request (for example: preprocess, infer, postprocess) is allowed to complete before the next one is started. This performance metric is relevant in usage scenarios where a single image input needs to be acted upon as soon as possible. An example would be the healthcare sector, where medical personnel request analysis of a single ultrasound scanning image, or real-time and near real-time applications, such as an industrial robot's response to actions in its environment or obstacle avoidance for autonomous vehicles.

Platform & Configurations
####################################

For a listing of all platforms and configurations used for testing, refer to the following:

.. button-link:: _static/benchmarks_files/platform_list_22.3.pdf
   :color: primary
   :outline:

   :material-regular:`download;1.5em` Click for Hardware Platforms [PDF]

.. button-link:: _static/benchmarks_files/OV-2022.3-system-info-detailed.xlsx
   :color: primary
   :outline:

   :material-regular:`download;1.5em` Click for Configuration Details [XLSX]

This benchmark setup includes a single machine on which both the benchmark application and the OpenVINO™ installation reside. The presented performance benchmark numbers are based on release 2022.3 of the Intel® Distribution of OpenVINO™ toolkit.

The benchmark application loads the OpenVINO™ Runtime and executes inferences on the specified hardware (CPU, GPU or GNA).
It measures the time spent on actual inference (excluding any pre- or post-processing) and then reports on the inferences per second (or Frames Per Second).

Disclaimers
####################################

Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2022.3.

Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of December 13, 2022 and may not reflect all publicly available updates. See configuration disclosure for details. No product can be absolutely secure.

Performance varies by use, configuration and other factors. Learn more at `www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.

Your costs and results may vary.

Intel optimizations, for Intel compilers or other products, may not optimize to the same degree for non-Intel products.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

@endsphinxdirective

View File

@@ -1,4 +1,4 @@
# Model Accuracy {#openvino_docs_performance_int8_vs_fp32}
The following table presents the absolute accuracy drop, calculated as the accuracy difference between FP32 and INT8 representations of a model on two platforms.

View File

@@ -79,7 +79,7 @@ html_theme = "openvino_sphinx_theme"
html_theme_path = ['_themes']
html_theme_options = {
"navigation_depth": 6,
"navigation_depth": 8,
"show_nav_level": 2,
"use_edit_page_button": True,
"github_url": "https://github.com/openvinotoolkit/openvino",

View File

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:774f53500c6ed360001ca0478c96452c1037fa2c42eb39459d13c836ebcaeee1
size 22041
oid sha256:68d5003431670cea03abc68eba89ffc9c566e08782ae6f5a80dd4a2a20766847
size 21883

View File

@@ -2,34 +2,37 @@
@sphinxdirective

.. note::

   On platforms where Intel® GNA is not enabled in the BIOS, the driver cannot be installed, so the GNA plugin uses the software emulation mode only.

Drivers and Dependencies
########################

Intel® GNA hardware requires a driver to be installed on the system.

.. _gna guide:

Linux
####################

Prerequisites
++++++++++++++++++++

Ensure that make, gcc, and Linux kernel headers are installed. Use the following command to install the required software:

.. code-block:: sh

   sudo apt-get install gcc make linux-headers-generic

Configuration steps
++++++++++++++++++++

1. Download `Intel® GNA driver for Ubuntu Linux 18.04.3 LTS (with HWE Kernel version 5.4+) <https://storage.openvinotoolkit.org/drivers/gna/>`__
2. Run the sample_install.sh script provided in the installation package:

.. code-block:: sh
@@ -54,28 +57,28 @@ To unload the driver:
.. _gna guide windows:
Windows
####################

Intel® GNA driver for Windows is available through Windows Update.

What's Next?
####################

Now you are ready to try out OpenVINO™. You can use the following tutorials to write your applications using Python and C++.

Developing in Python:

* `Start with TensorFlow models with OpenVINO™ <https://docs.openvino.ai/2022.3/notebooks/101-tensorflow-to-openvino-with-output.html>`_
* `Start with ONNX and PyTorch models with OpenVINO™ <https://docs.openvino.ai/2022.3/notebooks/102-pytorch-onnx-to-openvino-with-output.html>`_
* `Start with PaddlePaddle models with OpenVINO™ <https://docs.openvino.ai/2022.3/notebooks/103-paddle-onnx-to-openvino-classification-with-output.html>`_

Developing in C++:

* :doc:`Image Classification Async C++ Sample <openvino_inference_engine_samples_classification_sample_async_README>`
* :doc:`Hello Classification C++ Sample <openvino_inference_engine_samples_hello_classification_README>`
* :doc:`Hello Reshape SSD C++ Sample <openvino_inference_engine_samples_hello_reshape_ssd_README>`
@endsphinxdirective

View File

@@ -10,10 +10,11 @@ See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articl
@sphinxdirective
.. tab:: System Requirements
   | Full requirement listing is available in:
   | `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`_
.. tab:: Processor Notes
@@ -25,8 +26,38 @@ See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articl
.. tab:: Software
* `CMake 3.13 or higher, 64-bit <https://cmake.org/download/>`_
   * `Python 3.7 - 3.10, 64-bit <https://www.python.org/downloads/>`_
   * GCC:

   .. tab:: Ubuntu 18.04

      * GCC 7.5.0

   .. tab:: Ubuntu 20.04

      * GCC 9.3.0

   .. tab:: RHEL 8

      * GCC 8.4.1

   .. tab:: CentOS 7

      * GCC 8.3.1

      Use the following instructions to install it.

      Install GCC 8.3.1 via devtoolset-8:

      .. code-block:: sh

         sudo yum update -y && sudo yum install -y centos-release-scl epel-release
         sudo yum install -y devtoolset-8 git patchelf

      Enable devtoolset-8 and check the current gcc version:

      .. code-block:: sh

         source /opt/rh/devtoolset-8/enable
         gcc -v

@endsphinxdirective

View File

@@ -1,107 +1,142 @@
# Install Intel® Distribution of OpenVINO™ Toolkit from PyPI Repository {#openvino_docs_install_guides_installing_openvino_pip}
@sphinxdirective

You can install both OpenVINO™ Runtime and OpenVINO Development Tools through the PyPI repository. This page provides the main steps for installing OpenVINO Runtime.

.. note::

   From the 2022.1 release, the OpenVINO™ Development Tools can only be installed via PyPI. See :doc:`Install OpenVINO Development Tools <openvino_docs_install_guides_install_dev_tools>` for detailed steps.

Installing OpenVINO Runtime
###########################

For system requirements and troubleshooting, see https://pypi.org/project/openvino/

Step 1. Set Up Python Virtual Environment
+++++++++++++++++++++++++++++++++++++++++

Use a virtual environment to avoid dependency conflicts.

To create a virtual environment, use the following command:

.. tab-set::

   .. tab-item:: Linux and macOS
      :sync: linmac

      .. code-block:: sh

         python3 -m venv openvino_env

   .. tab-item:: Windows
      :sync: win

      .. code-block:: sh

         python -m venv openvino_env

Step 2. Activate Virtual Environment
++++++++++++++++++++++++++++++++++++

.. tab-set::

   .. tab-item:: Linux and macOS
      :sync: linmac

      .. code-block:: sh

         source openvino_env/bin/activate

   .. tab-item:: Windows
      :sync: win

      .. code-block:: sh

         openvino_env\Scripts\activate

.. important::

   The above command must be re-run every time a new command terminal window is opened.

Step 3. Set Up and Update PIP to the Highest Version
++++++++++++++++++++++++++++++++++++++++++++++++++++

Use the following command:

.. code-block:: sh

   python -m pip install --upgrade pip

Step 4. Install the Package
+++++++++++++++++++++++++++

Use the following command:

.. code-block:: sh

   pip install openvino

Step 5. Verify that the Package Is Installed
++++++++++++++++++++++++++++++++++++++++++++

Run the command below:

.. code-block:: sh

   python -c "from openvino.runtime import Core"

If installation was successful, you will not see any error messages (no console output).

Congratulations! You finished installing OpenVINO Runtime. Now you can start exploring OpenVINO's functionality through Jupyter Notebooks and sample applications. See the :ref:`What's Next <whats-next>` section to learn more!

Installing OpenVINO Development Tools
#####################################

OpenVINO Development Tools adds even more functionality to OpenVINO. It provides tools like Model Optimizer, Benchmark Tool, Post-Training Optimization Tool, and Open Model Zoo Downloader. If you install OpenVINO Development Tools, OpenVINO Runtime will also be installed as a dependency, so you don't need to install OpenVINO Runtime separately.

See the :doc:`Install OpenVINO Development Tools <openvino_docs_install_guides_install_dev_tools>` page for step-by-step installation instructions.

.. _whats-next:

What's Next?
####################

Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications! Learn more about how to integrate a model in OpenVINO applications by trying out the following tutorials.

.. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif
   :width: 400

Try the `Python Quick Start Example <https://docs.openvino.ai/2022.3/notebooks/201-vision-monodepth-with-output.html>`__ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.

Get started with Python
+++++++++++++++++++++++

Visit the :doc:`Tutorials <tutorials>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:

* `OpenVINO Python API Tutorial <https://docs.openvino.ai/2022.3/notebooks/002-openvino-api-with-output.html>`__
* `Basic image classification program with Hello Image Classification <https://docs.openvino.ai/2022.3/notebooks/001-hello-world-with-output.html>`__
* `Convert a PyTorch model and use it for image background removal <https://docs.openvino.ai/2022.3/notebooks/205-vision-background-removal-with-output.html>`__

Run OpenVINO on accelerated devices
+++++++++++++++++++++++++++++++++++

OpenVINO Runtime has a plugin architecture that enables you to run inference on multiple devices without rewriting your code. Supported devices include integrated GPUs, discrete GPUs and GNAs. Visit the :doc:`Additional Configurations <openvino_docs_install_guides_configurations_header>` page for instructions on how to configure your hardware devices to work with OpenVINO.

Additional Resources
####################

- Intel® Distribution of OpenVINO™ toolkit home page: https://software.intel.com/en-us/openvino-toolkit
- For IoT Libraries & Code Samples, see `Intel® IoT Developer Kit <https://github.com/intel-iot-devkit>`__.
- `OpenVINO Installation Selector Tool <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html>`__

@endsphinxdirective

View File

@@ -28,7 +28,7 @@ If you have installed OpenVINO Runtime from archive files, you can uninstall it
.. code-block:: sh
rm /opt/intel/openvino_2022
To delete the files:

View File

@@ -8,7 +8,7 @@ repo_owner = "openvinotoolkit"
repo_name = "openvino_notebooks"
artifacts_link = "http://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20230309220806/dist/rst_files/"
blacklisted_extensions = ['.xml', '.bin']

View File

@@ -9,7 +9,7 @@ from pathlib import Path
from bs4 import BeautifulSoup
from sphinx.util import logging
from pydata_sphinx_theme import index_toctree
from .directives.code import DoxygenSnippet, Scrollbox, Nodescrollbox, visit_scrollbox, depart_scrollbox
SPHINX_LOGGER = logging.getLogger(__name__)
@@ -219,4 +219,10 @@ def setup(app):
app.connect('env-before-read-docs', read_doxygen_configs)
app.add_html_theme('openvino_sphinx_theme', theme_path)
rst.directives.register_directive('doxygensnippet', DoxygenSnippet)
rst.directives.register_directive('scrollbox', Scrollbox)
app.add_node(
Nodescrollbox,
html=(visit_scrollbox, depart_scrollbox),
latex=(visit_scrollbox, depart_scrollbox)
)
return {'parallel_read_safe': True, 'parallel_write_safe': True}

View File

@@ -2,7 +2,7 @@ import os.path
from sphinx.directives.code import LiteralInclude, LiteralIncludeReader, container_wrapper
from sphinx.util import logging
from docutils.parsers.rst import Directive, directives
from typing import List, Tuple
from docutils.nodes import Node
from docutils import nodes
@@ -74,3 +74,61 @@ class DoxygenSnippet(LiteralInclude):
return [retnode]
except Exception as exc:
return [document.reporter.warning(exc, line=self.lineno)]
def visit_scrollbox(self, node):
    # Translate the directive options into an inline CSS style string.
    attrs = {}
    style = ""
    if "height" in node:
        # keep only the digits and force a pixel height
        style += "height:" + "".join(c for c in str(node["height"]) if c.isdigit()) + "px!important; "
    if "width" in node:
        # a width given in px stays in px, anything else is treated as a percentage
        width = "".join(c for c in str(node["width"]) if c.isdigit())
        unit = "px; " if "px" in str(node["width"]) else "%;"
        style += "width:" + width + unit
    if "delimiter" in node:
        color = str(node["delimiter-color"]) if "delimiter-color" in node else "#dee2e6"
        style += "border-left:solid " + "".join(c for c in str(node["delimiter"]) if c.isdigit()) + "px " + color + "; "
    attrs["style"] = style
    attrs["class"] = "scrollbox"
    self.body.append(self.starttag(node, "div", **attrs))


def depart_scrollbox(self, node):
    self.body.append("</div>\n")


class Nodescrollbox(nodes.container):
    def create_scrollbox_component(
        rawtext: str = "",
        **attributes,
    ) -> nodes.container:
        node = nodes.container(rawtext, is_div=True, **attributes)
        return node


class Scrollbox(Directive):
    """Wrap the directive content in a scrollable <div class="scrollbox">.

    Example usage in RST:

        .. scrollbox::
           :height: 400px
           :delimiter: 3

           Long content goes here...
    """
    has_content = True
    required_arguments = 0
    optional_arguments = 1
    final_argument_whitespace = True
    option_spec = {
        'name': directives.unchanged,
        'width': directives.length_or_percentage_or_unitless,
        'height': directives.length_or_percentage_or_unitless,
        'style': directives.unchanged,
        'delimiter': directives.length_or_percentage_or_unitless,
        'delimiter-color': directives.unchanged,
    }

    def run(self):
        classes = ['scrollbox', '']
        node = Nodescrollbox("div", rawtext="\n".join(self.content), classes=classes)
        # Copy the presentation options onto the node for visit_scrollbox.
        for option in ('height', 'width', 'delimiter', 'delimiter-color'):
            if option in self.options:
                node[option] = self.options[option]
        self.add_name(node)
        if self.content:
            self.state.nested_parse(self.content, self.content_offset, node)
        return [node]

View File

@@ -55,6 +55,13 @@ body {
border-color: rgb(var(--ost-color-primary));
}
/* Scrollbox Extension */
.scrollbox {
overflow-y:scroll;
height:300px;
}
/* Syntax Highlighting */
code {

View File

@@ -1,13 +1,12 @@
# Supported Models {#openvino_supported_models}

The OpenVINO team continues the effort to support as many models out-of-the-box as possible.
Based on our research and user feedback, we prioritize the most common models and test them
before every release. These models are considered officially supported.

@sphinxdirective

.. button-link:: _static/download/OV_2023_models_supported.pdf
   :color: primary
   :outline:
@@ -18,36 +17,33 @@ before every release. These models are considered officially supported.
| If your model is not included but is similar to those that are, it is still very likely to work.
If your model fails to execute properly there are a few options available:
@endsphinxdirective
* If the model originates from a framework like TensorFlow or PyTorch, OpenVINO™ offers a hybrid solution. The original model can be run without explicit conversion into the OpenVINO format. For more information, see [OpenVINO TensorFlow Integration](https://docs.openvino.ai/latest/ovtf_integration.html).
* You can create a GitHub request for the operation(s) that are missing. These requests are reviewed regularly. You will be informed if and how the request will be accommodated. Additionally, your request may trigger a reply from someone in the community who can help.
* As OpenVINO™ is open source you can enhance it with your own contribution to the GitHub repository. To learn more, see the articles on [OpenVINO Extensibility](https://docs.openvino.ai/latest/openvino_docs_Extensibility_UG_Intro.html).
The following table summarizes the number of models supported by OpenVINO™ in different categories:
@sphinxdirective
+--------------------------------------------+-------------------+
| Model Categories:                          | Number of Models: |
+============================================+===================+
| Object Detection                           | 149               |
+--------------------------------------------+-------------------+
| Instance Segmentation                      | 3                 |
+--------------------------------------------+-------------------+
| Semantic Segmentation                      | 19                |
+--------------------------------------------+-------------------+
| Image Processing, Enhancement              | 16                |
+--------------------------------------------+-------------------+
| Monodepth                                  | 2                 |
+--------------------------------------------+-------------------+
| Colorization                               | 2                 |
+--------------------------------------------+-------------------+
| Behavior / Decision Prediction             | 1                 |
+--------------------------------------------+-------------------+
| Action Recognition                         | 2                 |
+--------------------------------------------+-------------------+
| Time Series Forecasting                    | 1                 |
+--------------------------------------------+-------------------+
| Image Classification                       | 68                |
+--------------------------------------------+-------------------+
@@ -55,14 +51,15 @@ The following table summarizes the number of models supported by OpenVINO™ in
+--------------------------------------------+-------------------+
| Image Classification, Emotion              | 1                 |
+--------------------------------------------+-------------------+
| Image Translation                          | 1                 |
+--------------------------------------------+-------------------+
| Natural Language Processing                | 35                |
+--------------------------------------------+-------------------+
| Text Detection                             | 18                |
+--------------------------------------------+-------------------+
| Audio Enhancement                          | 3                 |
+--------------------------------------------+-------------------+
| Sound Classification                       | 2                 |
+--------------------------------------------+-------------------+

@endsphinxdirective

18
docs/snippets/AUTO7.cpp Normal file
View File

@@ -0,0 +1,18 @@
#include <openvino/openvino.hpp>
int auto7() {
{
//! [part7]
ov::Core core;
// read a network in IR, PaddlePaddle, or ONNX format
std::shared_ptr<ov::Model> model = core.read_model("sample.xml");
// compile a model on AUTO
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO");
// query the runtime target devices on which the inferences are being executed
ov::Any execution_devices = compiled_model.get_property(ov::execution_devices);
//! [part7]
}
return 0;
}

View File

@@ -17,17 +17,12 @@ endif()
file(GLOB SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/*.cpp"
                  "${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp"
                  "${CMAKE_CURRENT_SOURCE_DIR}/src/*.c")
file(GLOB GPU_SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/gpu/*.cpp")

# add GPU snippets if OpenCL has been found
if(TARGET OpenCL::OpenCL)
    list(APPEND SOURCES ${GPU_SOURCES})
endif()
# try to find VA libraries

View File

@@ -0,0 +1,47 @@
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>
#include <openvino/runtime/intel_gpu/properties.hpp>
#include <openvino/core/preprocess/pre_post_process.hpp>
ov::intel_gpu::ocl::ClImage2DTensor get_yuv_tensor();
int main() {
ov::Core core;
auto model = core.read_model("model.xml");
//! [init_preproc]
using namespace ov::preprocess;
auto p = PrePostProcessor(model);
p.input().tensor().set_element_type(ov::element::u8)
.set_color_format(ColorFormat::NV12_SINGLE_PLANE)
.set_memory_type(ov::intel_gpu::memory_type::surface);
p.input().preprocess().convert_color(ov::preprocess::ColorFormat::BGR);
p.input().model().set_layout("NCHW");
auto model_with_preproc = p.build();
//! [init_preproc]
auto compiled_model = core.compile_model(model_with_preproc, "GPU");
auto context = compiled_model.get_context().as<ov::intel_gpu::ocl::ClContext>();
auto infer_request = compiled_model.create_infer_request();
{
//! [single_batch]
auto input_yuv = model_with_preproc->input(0);
ov::intel_gpu::ocl::ClImage2DTensor yuv_tensor = get_yuv_tensor();
infer_request.set_tensor(input_yuv.get_any_name(), yuv_tensor);
infer_request.infer();
//! [single_batch]
}
{
auto yuv_tensor_0 = get_yuv_tensor();
auto yuv_tensor_1 = get_yuv_tensor();
//! [batched_case]
auto input_yuv = model_with_preproc->input(0);
std::vector<ov::Tensor> yuv_tensors = {yuv_tensor_0, yuv_tensor_1};
infer_request.set_tensors(input_yuv.get_any_name(), yuv_tensors);
infer_request.infer();
//! [batched_case]
}
return 0;
}

View File

@@ -0,0 +1,49 @@
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>
#include <openvino/runtime/intel_gpu/properties.hpp>
#include <openvino/core/preprocess/pre_post_process.hpp>
ov::intel_gpu::ocl::ClImage2DTensor get_y_tensor();
ov::intel_gpu::ocl::ClImage2DTensor get_uv_tensor();
int main() {
ov::Core core;
auto model = core.read_model("model.xml");
//! [init_preproc]
using namespace ov::preprocess;
auto p = PrePostProcessor(model);
p.input().tensor().set_element_type(ov::element::u8)
.set_layout("NHWC")
.set_memory_type(ov::intel_gpu::memory_type::surface);
p.input().model().set_layout("NCHW");
auto model_with_preproc = p.build();
//! [init_preproc]
auto compiled_model = core.compile_model(model_with_preproc, "GPU");
auto remote_context = compiled_model.get_context().as<ov::intel_gpu::ocl::ClContext>();
auto input = model->input(0);
auto infer_request = compiled_model.create_infer_request();
{
//! [single_batch]
cl::Image2D img_y_plane;
auto input_y = model_with_preproc->input(0);
auto remote_y_tensor = remote_context.create_tensor(input_y.get_element_type(), input.get_shape(), img_y_plane);
infer_request.set_tensor(input_y.get_any_name(), remote_y_tensor);
infer_request.infer();
//! [single_batch]
}
{
//! [batched_case]
cl::Image2D img_y_plane_0, img_y_plane_1;
auto input_y = model_with_preproc->input(0);
auto remote_y_tensor_0 = remote_context.create_tensor(input_y.get_element_type(), input.get_shape(), img_y_plane_0);
auto remote_y_tensor_1 = remote_context.create_tensor(input_y.get_element_type(), input.get_shape(), img_y_plane_1);
std::vector<ov::Tensor> y_tensors = {remote_y_tensor_0, remote_y_tensor_1};
infer_request.set_tensors(input_y.get_any_name(), y_tensors);
infer_request.infer();
//! [batched_case]
}
return 0;
}

View File

@@ -108,6 +108,17 @@ def part6():
compiled_model = core.compile_model(model=model, device_name="AUTO")
#! [part6]
def part7():
#! [part7]
core = Core()
# read a network in IR, PaddlePaddle, or ONNX format
model = core.read_model(model_path)
# compile a model on AUTO
compiled_model = core.compile_model(model=model, device_name="AUTO")
# query the runtime target devices on which the inferences are being executed
execution_devices = compiled_model.get_property("EXECUTION_DEVICES")
#! [part7]
def main():
part0()
part1()
@@ -115,6 +126,7 @@ def main():
part4()
part5()
part6()
part7()
if __name__ == '__main__':
sys.exit(main())

View File

@@ -133,6 +133,11 @@ Tutorials that explain how to optimize and quantize models with OpenVINO tools.
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `115-async-api <notebooks/115-async-api-with-output.html>`__ | Use Asynchronous Execution to Improve Data Pipelining |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `116-sparsity-optimization <notebooks/116-sparsity-optimization-with-output.html>`__ | Improve performance of sparse Transformer models |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `118-optimize-preprocessing <notebooks/118-optimize-preprocessing-with-output.html>`__ | Improve performance of image preprocessing step |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
.. raw:: html
@@ -200,6 +205,33 @@ Demos that demonstrate inference on a particular model.
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `223-gpt2-text-prediction <notebooks/223-gpt2-text-prediction-with-output.html>`__ | Use GPT-2 to perform text prediction on an input sequence | |n223-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `224-3D-segmentation-point-clouds <notebooks/224-3D-segmentation-point-clouds-with-output.html>`__ | Process point cloud data and run 3D Part Segmentation with OpenVINO | |n224-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `225-stable-diffusion-text-to-image <notebooks/225-stable-diffusion-text-to-image-with-output.html>`__ | Text-to-image generation with Stable Diffusion method | |n225-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `226-yolov7-optimization <notebooks/226-yolov7-optimization-with-output.html>`__ | Optimize YOLOv7 using NNCF PTQ API | |n226-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `227-whisper-subtitles-generation <notebooks/227-whisper-subtitles-generation-with-output.html>`__ | Generate subtitles for video with OpenAI Whisper and OpenVINO | |n227-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `228-clip-zero-shot-image-classification <notebooks/228-clip-zero-shot-image-classification-with-output.html>`__ | Perform Zero-shot Image Classification with CLIP and OpenVINO | |n228-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `229-distilbert-sequence-classification <notebooks/229-distilbert-sequence-classification-with-output.html>`__ | Sequence Classification with OpenVINO | |n229-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `230-yolov8-optimization <notebooks/230-yolov8-optimization-with-output.html>`__ | Optimize YOLOv8 using NNCF PTQ API | |n230-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `231-instruct-pix2pix-image-editing <notebooks/231-instruct-pix2pix-image-editing-with-output.html>`__ | Image editing with InstructPix2Pix | |n231-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `232-clip-language-saliency-map <notebooks/232-clip-language-saliency-map-with-output.html>`__ | Language-Visual Saliency with CLIP and OpenVINO™ | |n232-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `233-blip-visual-language-processing <notebooks/233-blip-visual-language-processing-with-output.html>`__ | Visual Question Answering and Image Captioning using BLIP and OpenVINO™ | |n233-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `234-encodec-audio-compression <notebooks/234-encodec-audio-compression-with-output.html>`__ | Audio compression with EnCodec and OpenVINO™ | |n234-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
.. raw:: html
@@ -243,8 +275,14 @@ Live inference demos that run on a webcam or video files.
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `403-action-recognition-webcam <notebooks/403-action-recognition-webcam-with-output.html>`__ |br| |n403| | Human action recognition with a webcam or video file. | |n403-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `404-style-transfer-webcam <notebooks/404-style-transfer-with-output.html>`__ |br| |n404| | Style Transfer with a webcam or video file | |n404-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `405-paddle-ocr-webcam <notebooks/405-paddle-ocr-webcam-with-output.html>`__ |br| |n405| | OCR with a webcam or video file | |n405-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `406-3D-pose-estimation-webcam <notebooks/406-3D-pose-estimation-with-output.html>`__ |br| |n406| | 3D display of human pose estimation with a webcam or video file | |n406-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `407-person-tracking-webcam <notebooks/407-person-tracking-with-output.html>`__ |br| |n407| | Person tracking with a webcam or video file | |n407-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
.. raw:: html
@@ -385,6 +423,28 @@ Made with `contributors-img <https://contrib.rocks>`__.
:target: https://user-images.githubusercontent.com/18904157/166343139-c6568e50-b856-4066-baef-5cdbd4e8bc18.png
.. |n223-img1| image:: https://user-images.githubusercontent.com/91228207/185105225-0f996b0b-0a3b-4486-872d-364ac6fab68b.png
:target: https://user-images.githubusercontent.com/91228207/185105225-0f996b0b-0a3b-4486-872d-364ac6fab68b.png
.. |n224-img1| image:: https://user-images.githubusercontent.com/91237924/185752178-3882902c-907b-4614-b0e6-ea1de08bf3ef.png
:target: https://user-images.githubusercontent.com/91237924/185752178-3882902c-907b-4614-b0e6-ea1de08bf3ef.png
.. |n225-img1| image:: https://user-images.githubusercontent.com/15709723/200945747-1c584e5c-b3f2-4e43-b1c1-e35fd6edc2c3.png
:target: https://user-images.githubusercontent.com/15709723/200945747-1c584e5c-b3f2-4e43-b1c1-e35fd6edc2c3.png
.. |n226-img1| image:: https://raw.githubusercontent.com/WongKinYiu/yolov7/main/figure/horses_prediction.jpg
:target: https://raw.githubusercontent.com/WongKinYiu/yolov7/main/figure/horses_prediction.jpg
.. |n227-img1| image:: https://user-images.githubusercontent.com/29454499/204548693-1304ef33-c790-490d-8a8b-d5766acb6254.png
:target: https://user-images.githubusercontent.com/29454499/204548693-1304ef33-c790-490d-8a8b-d5766acb6254.png
.. |n228-img1| image:: https://user-images.githubusercontent.com/29454499/207795060-437b42f9-e801-4332-a91f-cc26471e5ba2.png
:target: https://user-images.githubusercontent.com/29454499/207795060-437b42f9-e801-4332-a91f-cc26471e5ba2.png
.. |n229-img1| image:: https://user-images.githubusercontent.com/95271966/206130638-d9847414-357a-4c79-9ca7-76f4ae5a6d7f.png
:target: https://user-images.githubusercontent.com/95271966/206130638-d9847414-357a-4c79-9ca7-76f4ae5a6d7f.png
.. |n230-img1| image:: https://user-images.githubusercontent.com/29454499/212105105-f61c8aab-c1ff-40af-a33f-d0ed1fccc72e.png
:target: https://user-images.githubusercontent.com/29454499/212105105-f61c8aab-c1ff-40af-a33f-d0ed1fccc72e.png
.. |n231-img1| image:: https://user-images.githubusercontent.com/29454499/219943222-d46a2e2d-d348-4259-8431-37cf14727eda.png
:target: https://user-images.githubusercontent.com/29454499/219943222-d46a2e2d-d348-4259-8431-37cf14727eda.png
.. |n232-img1| image:: https://user-images.githubusercontent.com/29454499/218967961-9858efd5-fff2-4eb0-bde9-60852f4b31cb.JPG
:target: https://user-images.githubusercontent.com/29454499/218967961-9858efd5-fff2-4eb0-bde9-60852f4b31cb.JPG
.. |n233-img1| image:: https://user-images.githubusercontent.com/29454499/221933762-4ff32ecb-5e5d-4484-80e1-e9396cb3c511.png
:target: https://user-images.githubusercontent.com/29454499/221933762-4ff32ecb-5e5d-4484-80e1-e9396cb3c511.png
.. |n234-img1| image:: https://github.com/facebookresearch/encodec/raw/main/thumbnail.png
:target: https://github.com/facebookresearch/encodec/raw/main/thumbnail.png
.. |n301-img1| image:: https://user-images.githubusercontent.com/15709723/127779607-8fa34947-1c35-4260-8d04-981c41a2a2cc.png
:target: https://user-images.githubusercontent.com/15709723/127779607-8fa34947-1c35-4260-8d04-981c41a2a2cc.png
.. |n401-img1| image:: https://user-images.githubusercontent.com/4547501/141471665-82b28c86-cf64-4bfe-98b3-c314658f2d96.gif
@@ -393,8 +453,14 @@ Made with `contributors-img <https://contrib.rocks>`__.
:target: https://user-images.githubusercontent.com/4547501/138267961-41d754e7-59db-49f6-b700-63c3a636fad7.gif
.. |n403-img1| image:: https://user-images.githubusercontent.com/10940214/151552326-642d6e49-f5a0-4fc1-bf14-ae3f457e1fec.gif
:target: https://user-images.githubusercontent.com/10940214/151552326-642d6e49-f5a0-4fc1-bf14-ae3f457e1fec.gif
.. |n404-img1| image:: https://user-images.githubusercontent.com/109281183/203772234-f17a0875-b068-43ef-9e77-403462fde1f5.gif
:target: https://user-images.githubusercontent.com/109281183/203772234-f17a0875-b068-43ef-9e77-403462fde1f5.gif
.. |n405-img1| image:: https://raw.githubusercontent.com/yoyowz/classification/master/images/paddleocr.gif
:target: https://raw.githubusercontent.com/yoyowz/classification/master/images/paddleocr.gif
.. |n406-img1| image:: https://user-images.githubusercontent.com/42672437/183292131-576cc05a-a724-472c-8dc9-f6bc092190bf.gif
:target: https://user-images.githubusercontent.com/42672437/183292131-576cc05a-a724-472c-8dc9-f6bc092190bf.gif
.. |n407-img1| image:: https://user-images.githubusercontent.com/91237924/210479548-b70dbbaa-5948-4e49-b48e-6cb6613226da.gif
:target: https://user-images.githubusercontent.com/91237924/210479548-b70dbbaa-5948-4e49-b48e-6cb6613226da.gif
.. |launch-jupyter| image:: https://user-images.githubusercontent.com/15709723/120527271-006fd200-c38f-11eb-9935-2d36d50bab9f.gif
:target: https://user-images.githubusercontent.com/15709723/120527271-006fd200-c38f-11eb-9935-2d36d50bab9f.gif
@@ -457,11 +523,18 @@ Made with `contributors-img <https://contrib.rocks>`__.
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F402-pose-estimation-webcam%2F402-pose-estimation.ipynb
.. |n403| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F403-action-recognition-webcam%2F403-action-recognition-webcam.ipynb
.. |n404| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F404-style-transfer-webcam%2F404-style-transfer.ipynb
.. |n405| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F405-paddle-ocr-webcam%2F405-paddle-ocr-webcam.ipynb
.. |n406| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks.git/main?labpath=notebooks%2F406-3D-pose-estimation-webcam%2F406-3D-pose-estimation.ipynb
.. |n407| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F407-person-tracking-webcam%2F407-person-tracking.ipynb
.. |binder logo| image:: https://mybinder.org/badge_logo.svg
:alt: Binder button
@endsphinxdirective

View File

@@ -150,8 +150,47 @@ elif [ -f /etc/redhat-release ] || grep -q "rhel" /etc/os-release ; then
python3-devel \
`# samples and tools` \
zlib-devel \
gflags-devel \
libva-devel
gflags-devel
elif [ -f /etc/os-release ] && grep -q "SUSE" /etc/os-release ; then
zypper refresh
zypper install -y \
file \
`# build tools` \
cmake \
ccache \
ninja \
scons \
gcc \
gcc-c++ \
make \
`# to determine openvino version via git` \
git \
git-lfs \
`# to build and check pip packages` \
patchelf \
fdupes \
`# to build and check rpm packages` \
rpm-build \
rpmlint \
`# check bash scripts for correctness` \
ShellCheck \
`# main openvino dependencies` \
tbb-devel \
pugixml-devel \
`# GPU plugin dependency` \
libva-devel \
`# OpenCL for GPU` \
ocl-icd-devel \
opencl-cpp-headers \
opencl-headers \
`# python API` \
python39-pip \
python39-setuptools \
python39-devel \
`# samples and tools` \
zlib-devel \
gflags-devel-static \
nlohmann_json-devel
elif [ -f /etc/os-release ] && grep -q "raspbian" /etc/os-release; then
# Raspbian
apt update
@@ -187,8 +226,10 @@ if [ ! "$(printf '%s\n' "$required_cmake_ver" "$current_cmake_ver" | sort -V | h
if command -v apt-get &> /dev/null; then
apt-get install -y --no-install-recommends wget
else
elif command -v yum &> /dev/null; then
yum install -y wget
elif command -v zypper &> /dev/null; then
zypper in -y wget
fi
cmake_install_bin="cmake-${installed_cmake_ver}-linux-${arch}.sh"

View File

@@ -26,6 +26,7 @@ if(NOT TARGET nlohmann_json::nlohmann_json)
if(TARGET nlohmann_json)
# Ubuntu 18.04 case where target 'nlohmann_json' is here, but nlohmann_json_FOUND is OFF
if(NOT TARGET nlohmann_json::nlohmann_json)
set_target_properties(nlohmann_json PROPERTIES IMPORTED_GLOBAL ON)
add_library(nlohmann_json::nlohmann_json ALIAS nlohmann_json)
endif()
set(nlohmann_json_FOUND ON)

View File

@@ -121,7 +121,7 @@ Options:
'throughput' or 'tput': device performance mode will be set to THROUGHPUT.
'cumulative_throughput' or 'ctput': device performance mode will be set to CUMULATIVE_THROUGHPUT.
'latency': device performance mode will be set to LATENCY.
'none': no device performance mode will be set.
'none': device performance mode will be set to UNDEFINED.
When using explicit 'nstreams' or other device-specific options, set the hint to 'none'. (A C++ equivalent is sketched after this option list.)
-api "<sync/async>" Optional (deprecated). Enable Sync/Async API. Default value is "async".
-niter "<integer>" Optional. Number of iterations. If not specified, the number of iterations is calculated depending on a device.
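For readers configuring this programmatically, a minimal C++ sketch of what '-hint none' plus an explicit stream count corresponds to (assuming the 2022.3 property API; the device name and stream count are placeholder values):

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // 'none': disable the high-level performance hint ...
    core.set_property("CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::UNDEFINED));
    // ... so that an explicit device-specific option such as the stream count takes effect
    core.set_property("CPU", ov::num_streams(4));  // 4 is an arbitrary example value
    auto compiled_model = core.compile_model("model.xml", "CPU");
    return 0;
}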

View File

@@ -41,7 +41,7 @@ static const char hint_message[] =
" 'cumulative_throughput' or 'ctput': device performance mode will be set to "
"CUMULATIVE_THROUGHPUT.\n"
" 'latency': device performance mode will be set to LATENCY.\n"
" 'none': no device performance mode will be set.\n"
" 'none': device performance mode will be set to UNDEFINED.\n"
" Using explicit 'nstreams' or other device-specific options, please set hint to "
"'none'";

View File

@@ -15,8 +15,8 @@ endif()
if(NOT TARGET zlib::zlib)
if(PkgConfig_FOUND)
pkg_search_module(zlib QUIET
IMPORTED_TARGET GLOBAL
zlib)
IMPORTED_TARGET GLOBAL
zlib)
if(zlib_FOUND)
add_library(zlib::zlib ALIAS PkgConfig::zlib)
endif()

View File

@@ -22,11 +22,6 @@ list(APPEND shellcheck_skip_list
"${OpenVINO_SOURCE_DIR}/src/bindings/python/tests/test_onnx/model_zoo_preprocess.sh"
"${OpenVINO_SOURCE_DIR}/src/bindings/python/tests_compatibility/test_onnx/model_zoo_preprocess.sh")
if(shellcheck_VERSION VERSION_GREATER_EQUAL 0.7.0)
list(APPEND shellcheck_skip_list
"${OpenVINO_SOURCE_DIR}/scripts/setupvars/setupvars.sh")
endif()
ie_shellcheck_process(DIRECTORY "${OpenVINO_SOURCE_DIR}"
SKIP ${shellcheck_skip_list})

View File

@@ -89,6 +89,7 @@ if [ "$os" == "auto" ] ; then
case $os in
centos7|centos8|rhel8|rhel9.1|\
almalinux8.7|amzn2|\
opensuse-leap15.3| \
fedora34|fedora35|fedora36|fedora37|fedora38|\
raspbian9|debian9|ubuntu18.04|\
raspbian10|debian10|ubuntu20.04|ubuntu20.10|ubuntu21.04|\
@@ -142,7 +143,7 @@ elif [ "$os" == "ubuntu20.04" ] || [ "$os" == "debian10" ] || [ "$os" == "raspbi
[ "$os" == "ubuntu21.10" ] || [ "$os" == "ubuntu22.04" ] || [ "$os" == "debian11" ] || [ "$os" == "raspbian11" ] ||
[ "$os" == "ubuntu22.10" ] || [ "$os" == "debian12" ] || [ "$os" == "raspbian12" ]; then
pkgs_core=(libpugixml1v5)
pkgs_core=(libpugixml1v5 libtbb2)
pkgs_opencv_req=(libgtk-3-0 libgl1)
pkgs_python=(python3 python3-venv python3-pip)
pkgs_dev=(cmake pkg-config g++ gcc libc6-dev libgflags-dev zlib1g-dev nlohmann-json3-dev make curl sudo)
@@ -162,19 +163,15 @@ elif [ "$os" == "ubuntu20.04" ] || [ "$os" == "debian10" ] || [ "$os" == "raspbi
)
if [ "$os" == "debian10" ] || [ "$os" == "raspbian10" ] ; then
pkgs_core=(${pkgs_core[@]} libtbb2)
pkgs_python=(${pkgs_python[@]} libpython3.7)
elif [ "$os" == "ubuntu20.04" ] || [ "$os" == "ubuntu20.10" ] || [ "$os" == "ubuntu21.04" ] ; then
pkgs_core=(${pkgs_core[@]} libtbb2)
pkgs_python=(${pkgs_python[@]} libpython3.8)
pkgs_opencv_opt=(${pkgs_opencv_opt[@]} libavresample4)
elif [ "$os" == "ubuntu21.10" ] ||
[ "$os" == "debian11" ] || [ "$os" == "raspbian11" ] ; then
pkgs_core=(${pkgs_core[@]} libtbb2)
pkgs_python=(${pkgs_python[@]} libpython3.9)
elif [ "$os" == "ubuntu22.04" ] || [ "$os" == "ubuntu22.10" ] ||
[ "$os" == "debian12" ] || [ "$os" == "raspbian12" ] ; then
pkgs_core=(${pkgs_core[@]} libtbb12)
pkgs_python=(${pkgs_python[@]} libpython3.10)
fi
@@ -282,6 +279,11 @@ elif [ "$os" == "centos7" ] || [ "$os" == "centos8" ] ||
pkgs_dev+=(https://download-ib01.fedoraproject.org/pub/epel/9/Everything/$arch/Packages/g/gflags-devel-2.2.2-9.el9.$arch.rpm)
extra_repos+=(https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm)
fi
elif [ "$os" == "opensuse-leap15.3" ] ; then
pkgs_core=(libtbb2 libtbbmalloc2 libpugixml1)
pkgs_gpu=()
pkgs_python=(python39-base python39 python39-venv python39-pip)
pkgs_dev=(cmake pkg-config gcc-c++ gcc gflags-devel-static zlib-devel nlohmann_json-devel make curl sudo)
else
echo "Internal script error: invalid OS (${os}) after check (package selection)" >&2
exit 3
@@ -346,6 +348,14 @@ elif [ "$os" == "centos7" ] || [ "$os" == "centos8" ] ||
yum install $iopt ${pkgs[@]}
elif [ "$os" == "opensuse-leap15.3" ] ; then
[ -z "$interactive" ] && iopt="-y"
[ -n "$dry" ] && iopt="--dry-run"
[ -n "$keepcache" ] && zypper clean --all
zypper ref && zypper in --auto-agree-with-licenses --no-recommends "$iopt" "${pkgs[@]}"
else
echo "Internal script error: invalid OS (${os}) after check (package installation)" >&2
exit 3

View File

@@ -3,7 +3,13 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
SCRIPT_DIR="$( cd "$( dirname "$(realpath "${BASH_SOURCE[0]}")" )" >/dev/null 2>&1 && pwd )"
abs_path () {
path=$(eval echo "$1")
directory=$(dirname "$path")
echo "$(cd "$directory" || exit; pwd -P)/$(basename "$path")";
}
SCRIPT_DIR="$( cd "$( dirname "$(abs_path "${BASH_SOURCE[0]}")" )" >/dev/null 2>&1 && pwd )"
INSTALLDIR="${SCRIPT_DIR}"
export INTEL_OPENVINO_DIR="$INSTALLDIR"
@@ -86,10 +92,12 @@ fi
# OpenCV environment
if [ -f "$INSTALLDIR/opencv/setupvars.sh" ]; then
# shellcheck source=/dev/null
source "$INSTALLDIR/opencv/setupvars.sh"
fi
if [ -f "$INSTALLDIR/extras/opencv/setupvars.sh" ]; then
# shellcheck source=/dev/null
source "$INSTALLDIR/extras/opencv/setupvars.sh"
fi
@@ -104,23 +112,12 @@ MAX_SUPPORTED_PYTHON_VERSION_MINOR="10"
check_python_version () {
if [ -z "$python_version" ]; then
python_version=$(python3 -c 'import sys; print(str(sys.version_info[0])+"."+str(sys.version_info[1]))')
fi
# splitting Python version variable depending on the used shell
if [ -n "$ZSH_VERSION" ]; then
version_arr=(${(@s:.:)python_version})
if [ "${#version_arr[@]}" -ge "2" ]; then
# zsh starts indexing from 1
python_version_major=${version_arr[1]}
python_version_minor=${version_arr[2]}
fi
python_version_major=$( python3 -c 'import sys; print(str(sys.version_info[0]))' )
python_version_minor=$( python3 -c 'import sys; print(str(sys.version_info[1]))' )
python_version="$python_version_major.$python_version_minor"
else
version_arr=(${python_version//./ })
if [ "${#version_arr[@]}" -ge "2" ]; then
python_version_major=${version_arr[0]}
python_version_minor=${version_arr[1]}
fi
python_version_major=$( python3 -c "import sys; print(str(\"${python_version}\".split('.')[0]))" )
python_version_minor=$( python3 -c "import sys; print(str(\"${python_version}\".split('.')[1]))" )
fi
if [ "$PYTHON_VERSION_MAJOR" != "$python_version_major" ] ||

View File

@@ -64,7 +64,7 @@
/**
* @defgroup ov_c_api OpenVINO Runtime C API
* OpenVINO Runtime C API
*
*
* @defgroup ov_base_c_api Basics
* @ingroup ov_c_api
* @brief The basic definitions & interfaces of OpenVINO C API to work with other components
@@ -72,51 +72,51 @@
* @defgroup ov_compiled_model_c_api Compiled Model
* @ingroup ov_c_api
* @brief The operations about compiled model
*
*
* @defgroup ov_core_c_api Core
* @ingroup ov_c_api
* @brief The definitions & operations about core
*
*
* @defgroup ov_dimension_c_api Dimension
* @ingroup ov_c_api
* @brief The definitions & operations about dimension
*
*
* @defgroup ov_infer_request_c_api Infer Request
* @ingroup ov_c_api
* @brief The definitions & operations about infer request
*
*
* @defgroup ov_layout_c_api Layout
* @ingroup ov_c_api
* @brief The definitions & operations about layout
*
*
* @defgroup ov_model_c_api Model
* @ingroup ov_c_api
* @brief The definitions & operations about model
*
*
* @defgroup ov_node_c_api Node
* @ingroup ov_c_api
* @brief The definitions & operations about node
*
*
* @defgroup ov_partial_shape_c_api Partial Shape
* @ingroup ov_c_api
* @brief The definitions & operations about partial shape
*
*
* @defgroup ov_prepostprocess_c_api Pre Post Process
* @ingroup ov_c_api
* @brief The definitions & operations about prepostprocess
*
*
* @defgroup ov_property_c_api Property
* @ingroup ov_c_api
* @brief The definitions & operations about property
*
*
* @defgroup ov_rank_c_api Rank
* @ingroup ov_c_api
* @brief The definitions & operations about rank
*
*
* @defgroup ov_shape_c_api Shape
* @ingroup ov_c_api
* @brief The definitions & operations about shape
*
*
* @defgroup ov_tensor_c_api Tensor
* @ingroup ov_c_api
* @brief The definitions & operations about tensor
@@ -128,33 +128,33 @@
* @brief This enum contains codes for all possible return values of the interface functions
*/
typedef enum {
OK = 0, //!< SUCCESS
OK = 0, //!< SUCCESS
/*
* @brief map exception to C++ interface
*/
GENERAL_ERROR = -1, //!< GENERAL_ERROR
NOT_IMPLEMENTED = -2, //!< NOT_IMPLEMENTED
NETWORK_NOT_LOADED = -3, //!< NETWORK_NOT_LOADED
PARAMETER_MISMATCH = -4, //!< PARAMETER_MISMATCH
NOT_FOUND = -5, //!< NOT_FOUND
OUT_OF_BOUNDS = -6, //!< OUT_OF_BOUNDS
GENERAL_ERROR = -1, //!< GENERAL_ERROR
NOT_IMPLEMENTED = -2, //!< NOT_IMPLEMENTED
NETWORK_NOT_LOADED = -3, //!< NETWORK_NOT_LOADED
PARAMETER_MISMATCH = -4, //!< PARAMETER_MISMATCH
NOT_FOUND = -5, //!< NOT_FOUND
OUT_OF_BOUNDS = -6, //!< OUT_OF_BOUNDS
/*
* @brief an exception not derived from std::exception was thrown
*/
UNEXPECTED = -7, //!< UNEXPECTED
REQUEST_BUSY = -8, //!< REQUEST_BUSY
RESULT_NOT_READY = -9, //!< RESULT_NOT_READY
NOT_ALLOCATED = -10, //!< NOT_ALLOCATED
INFER_NOT_STARTED = -11, //!< INFER_NOT_STARTED
NETWORK_NOT_READ = -12, //!< NETWORK_NOT_READ
INFER_CANCELLED = -13, //!< INFER_CANCELLED
UNEXPECTED = -7, //!< UNEXPECTED
REQUEST_BUSY = -8, //!< REQUEST_BUSY
RESULT_NOT_READY = -9, //!< RESULT_NOT_READY
NOT_ALLOCATED = -10, //!< NOT_ALLOCATED
INFER_NOT_STARTED = -11, //!< INFER_NOT_STARTED
NETWORK_NOT_READ = -12, //!< NETWORK_NOT_READ
INFER_CANCELLED = -13, //!< INFER_CANCELLED
/*
* @brief exception in C wrapper
*/
INVALID_C_PARAM = -14, //!< INVALID_C_PARAM
UNKNOWN_C_ERROR = -15, //!< UNKNOWN_C_ERROR
NOT_IMPLEMENT_C_METHOD = -16, //!< NOT_IMPLEMENT_C_METHOD
UNKNOW_EXCEPTION = -17, //!< UNKNOW_EXCEPTION
INVALID_C_PARAM = -14, //!< INVALID_C_PARAM
UNKNOWN_C_ERROR = -15, //!< UNKNOWN_C_ERROR
NOT_IMPLEMENT_C_METHOD = -16, //!< NOT_IMPLEMENT_C_METHOD
UNKNOW_EXCEPTION = -17, //!< UNKNOW_EXCEPTION
} ov_status_e;
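A small usage sketch (error-handling pattern only; ov_core_create is declared elsewhere in this C API):

ov_core_t* core = NULL;
ov_status_e status = ov_core_create(&core);
if (status != OK) {
    /* react to GENERAL_ERROR, NOT_FOUND, and the other codes above */
}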
/**

View File

@@ -78,8 +78,7 @@ ov_compiled_model_input_by_name(const ov_compiled_model_t* compiled_model,
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_outputs_size(const ov_compiled_model_t* compiled_model,
size_t* size);
ov_compiled_model_outputs_size(const ov_compiled_model_t* compiled_model, size_t* size);
/**
* @brief Get the single const output port of ov_compiled_model_t, which only supports single-output models.
@@ -89,8 +88,7 @@ ov_compiled_model_outputs_size(const ov_compiled_model_t* compiled_model,
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_output(const ov_compiled_model_t* compiled_model,
ov_output_const_port_t** output_port);
ov_compiled_model_output(const ov_compiled_model_t* compiled_model, ov_output_const_port_t** output_port);
/**
* @brief Get a const output port of ov_compiled_model_t by port index.
@@ -126,8 +124,7 @@ ov_compiled_model_output_by_name(const ov_compiled_model_t* compiled_model,
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_get_runtime_model(const ov_compiled_model_t* compiled_model,
ov_model_t** model);
ov_compiled_model_get_runtime_model(const ov_compiled_model_t* compiled_model, ov_model_t** model);
/**
* @brief Creates an inference request object used to infer the compiled model.
@@ -137,8 +134,7 @@ ov_compiled_model_get_runtime_model(const ov_compiled_model_t* compiled_model,
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_create_infer_request(const ov_compiled_model_t* compiled_model,
ov_infer_request_t** infer_request);
ov_compiled_model_create_infer_request(const ov_compiled_model_t* compiled_model, ov_infer_request_t** infer_request);
/**
* @brief Sets properties for a device, acceptable keys can be found in ov_property_key_xxx.
@@ -173,8 +169,7 @@ ov_compiled_model_get_property(const ov_compiled_model_t* compiled_model,
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_export_model(const ov_compiled_model_t* compiled_model,
const char* export_model_path);
ov_compiled_model_export_model(const ov_compiled_model_t* compiled_model, const char* export_model_path);
/**
* @brief Release the memory allocated by ov_compiled_model_t.

View File

@@ -60,8 +60,8 @@ typedef struct {
* @brief Represent all available devices.
*/
typedef struct {
char** devices; //!< devices' name
size_t size; //!< devices' number
char** devices; //!< devices' name
size_t size; //!< devices' number
} ov_available_devices_t;
/**
@@ -141,10 +141,7 @@ ov_core_free(ov_core_t* core);
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_read_model(const ov_core_t* core,
const char* model_path,
const char* bin_path,
ov_model_t** model);
ov_core_read_model(const ov_core_t* core, const char* model_path, const char* bin_path, ov_model_t** model);
#ifdef OPENVINO_ENABLE_UNICODE_PATH_SUPPORT
/**
@@ -281,10 +278,7 @@ ov_core_set_property(const ov_core_t* core, const char* device_name, ...);
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_get_property(const ov_core_t* core,
const char* device_name,
const char* property_key,
char** property_value);
ov_core_get_property(const ov_core_t* core, const char* device_name, const char* property_key, char** property_value);
/**
* @brief Returns devices available for inference.
@@ -334,9 +328,7 @@ ov_core_import_model(const ov_core_t* core,
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_get_versions_by_device_name(const ov_core_t* core,
const char* device_name,
ov_core_version_list_t* versions);
ov_core_get_versions_by_device_name(const ov_core_t* core, const char* device_name, ov_core_version_list_t* versions);
/**
* @brief Releases memory occupied by ov_core_version_list_t.

View File

@@ -27,8 +27,8 @@ typedef struct ov_infer_request ov_infer_request_t;
* @brief Completion callback definition about the function and args
*/
typedef struct {
void(OPENVINO_C_API_CALLBACK* callback_func)(void* args); //!< The callback func
void* args; //!< The args of callback func
void(OPENVINO_C_API_CALLBACK* callback_func)(void* args); //!< The callback func
void* args; //!< The args of callback func
} ov_callback_t;
/**
@@ -37,11 +37,11 @@ typedef struct {
* @brief Store profiling info data
*/
typedef struct {
enum Status { //!< Defines the general status of a node.
NOT_RUN, //!< A node is not executed.
OPTIMIZED_OUT, //!< A node is optimized out during graph optimization phase.
EXECUTED //!< A node is executed.
} status; //!< status
enum Status { //!< Defines the general status of a node.
NOT_RUN, //!< A node is not executed.
OPTIMIZED_OUT, //!< A node is optimized out during graph optimization phase.
EXECUTED //!< A node is executed.
} status; //!< status
int64_t real_time; //!< The absolute time, in microseconds, that the node ran (in total).
int64_t cpu_time; //!< The net host CPU time that the node ran.
const char* node_name; //!< Name of a node.
@@ -55,8 +55,8 @@ typedef struct {
* @brief A list of profiling info data
*/
typedef struct {
ov_profiling_info_t* profiling_infos; //!< The list of ov_profilling_info_t
size_t size; //!< The list size
ov_profiling_info_t* profiling_infos; //!< The list of ov_profilling_info_t
size_t size; //!< The list size
} ov_profiling_info_list_t;
/**
@@ -68,9 +68,7 @@ typedef struct {
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_set_tensor(ov_infer_request_t* infer_request,
const char* tensor_name,
const ov_tensor_t* tensor);
ov_infer_request_set_tensor(ov_infer_request_t* infer_request, const char* tensor_name, const ov_tensor_t* tensor);
/**
* @brief Set an input/output tensor to infer request for the port.
@@ -157,9 +155,7 @@ ov_infer_request_set_output_tensor(ov_infer_request_t* infer_request, const ov_t
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_get_tensor(const ov_infer_request_t* infer_request,
const char* tensor_name,
ov_tensor_t** tensor);
ov_infer_request_get_tensor(const ov_infer_request_t* infer_request, const char* tensor_name, ov_tensor_t** tensor);
/**
* @brief Get an input/output tensor by const port.
@@ -209,8 +205,7 @@ ov_infer_request_get_input_tensor_by_index(const ov_infer_request_t* infer_reque
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_get_input_tensor(const ov_infer_request_t* infer_request,
ov_tensor_t** tensor);
ov_infer_request_get_input_tensor(const ov_infer_request_t* infer_request, ov_tensor_t** tensor);
/**
* @brief Get an output tensor by the index of output tensor.
@@ -235,8 +230,7 @@ ov_infer_request_get_output_tensor_by_index(const ov_infer_request_t* infer_requ
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_get_output_tensor(const ov_infer_request_t* infer_request,
ov_tensor_t** tensor);
ov_infer_request_get_output_tensor(const ov_infer_request_t* infer_request, ov_tensor_t** tensor);
/**
* @brief Infer specified input(s) in synchronous mode.

View File

@@ -48,9 +48,7 @@ ov_model_const_input(const ov_model_t* model, ov_output_const_port_t** input_por
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_model_const_input_by_name(const ov_model_t* model,
const char* tensor_name,
ov_output_const_port_t** input_port);
ov_model_const_input_by_name(const ov_model_t* model, const char* tensor_name, ov_output_const_port_t** input_port);
/**
* @brief Get a const input port of ov_model_t by port index.
@@ -61,9 +59,7 @@ ov_model_const_input_by_name(const ov_model_t* model,
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_model_const_input_by_index(const ov_model_t* model,
const size_t index,
ov_output_const_port_t** input_port);
ov_model_const_input_by_index(const ov_model_t* model, const size_t index, ov_output_const_port_t** input_port);
/**
* @brief Get the single input port of ov_model_t, which only supports single-input models.
@@ -84,9 +80,7 @@ ov_model_input(const ov_model_t* model, ov_output_port_t** input_port);
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_model_input_by_name(const ov_model_t* model,
const char* tensor_name,
ov_output_port_t** input_port);
ov_model_input_by_name(const ov_model_t* model, const char* tensor_name, ov_output_port_t** input_port);
/**
* @brief Get an input port of ov_model_t by port index.
@@ -129,9 +123,7 @@ ov_model_const_output_by_index(const ov_model_t* model, const size_t index, ov_o
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_model_const_output_by_name(const ov_model_t* model,
const char* tensor_name,
ov_output_const_port_t** output_port);
ov_model_const_output_by_name(const ov_model_t* model, const char* tensor_name, ov_output_const_port_t** output_port);
/**
* @brief Get the single output port of ov_model_t, which only supports single-output models.

View File

@@ -83,7 +83,7 @@ ov_port_get_element_type(const ov_output_const_port_t* port, ov_element_type_e*
* @ingroup ov_node_c_api
* @param port The pointer to the instance of the ov_output_port_t to free.
*/
OPENVINO_C_API(void)
OPENVINO_C_API(void)
ov_output_port_free(ov_output_port_t* port);
/**

View File

@@ -30,8 +30,8 @@
* A structure that lets the user initialize an ov_partial_shape_t
*/
typedef struct ov_partial_shape {
ov_rank_t rank; //!< The rank
ov_dimension_t* dims; //!< The dimension
ov_rank_t rank; //!< The rank
ov_dimension_t* dims; //!< The dimension
} ov_partial_shape_t;
/**
@@ -47,9 +47,7 @@ typedef struct ov_partial_shape {
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_partial_shape_create(const int64_t rank,
const ov_dimension_t* dims,
ov_partial_shape_t* partial_shape_obj);
ov_partial_shape_create(const int64_t rank, const ov_dimension_t* dims, ov_partial_shape_t* partial_shape_obj);
/**
* @brief Initialize a partial shape with dynamic rank and dynamic dimensions.
@@ -81,9 +79,7 @@ ov_partial_shape_create_dynamic(const ov_rank_t rank,
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_partial_shape_create_static(const int64_t rank,
const int64_t* dims,
ov_partial_shape_t* partial_shape_obj);
ov_partial_shape_create_static(const int64_t rank, const int64_t* dims, ov_partial_shape_t* partial_shape_obj);
/**
* @brief Release internal memory allocated in partial shape.

View File

@@ -100,8 +100,7 @@ typedef enum {
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_create(const ov_model_t* model,
ov_preprocess_prepostprocessor_t** preprocess);
ov_preprocess_prepostprocessor_create(const ov_model_t* model, ov_preprocess_prepostprocessor_t** preprocess);
/**
* @brief Release the memory allocated by ov_preprocess_prepostprocessor_t.
@@ -119,9 +118,8 @@ ov_preprocess_prepostprocessor_free(ov_preprocess_prepostprocessor_t* preprocess
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_get_input_info(
const ov_preprocess_prepostprocessor_t* preprocess,
ov_preprocess_input_info_t** preprocess_input_info);
ov_preprocess_prepostprocessor_get_input_info(const ov_preprocess_prepostprocessor_t* preprocess,
ov_preprocess_input_info_t** preprocess_input_info);
/**
* @brief Get the input info of ov_preprocess_prepostprocessor_t instance by tensor name.
@@ -131,11 +129,10 @@ ov_preprocess_prepostprocessor_get_input_info(
* @param preprocess_input_info A pointer to the ov_preprocess_input_info_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_get_input_info_by_name(
const ov_preprocess_prepostprocessor_t* preprocess,
const char* tensor_name,
ov_preprocess_input_info_t** preprocess_input_info);
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_get_input_info_by_name(const ov_preprocess_prepostprocessor_t* preprocess,
const char* tensor_name,
ov_preprocess_input_info_t** preprocess_input_info);
/**
* @brief Get the input info of ov_preprocess_prepostprocessor_t instance by tensor index.
@@ -146,10 +143,9 @@ ov_preprocess_prepostprocessor_get_input_info_by_name(
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_get_input_info_by_index(
const ov_preprocess_prepostprocessor_t* preprocess,
const size_t tensor_index,
ov_preprocess_input_info_t** preprocess_input_info);
ov_preprocess_prepostprocessor_get_input_info_by_index(const ov_preprocess_prepostprocessor_t* preprocess,
const size_t tensor_index,
ov_preprocess_input_info_t** preprocess_input_info);
/**
* @brief Release the memory allocated by ov_preprocess_input_info_t.
@@ -167,9 +163,8 @@ ov_preprocess_input_info_free(ov_preprocess_input_info_t* preprocess_input_info)
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_input_info_get_tensor_info(
const ov_preprocess_input_info_t* preprocess_input_info,
ov_preprocess_input_tensor_info_t** preprocess_input_tensor_info);
ov_preprocess_input_info_get_tensor_info(const ov_preprocess_input_info_t* preprocess_input_info,
ov_preprocess_input_tensor_info_t** preprocess_input_tensor_info);
/**
* @brief Release the memory allocated by ov_preprocess_input_tensor_info_t.
@@ -187,9 +182,8 @@ ov_preprocess_input_tensor_info_free(ov_preprocess_input_tensor_info_t* preproce
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_input_info_get_preprocess_steps(
const ov_preprocess_input_info_t* preprocess_input_info,
ov_preprocess_preprocess_steps_t** preprocess_input_steps);
ov_preprocess_input_info_get_preprocess_steps(const ov_preprocess_input_info_t* preprocess_input_info,
ov_preprocess_preprocess_steps_t** preprocess_input_steps);
/**
* @brief Release the memory allocated by ov_preprocess_preprocess_steps_t.
@@ -197,8 +191,7 @@ ov_preprocess_input_info_get_preprocess_steps(
* @param preprocess_input_steps A pointer to the ov_preprocess_preprocess_steps_t to free memory.
*/
OPENVINO_C_API(void)
ov_preprocess_preprocess_steps_free(
ov_preprocess_preprocess_steps_t* preprocess_input_process_steps);
ov_preprocess_preprocess_steps_free(ov_preprocess_preprocess_steps_t* preprocess_input_process_steps);
/**
* @brief Add resize operation to model's dimensions.
@@ -208,9 +201,8 @@ ov_preprocess_preprocess_steps_free(
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_preprocess_steps_resize(
ov_preprocess_preprocess_steps_t* preprocess_input_process_steps,
const ov_preprocess_resize_algorithm_e resize_algorithm);
ov_preprocess_preprocess_steps_resize(ov_preprocess_preprocess_steps_t* preprocess_input_process_steps,
const ov_preprocess_resize_algorithm_e resize_algorithm);
/**
* @brief Add scale preprocess operation. Divide each element of input by specified value.
@@ -247,7 +239,10 @@ ov_preprocess_preprocess_steps_mean(ov_preprocess_preprocess_steps_t* preprocess
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_preprocess_steps_crop(ov_preprocess_preprocess_steps_t* preprocess_input_process_steps,
int32_t* begin, int32_t begin_size, int32_t* end, int32_t end_size);
int32_t* begin,
int32_t begin_size,
int32_t* end,
int32_t end_size);
/**
* @brief Add 'convert layout' operation to specified layout.
@@ -257,7 +252,8 @@ ov_preprocess_preprocess_steps_crop(ov_preprocess_preprocess_steps_t* preprocess
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_preprocess_steps_convert_layout(ov_preprocess_preprocess_steps_t* preprocess_input_process_steps, ov_layout_t* layout);
ov_preprocess_preprocess_steps_convert_layout(ov_preprocess_preprocess_steps_t* preprocess_input_process_steps,
ov_layout_t* layout);
/**
* @brief Reverse channels operation.
@@ -275,9 +271,8 @@ ov_preprocess_preprocess_steps_reverse_channels(ov_preprocess_preprocess_steps_t
* @param element_type The element type to set
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_input_tensor_info_set_element_type(
ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const ov_element_type_e element_type);
ov_preprocess_input_tensor_info_set_element_type(ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const ov_element_type_e element_type);
/**
* @brief Set ov_preprocess_input_tensor_info_t color format.
@@ -286,9 +281,8 @@ ov_preprocess_input_tensor_info_set_element_type(
* @param colorFormat The enumerate of colorFormat
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_input_tensor_info_set_color_format(
ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const ov_color_format_e colorFormat);
ov_preprocess_input_tensor_info_set_color_format(ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const ov_color_format_e colorFormat);
/**
* @brief Set ov_preprocess_input_tensor_info_t spatial_static_shape.
@@ -299,9 +293,9 @@ ov_preprocess_input_tensor_info_set_color_format(
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_input_tensor_info_set_spatial_static_shape(
ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const size_t input_height,
const size_t input_width);
ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const size_t input_height,
const size_t input_width);
/**
* @brief Convert ov_preprocess_preprocess_steps_t element type.
@@ -310,9 +304,8 @@ ov_preprocess_input_tensor_info_set_spatial_static_shape(
* @param element_type preprocess input element type.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_preprocess_steps_convert_element_type(
ov_preprocess_preprocess_steps_t* preprocess_input_process_steps,
const ov_element_type_e element_type);
ov_preprocess_preprocess_steps_convert_element_type(ov_preprocess_preprocess_steps_t* preprocess_input_process_steps,
const ov_element_type_e element_type);
/**
* @brief Convert ov_preprocess_preprocess_steps_t color.
@@ -321,9 +314,8 @@ ov_preprocess_preprocess_steps_convert_element_type(
* @param colorFormat The enumerate of colorFormat.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_preprocess_steps_convert_color(
ov_preprocess_preprocess_steps_t* preprocess_input_process_steps,
const ov_color_format_e colorFormat);
ov_preprocess_preprocess_steps_convert_color(ov_preprocess_preprocess_steps_t* preprocess_input_process_steps,
const ov_color_format_e colorFormat);
/**
* @brief Helper function to reuse element type and shape from user's created tensor.
@@ -332,9 +324,8 @@ ov_preprocess_preprocess_steps_convert_color(
* @param tensor A pointer to ov_tensor_t
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_input_tensor_info_set_from(
ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const ov_tensor_t* tensor);
ov_preprocess_input_tensor_info_set_from(ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const ov_tensor_t* tensor);
/**
* @brief Set ov_preprocess_input_tensor_info_t layout.
@@ -343,9 +334,8 @@ ov_preprocess_input_tensor_info_set_from(
* @param layout A pointer to ov_layout_t
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_input_tensor_info_set_layout(
ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
ov_layout_t* layout);
ov_preprocess_input_tensor_info_set_layout(ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
ov_layout_t* layout);
/**
* @brief Get the output info of ov_preprocess_output_info_t instance.
@@ -355,9 +345,8 @@ ov_preprocess_input_tensor_info_set_layout(
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_get_output_info(
const ov_preprocess_prepostprocessor_t* preprocess,
ov_preprocess_output_info_t** preprocess_output_info);
ov_preprocess_prepostprocessor_get_output_info(const ov_preprocess_prepostprocessor_t* preprocess,
ov_preprocess_output_info_t** preprocess_output_info);
/**
* @brief Get the output info from ov_preprocess_prepostprocessor_t instance by tensor index.
@@ -368,10 +357,9 @@ ov_preprocess_prepostprocessor_get_output_info(
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_get_output_info_by_index(
const ov_preprocess_prepostprocessor_t* preprocess,
const size_t tensor_index,
ov_preprocess_output_info_t** preprocess_output_info);
ov_preprocess_prepostprocessor_get_output_info_by_index(const ov_preprocess_prepostprocessor_t* preprocess,
const size_t tensor_index,
ov_preprocess_output_info_t** preprocess_output_info);
/**
* @brief Get the output info from ov_preprocess_prepostprocessor_t instance by tensor name.
@@ -382,10 +370,9 @@ ov_preprocess_prepostprocessor_get_output_info_by_index(
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_get_output_info_by_name(
const ov_preprocess_prepostprocessor_t* preprocess,
const char* tensor_name,
ov_preprocess_output_info_t** preprocess_output_info);
ov_preprocess_prepostprocessor_get_output_info_by_name(const ov_preprocess_prepostprocessor_t* preprocess,
const char* tensor_name,
ov_preprocess_output_info_t** preprocess_output_info);
/**
* @brief Release the memory allocated by ov_preprocess_output_info_t.
@@ -403,9 +390,8 @@ ov_preprocess_output_info_free(ov_preprocess_output_info_t* preprocess_output_in
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_output_info_get_tensor_info(
const ov_preprocess_output_info_t* preprocess_output_info,
ov_preprocess_output_tensor_info_t** preprocess_output_tensor_info);
ov_preprocess_output_info_get_tensor_info(const ov_preprocess_output_info_t* preprocess_output_info,
ov_preprocess_output_tensor_info_t** preprocess_output_tensor_info);
/**
* @brief Release the memory allocated by ov_preprocess_output_tensor_info_t.
@@ -413,8 +399,7 @@ ov_preprocess_output_info_get_tensor_info(
* @param preprocess_output_tensor_info A pointer to the ov_preprocess_output_tensor_info_t to free memory.
*/
OPENVINO_C_API(void)
ov_preprocess_output_tensor_info_free(
ov_preprocess_output_tensor_info_t* preprocess_output_tensor_info);
ov_preprocess_output_tensor_info_free(ov_preprocess_output_tensor_info_t* preprocess_output_tensor_info);
/**
* @brief Set ov_preprocess_input_tensor_info_t precision.
@@ -423,9 +408,8 @@ ov_preprocess_output_tensor_info_free(
* @param element_type The element type to set
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_output_set_element_type(
ov_preprocess_output_tensor_info_t* preprocess_output_tensor_info,
const ov_element_type_e element_type);
ov_preprocess_output_set_element_type(ov_preprocess_output_tensor_info_t* preprocess_output_tensor_info,
const ov_element_type_e element_type);
/**
* @brief Get current input model information.
@@ -435,9 +419,8 @@ ov_preprocess_output_set_element_type(
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_input_info_get_model_info(
const ov_preprocess_input_info_t* preprocess_input_info,
ov_preprocess_input_model_info_t** preprocess_input_model_info);
ov_preprocess_input_info_get_model_info(const ov_preprocess_input_info_t* preprocess_input_info,
ov_preprocess_input_model_info_t** preprocess_input_model_info);
/**
* @brief Release the memory allocated by ov_preprocess_input_model_info_t.
@@ -454,9 +437,8 @@ ov_preprocess_input_model_info_free(ov_preprocess_input_model_info_t* preprocess
* @param layout A pointer to ov_layout_t
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_input_model_info_set_layout(
ov_preprocess_input_model_info_t* preprocess_input_model_info,
ov_layout_t* layout);
ov_preprocess_input_model_info_set_layout(ov_preprocess_input_model_info_t* preprocess_input_model_info,
ov_layout_t* layout);
/**
* @brief Adds pre/post-processing operations to function passed in constructor.
@@ -466,5 +448,4 @@ ov_preprocess_input_model_info_set_layout(
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_build(
const ov_preprocess_prepostprocessor_t* preprocess, ov_model_t** model);
ov_preprocess_prepostprocessor_build(const ov_preprocess_prepostprocessor_t* preprocess, ov_model_t** model);
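To make the call flow concrete, here is a minimal, unchecked sketch of chaining these functions (return codes are ignored for brevity; U8 is assumed to be the u8 enumerator of ov_element_type_e, and model a valid ov_model_t*):

ov_preprocess_prepostprocessor_t* pp = NULL;
ov_preprocess_input_info_t* input_info = NULL;
ov_preprocess_input_tensor_info_t* tensor_info = NULL;
ov_model_t* new_model = NULL;
ov_preprocess_prepostprocessor_create(model, &pp);
ov_preprocess_prepostprocessor_get_input_info_by_index(pp, 0, &input_info);
ov_preprocess_input_info_get_tensor_info(input_info, &tensor_info);
ov_preprocess_input_tensor_info_set_element_type(tensor_info, U8);
ov_preprocess_prepostprocessor_build(pp, &new_model);
ov_preprocess_input_tensor_info_free(tensor_info);
ov_preprocess_input_info_free(input_info);
ov_preprocess_prepostprocessor_free(pp);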

View File

@@ -18,8 +18,8 @@
* @brief Represents a static shape.
*/
typedef struct {
int64_t rank; //!< the rank of shape
int64_t* dims; //!< the dims of shape
int64_t rank; //!< the rank of shape
int64_t* dims; //!< the dims of shape
} ov_shape_t;
/**

View File

@@ -45,9 +45,7 @@ ov_tensor_create_from_host_ptr(const ov_element_type_e type,
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_tensor_create(const ov_element_type_e type,
const ov_shape_t shape,
ov_tensor_t** tensor);
ov_tensor_create(const ov_element_type_e type, const ov_shape_t shape, ov_tensor_t** tensor);
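For example, a short creation sketch (assuming the companion ov_shape_create and ov_shape_free helpers of this C API, and F32 from ov_element_type_e):

int64_t dims[4] = {1, 3, 224, 224};
ov_shape_t shape;
ov_shape_create(4, dims, &shape);       /* build the static shape {1,3,224,224} */
ov_tensor_t* tensor = NULL;
ov_tensor_create(F32, shape, &tensor);
/* ... use the tensor ... */
ov_tensor_free(tensor);
ov_shape_free(&shape);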
/**
* @brief Set new shape for tensor, deallocate/allocate if new total size is bigger than previous one.

View File

@@ -5,7 +5,7 @@
set(TARGET_NAME openvino_c)
file(GLOB SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp)
file(GLOB HEADERS ${OpenVINO_C_API_SOURCE_DIR}/include/*)
file(GLOB_RECURSE HEADERS ${OpenVINO_C_API_SOURCE_DIR}/include/*.h)
# create library
add_library(${TARGET_NAME} ${HEADERS} ${SOURCES})

View File

@@ -1,3 +1,3 @@
setuptools>=53.0.0
setuptools>=65.6.1
wheel>=0.38.1
patchelf; sys_platform == 'linux' and platform_machine == 'x86_64'

View File

@@ -141,6 +141,23 @@ macro(ov_find_package_tbb)
list(APPEND TBB_IMPORTED_TARGETS ${target})
endif()
endforeach()
if(WIN32 AND TARGET TBB::tbbbind_2_5)
# Add HWLOC::hwloc_2_5 target to check via Apivalidator
get_target_property(TBB_location TBB::tbb IMPORTED_LOCATION_RELEASE)
get_filename_component(TBB_dir "${TBB_location}" DIRECTORY)
set(hwloc_dll_name "${CMAKE_SHARED_LIBRARY_PREFIX}hwloc${CMAKE_SHARED_LIBRARY_SUFFIX}")
find_file(HWLOC_DLL NAMES ${hwloc_dll_name} PATHS "${TBB_dir}" DOC "Path to hwloc.dll")
if(NOT HWLOC_DLL)
message(FATAL_ERROR "Failed to find ${hwloc_dll_name} in ${TBB_dir}")
endif()
add_library(HWLOC::hwloc_2_5 SHARED IMPORTED)
set_property(TARGET HWLOC::hwloc_2_5 APPEND PROPERTY IMPORTED_CONFIGURATIONS RELEASE)
set_target_properties(HWLOC::hwloc_2_5 PROPERTIES
IMPORTED_LOCATION_RELEASE "${HWLOC_DLL}")
endif()
endif()
if(NOT TBB_FOUND)

View File

@@ -29,7 +29,6 @@ add_library(openvino::runtime ALIAS ${TARGET_NAME})
set_target_properties(${TARGET_NAME} PROPERTIES EXPORT_NAME runtime)
ie_add_vs_version_file(NAME ${TARGET_NAME} FILEDESCRIPTION "OpenVINO runtime library")
ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})
target_include_directories(${TARGET_NAME} PUBLIC
$<BUILD_INTERFACE:${OpenVINO_SOURCE_DIR}/src/core/include>
@@ -59,6 +58,9 @@ endif()
set_ie_threading_interface_for(${TARGET_NAME})
ie_mark_target_as_cc(${TARGET_NAME})
# must be called after all target_link_libraries
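# EXTRA passes the imported TBB runtime libraries so that the API validator checks them as well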
ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME} EXTRA ${TBB_IMPORTED_TARGETS})
# LTO
set_target_properties(${TARGET_NAME} PROPERTIES INTERPROCEDURAL_OPTIMIZATION_RELEASE ${ENABLE_LTO})

View File

@@ -166,6 +166,7 @@ if(ENABLE_GAPI_PREPROCESSING)
set_target_properties(${TARGET_NAME} PROPERTIES COMPILE_PDB_NAME ${TARGET_NAME})
endif()
# must be called after all target_link_libraries
ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})
ie_add_vs_version_file(NAME ${TARGET_NAME}

View File

@@ -27,5 +27,5 @@ ngraph::snippets::pass::ConvertConstantsToScalars::ConvertConstantsToScalars() {
return true;
};
register_matcher(std::make_shared<ov::pass::pattern::Matcher>(constants), callback);
register_matcher(std::make_shared<ov::pass::pattern::Matcher>(constants, matcher_name), callback);
}

View File

@@ -16,7 +16,7 @@ ngraph::snippets::pass::ConvertPowerToPowerStatic::ConvertPowerToPowerStatic() {
is_type<snippets::op::Scalar>(n->get_input_node_shared_ptr(1));
});
ngraph::graph_rewrite_callback callback = [this](ngraph::pattern::Matcher &m) {
OV_ITT_SCOPED_TASK(ngraph::pass::itt::domains::SnippetsTransform, "Snippets::op::ConvertConstantsToScalars")
OV_ITT_SCOPED_TASK(ngraph::pass::itt::domains::SnippetsTransform, "Snippets::op::ConvertPowerToPowerStatic")
auto power = ov::as_type_ptr<ov::op::v1::Power>(m.get_match_root());
auto scalar = ov::as_type_ptr<snippets::op::Scalar>(power->get_input_node_shared_ptr(1));
auto value = scalar->cast_vector<float>()[0];

View File

@@ -14,7 +14,9 @@
#include "itt.hpp"
using namespace std;
using namespace ov;
using namespace ov::opset7;
static bool can_propagate_conv_stride(const std::shared_ptr<ngraph::Node>& conv) {
const auto& kernel_shape = conv->input_value(1).get_shape();
@@ -40,40 +42,36 @@ static std::tuple<ngraph::Strides, bool> check_next_ops(const std::vector<ngraph
return std::make_tuple(strides[0], all_ops_are_valid);
}
- static void insert_pooling(const ngraph::Output<ngraph::Node>& first,
- ngraph::Input<ngraph::Node>& second,
- const ngraph::Strides& strides) {
+ static void insert_pooling(const Output<Node>& first, Input<Node>& second, const Strides& strides) {
+ pass::NodeRegistry rg;
auto first_node = first.get_node_shared_ptr();
- auto rank = first.get_partial_shape().rank();
- bool do_reshape = rank.is_static() && static_cast<size_t>(rank.get_length()) < strides.size() + 2;
+ const auto rank = first.get_partial_shape().rank();
+ const bool do_reshape = rank.is_static() && static_cast<size_t>(rank.get_length()) < strides.size() + 2;
if (do_reshape) {
- size_t diff = strides.size() + 2 - static_cast<size_t>(rank.get_length());
- auto ones = opset7::Constant::create(ngraph::element::i64, ngraph::Shape{diff}, std::vector<int64_t>(diff, 1));
- auto current_shape = std::make_shared<opset7::ShapeOf>(first);
- std::shared_ptr<ngraph::Node> new_shape =
- std::make_shared<opset7::Concat>(ngraph::OutputVector{ones, current_shape}, 0);
- std::shared_ptr<ngraph::Node> constant_new_shape = get_constant_from_source(new_shape);
- if (constant_new_shape)
+ const size_t diff = strides.size() + 2 - static_cast<size_t>(rank.get_length());
+ const auto ones = rg.make<Constant>(element::i64, Shape{diff}, vector<int64_t>(diff, 1));
+ const auto current_shape = rg.make<ShapeOf>(first);
+ shared_ptr<Node> new_shape = rg.make<Concat>(OutputVector{ones, current_shape}, 0);
+ if (const auto constant_new_shape = get_constant_from_source(new_shape)) {
+ rg.add(constant_new_shape);
new_shape = constant_new_shape;
- first_node = std::make_shared<opset7::Reshape>(first_node, new_shape, false);
+ }
+ first_node = rg.make<Reshape>(first_node, new_shape, false);
}
- std::shared_ptr<ngraph::Node> new_node = std::make_shared<opset7::MaxPool>(first_node,
- strides,
- ngraph::Shape{},
- ngraph::Shape{},
- ngraph::Shape(strides.size(), 1));
+ shared_ptr<Node> new_node = rg.make<MaxPool>(first_node, strides, Shape{}, Shape{}, Shape(strides.size(), 1));
if (do_reshape) {
// squeeze dimensions back
- size_t diff = strides.size() + 2 - static_cast<size_t>(rank.get_length());
- std::vector<size_t> axes(diff);
- std::iota(axes.begin(), axes.end(), 0);
- new_node = std::make_shared<opset7::Squeeze>(
- new_node,
- opset7::Constant::create(ngraph::element::u64, ngraph::Shape{diff}, axes));
+ const size_t diff = strides.size() + 2 - static_cast<size_t>(rank.get_length());
+ vector<size_t> axes(diff);
+ iota(axes.begin(), axes.end(), 0);
+ new_node = rg.make<Squeeze>(new_node, rg.make<Constant>(element::u64, Shape{diff}, axes));
}
- std::shared_ptr<ngraph::Node> constant_new_node = get_constant_from_source(new_node);
- if (constant_new_node)
+ if (const auto constant_new_node = get_constant_from_source(new_node)) {
+ rg.add(constant_new_node);
new_node = constant_new_node;
+ }
+ copy_runtime_info(as_node_vector({second.get_source_output()}), rg.get());
second.replace_source_output(new_node);
}
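The rewrite above is the substance of the rt-info fix: every node of the replacement subgraph is now created through, or registered with, a pass::NodeRegistry, so the runtime info of the original source output can be copied onto all of them in a single call. A condensed sketch of the pattern, with input, strides, and target_input as placeholders:

ov::pass::NodeRegistry rg;
// rg.make<T>(...) builds a node and remembers it; rg.add(...) records an existing one
auto pool = rg.make<ov::opset7::MaxPool>(input, strides, ov::Shape{}, ov::Shape{}, ov::Shape(strides.size(), 1));
// one call stamps the source node's runtime info onto every registered node
ov::copy_runtime_info(target_input.get_source_output().get_node_shared_ptr(), rg.get());
target_input.replace_source_output(pool);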

View File

@@ -171,9 +171,6 @@ TEST_F(TransformationTestsF, StridesOptimization5) {
function_ref = std::make_shared<ngraph::Function>(ngraph::NodeVector{conv_2}, ngraph::ParameterVector{data});
}
- // TODO: update transformation and remove this check XXX-68696
- disable_rt_info_check();
}
// Pl->Conv(1x1,1x1)->Conv(1x1,2x2)->Conv(3x3,1x1)->Conv(1x1,2x2)
@@ -261,8 +258,6 @@ TEST_F(TransformationTestsF, StridesOptimization7) {
function_ref = std::make_shared<ngraph::Function>(ngraph::NodeVector{conv_3, conv_4}, ngraph::ParameterVector{data});
}
- // TODO: update transformation and remove this check XXX-68696
- disable_rt_info_check();
}
// Pl--->Conv(1x1,1x1)->ReLU--->Eltwise-->Conv(1x1,2x2)-->Eltwise-->Conv(1x1, 2x2)
@@ -318,8 +313,6 @@ TEST_F(TransformationTestsF, StridesOptimization8) {
function_ref = std::make_shared<ngraph::Function>(ngraph::NodeVector{conv_3}, ngraph::ParameterVector{data, data_2});
}
- // TODO: update transformation and remove this check XXX-68696
- disable_rt_info_check();
}
// Pl------->Conv(1x1,1x1)------>Eltwise------>Conv(1x1,2x2)---->Eltwise-->Conv(1x1, 2x2)
@@ -401,6 +394,4 @@ TEST_F(TransformationTestsF, StridesOptimization9) {
function_ref = std::make_shared<ngraph::Function>(ngraph::NodeVector{conv_3}, ngraph::ParameterVector{data, data_2, data_3});
}
- // TODO: update transformation and remove this check XXX-68696
- disable_rt_info_check();
}

View File

@@ -267,7 +267,6 @@ TEST_F(TransformationTestsF, PropagateMasksBasic) {
compare_masks(*getMask(conv2->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -329,7 +328,6 @@ TEST_F(TransformationTestsF, PropagateMasksDynamicConvolution) {
compare_masks(*getMask(conv2->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -506,7 +504,6 @@ TEST_F(TransformationTestsF, PropagateMaskPassThrough) {
compare_masks(*getMask(max_pool->output(0)), Mask({{}, {1, 2, 3}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -655,7 +652,6 @@ TEST_F(TransformationTestsF, PropagateMasksHardDependencies) {
//compare_masks(*getMask(conv2), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -770,7 +766,6 @@ TEST_F(TransformationTestsF, PropagateMasksQuantizedGroupConvolution) {
compare_masks(*getMask(conv2->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -905,7 +900,6 @@ TEST_F(TransformationTestsF, PropagateMasksQuantizedGroupConvolutionWithShapeOf)
compare_masks(*getMask(weights_2->output(0)), Mask({{}, {0, 1, 2, 3}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -1020,7 +1014,6 @@ TEST_F(TransformationTestsF, PropagateMasksFakeQuantizePerTensor) {
compare_masks(*getMask(conv2->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -1202,7 +1195,6 @@ TEST_F(TransformationTestsF, PropagateMasksFakeQuantizePerChannel) {
compare_masks(*getMask(fq->input(4).get_source_output()), Mask({{}, {0, 1, 2, 3, 4}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -1296,7 +1288,6 @@ TEST_F(TransformationTestsF, TestConcatMaskPropagation) {
compare_masks(*getMask(weights_out_conv.get_node_shared_ptr()->output(0)), Mask({{}, {0, 1, 2, 3, 15, 16, 17, 18, 28, 29, 30, 31}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -1400,7 +1391,6 @@ TEST_F(TransformationTestsF, TestConcatMaskPropagationUp) {
compare_masks(*getMask(weights_out_conv.get_node_shared_ptr()->output(0)), Mask({{}, {0, 1, 2, 3, 15, 16, 17, 18, 28, 29, 30, 31}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -1545,7 +1535,6 @@ TEST_F(TransformationTestsF, PruneConvIsClosingAndInGroup) {
compare_masks(*getMask(end_conv->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -1716,7 +1705,6 @@ TEST_F(TransformationTestsF, PruneReducelayerUp) {
compare_masks(*getMask(conv_1->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -1807,7 +1795,6 @@ TEST_F(TransformationTestsF, PruneReduceLayerDown) {
compare_masks(*getMask(end_conv->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -1965,7 +1952,6 @@ TEST_F(TransformationTestsF, MaskPropagationReshapeUp) {
compare_masks(*getMask(conv_1->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -2066,7 +2052,6 @@ TEST_P(TransformationTestsBoolParamF, MaskPropagationReshapeUpWithShapeOf) {
compare_masks(*getMask(conv_1->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -2254,7 +2239,6 @@ TEST_F(TransformationTestsF, MaskPropagationReshapeExtend) {
compare_masks(*getMask(conv_1->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -2351,7 +2335,6 @@ TEST_F(DISABLED_TransformationTestsF, MaskPropagationReshapeDownMul) {
compare_masks(*getMask(last_conv->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -2446,7 +2429,6 @@ TEST_F(TransformationTestsF, MaskPropagationReshapeDownAdd) {
compare_masks(*getMask(last_conv->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -2605,7 +2587,6 @@ TEST_F(TransformationTestsF, MaskPropagationReshapeUnsqueezeUp) {
compare_masks(*getMask(mul_left->output(0)), Mask({{}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -2670,7 +2651,6 @@ TEST_F(TransformationTestsF, MaskPropagationReshapeUnsqueezeDown) {
compare_masks(*getMask(mul_left->output(0)), Mask({{}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -2829,7 +2809,6 @@ TEST_F(TransformationTestsF, PruneSEBlock) {
compare_masks(*getMask(end_conv->output(0)), Mask({{}, {}, {}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -2918,7 +2897,6 @@ TEST_F(TransformationTestsF, PropagateMasksLinear) {
compare_masks(*getMask(last_linear->output(0)), Mask{{}, {}});
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -3177,7 +3155,6 @@ TEST_F(TransformationTestsF, MaskPropagationLinearOuterDims) {
compare_masks(*getMask(last_mul->output(0)), Mask({{}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -3315,7 +3292,6 @@ TEST_F(TransformationTestsF, PruneMasksMatMulColsStopRowsUp) {
compare_masks(*getMask(last_linear->output(0)), Mask{{}, {}, {}});
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -3393,7 +3369,6 @@ TEST_F(TransformationTestsF, PruneMasksMatMulRowsStopColsUp) {
compare_masks(*getMask(last_linear->output(0)), Mask{{}, {}, {}});
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -3486,7 +3461,6 @@ TEST_F(TransformationTestsF, PropagateFlattenUp) {
compare_masks(*getMask(linear->output(0)), Mask{{}, {}});
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -3556,7 +3530,6 @@ TEST_F(TransformationTestsF, PropagateFlattenDown) {
compare_masks(*getMask(linear->output(0)), {{}, {}});
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -3607,7 +3580,6 @@ TEST_F(TransformationTestsF, PropagateMasksTranspose) {
compare_masks(*getMask(last_mul->output(0)), Mask{{}, {}});
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -3680,7 +3652,6 @@ TEST_F(TransformationTestsF, PropagateMasksTransposeComplex) {
compare_masks(*getMask(last_mul->output(0)), Mask{{}, {}, {}, {}, {}});
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -3879,7 +3850,6 @@ TEST_F(DISABLED_TransformationTestsF, PropagateMasksBroadcastedEltwiseWithInputs
compare_masks(*getMask(last_mul->output(0)), Mask({{}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -4054,7 +4024,6 @@ TEST_F(TransformationTestsF, PropagateMasksBroadcastedEltwise) {
compare_masks(*getMask(last_mul->output(0)), Mask({{}, {}}));
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -4240,7 +4209,6 @@ TEST_F(TransformationTestsF, MaskPropagationComplexReshape) {
manager.register_pass<pass::ShrinkWeights>();
manager.register_pass<ngraph::pass::VisualizeTree>(std::string(VISUALIZE_TREE_ROOT) + "MaskPropagationComplexReshapeWithMasks.svg", modifier);
}
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -4431,7 +4399,6 @@ TEST_P(TransformationTestsBoolParamF, MaskPropagationReshapedPassThroughP) {
auto postfix = (add_shape_of)? "True" : "False";
manager.register_pass<ngraph::pass::VisualizeTree>(std::string(VISUALIZE_TREE_ROOT) +
"MaskPropagationReverseFlattenWithMasks" + postfix + ".svg", modifier);
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -4496,9 +4463,7 @@ TEST_P(TransformationTestsBoolParamF, MaskPropagationBroadcastedSameRankEltwiseS
compare_masks(*getMask(mult->output(0)), Mask{{1, 2, 3}, {}});
compare_masks(*getMask(mul_last->output(0)), Mask{{}, {}, {}});
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}
@@ -4658,7 +4623,6 @@ TEST_F(TransformationTestsF, MaskPropagationMatMulWithSeveralOutputs) {
compare_masks(*getMask(right_matmul), Mask{{}, {}});
manager.register_pass<pass::ShrinkWeights>();
- disable_rt_info_check();
comparator.enable(FunctionsComparator::CmpValues::ACCURACY);
}

View File

@@ -224,6 +224,7 @@ template <typename T>
std::vector<std::shared_ptr<Node>> topological_sort(T root_nodes) {
std::stack<Node*, std::vector<Node*>> nodes_to_do;
std::unordered_set<Node*> nodes_done;
+ std::unordered_map<Node*, uint8_t /*is_visited*/> nodes_visited;
std::vector<std::shared_ptr<Node>> result;
for (auto& node : root_nodes) {
@@ -233,6 +234,13 @@ std::vector<std::shared_ptr<Node>> topological_sort(T root_nodes) {
Node* node = nodes_to_do.top();
if (nodes_done.count(node) == 0) {
bool can_add = true;
+ if (++nodes_visited[node] > 2)
+ // Node may be at the top of `nodes_to_do` not more than twice before it's added to `nodes_done` -
+ // when visited and placed in `nodes_to_do` and after the subtree traversal is finished.
+ // Otherwise it's a loop.
+ throw Exception("Loop detected during topological sort starting from '" + node->get_friendly_name() +
+ "' node.");
size_t arg_count = node->get_input_size();
for (size_t i = 0; i < arg_count; ++i) {
Node* dep = node->get_input_node_ptr(arg_count - i - 1);
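The counting trick reads tersely in diff form, so here is a self-contained sketch of the same idea with a toy Node type (result collection and the reverse-order dependency push of the real code are omitted): during an iterative DFS a node can legitimately surface at the top of the stack at most twice, once when discovered and once after its inputs finish, so a third appearance proves a cycle.

#include <stack>
#include <stdexcept>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Node {
    std::vector<Node*> inputs;
};

void topological_visit(Node* root) {
    std::stack<Node*> nodes_to_do;
    std::unordered_set<Node*> nodes_done;
    std::unordered_map<Node*, int> nodes_visited;
    nodes_to_do.push(root);
    while (!nodes_to_do.empty()) {
        Node* node = nodes_to_do.top();
        if (nodes_done.count(node) == 0) {
            // 1st visit: schedule unfinished inputs; 2nd visit: inputs are done.
            // A 3rd visit means the node is reachable from its own subtree.
            if (++nodes_visited[node] > 2)
                throw std::runtime_error("Loop detected during topological sort");
            bool can_add = true;
            for (Node* dep : node->inputs) {
                if (nodes_done.count(dep) == 0) {
                    can_add = false;
                    nodes_to_do.push(dep);
                }
            }
            if (can_add) {
                nodes_done.insert(node);  // emit to the sorted order here
                nodes_to_do.pop();
            }
        } else {
            nodes_to_do.pop();  // duplicate stack entry of an already finished node
        }
    }
}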

View File

@@ -68,7 +68,10 @@ void op::v0::NormalizeL2::validate_and_infer_types() {
AxisSet op::v0::NormalizeL2::get_reduction_axes() const {
AxisSet axes;
if (auto const_op = get_constant_from_source(input_value(1))) {
- axes = const_op->get_axis_set_val();
+ const auto const_data = const_op->cast_vector<int64_t>();
+ const auto input_data_rank = get_input_partial_shape(0).rank();
+ const auto normalized_axes = ov::normalize_axes(get_friendly_name(), const_data, input_data_rank);
+ axes = AxisSet{normalized_axes};
}
return axes;
}
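The change above routes the raw constant through ov::normalize_axes, which validates each axis against the input rank and wraps negative values. A standalone helper showing the arithmetic (an illustration, not the library function itself):

#include <cstdint>
#include <stdexcept>

int64_t normalize_axis(int64_t axis, int64_t rank) {
    if (axis < -rank || axis >= rank)
        throw std::out_of_range("axis outside [-rank, rank)");
    return axis < 0 ? axis + rank : axis;  // rank 4: axis -1 resolves to 3
}

This is why the negative-axes type_prop test later in this diff expects AxisSet{3} for axis -1 on a rank-4 input.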

View File

@@ -82,9 +82,9 @@ size_t ngraph::hash_combine(const std::vector<size_t>& list) {
}
void* ngraph::ngraph_malloc(size_t size) {
- auto ptr = malloc(size);
+ auto ptr = calloc(size, 1);
if (size != 0 && !ptr) {
NGRAPH_ERR << "malloc failed to allocate memory of size " << size;
NGRAPH_ERR << "calloc failed to allocate memory of size " << size;
throw std::bad_alloc();
}
return ptr;
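Trading malloc for calloc(size, 1) zero-initializes the block, so padding or unwritten tail bytes that later flow into a byte-wise hash are identical across runs; the Hash determinism test later in this diff relies on exactly that. A minimal sketch of the difference:

#include <cstdlib>

void* deterministic_alloc(std::size_t size) {
    // std::malloc(size) would return indeterminate bytes; calloc zero-fills,
    // so hashing the buffer before every byte is written stays reproducible.
    return std::calloc(size, 1);
}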

View File

@@ -1333,6 +1333,63 @@ bool all_ops_have_same_info(const std::shared_ptr<ov::Model>& f) {
}
} // namespace
TEST(model, topological_sort_throws_if_loop_with_one_node) {
auto arg0 = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::PartialShape{1});
auto relu1 = std::make_shared<ov::opset8::Relu>(arg0);
auto result = std::make_shared<ov::opset8::Result>(relu1);
// Loop relu2->relu2
auto relu2 = std::make_shared<ov::opset8::Relu>(relu1->output(0));
ov::replace_node(relu1, relu2);
ASSERT_THROW(std::ignore = std::make_shared<ov::Model>(ov::ResultVector{result}, ov::ParameterVector{arg0}),
ov::Exception);
}
TEST(model, topological_sort_throws_if_loop_with_several_nodes) {
auto arg0 = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::PartialShape{1});
auto relu1 = std::make_shared<ov::opset8::Relu>(arg0);
auto result = std::make_shared<ov::opset8::Result>(relu1);
// Loop relu2->relu3->relu2
auto relu2 = std::make_shared<ov::opset8::Relu>(relu1->output(0));
auto relu3 = std::make_shared<ov::opset8::Relu>(relu2);
ov::replace_node(relu1, relu3);
ASSERT_THROW(std::ignore = std::make_shared<ov::Model>(ov::ResultVector{result}, ov::ParameterVector{arg0}),
ov::Exception);
}
TEST(model, topological_sort_throws_if_loop_with_control_dependency) {
auto arg0 = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::PartialShape{1});
auto relu1 = std::make_shared<ov::opset8::Relu>(arg0);
auto relu2 = std::make_shared<ov::opset8::Relu>(relu1);
auto result = std::make_shared<ov::opset8::Result>(relu2);
// Loop relu1->relu2->relu1
relu1->add_control_dependency(relu2);
ASSERT_THROW(std::ignore = std::make_shared<ov::Model>(ov::ResultVector{result}, ov::ParameterVector{arg0}),
ov::Exception);
}
TEST(model, topological_sort_throws_if_loop_with_control_dependency_only) {
auto arg0 = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::PartialShape{1});
auto relu0 = std::make_shared<ov::opset8::Relu>(arg0);
auto result0 = std::make_shared<ov::opset8::Result>(relu0);
auto arg1 = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::PartialShape{1});
auto relu1 = std::make_shared<ov::opset8::Relu>(arg1);
auto result1 = std::make_shared<ov::opset8::Result>(relu1);
// Loop relu0->relu1->relu0
relu0->add_control_dependency(relu1);
relu1->add_control_dependency(relu0);
ASSERT_THROW(
std::ignore = std::make_shared<ov::Model>(ov::ResultVector{result0, result1}, ov::ParameterVector{arg0, arg1}),
ov::Exception);
}
TEST(model, topological_sort_caching_basic) {
auto arg0 = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::PartialShape{1});
auto relu1 = std::make_shared<ov::opset8::Relu>(arg0);

View File

@@ -10,6 +10,7 @@
#include "openvino/pass/serialize.hpp"
#include "openvino/util/file_util.hpp"
#include "read_ir.hpp"
#include "transformations/hash.hpp"
#include "util/test_common.hpp"
class SerializationDeterministicityTest : public ov::test::TestsCommon {
@@ -83,6 +84,23 @@ TEST_F(SerializationDeterministicityTest, ModelWithMultipleLayers) {
ASSERT_TRUE(files_equal(bin_1, bin_2));
}
TEST_F(SerializationDeterministicityTest, Hash) {
const std::string model =
CommonTestUtils::getModelFromTestModelZoo(ov::util::path_join({SERIALIZED_ZOO, "ir/addmul_abc.onnx"}));
uint64_t seed1 = 0;
uint64_t seed2 = 0;
{
auto expected = ov::test::readModel(model, "");
ov::pass::Hash(seed1).run_on_model(expected);
}
{
auto expected = ov::test::readModel(model, "");
ov::pass::Hash(seed2).run_on_model(expected);
}
ASSERT_TRUE(seed1 == seed2);
}
#endif
TEST_F(SerializationDeterministicityTest, ModelWithMultipleOutputs) {

View File

@@ -80,9 +80,22 @@ TEST(type_prop, normalize_l2_axes_out_of_bounds) {
auto normalize = make_shared<op::v0::NormalizeL2>(data, axes, eps, eps_mode);
// Should have thrown, so fail if it didn't
FAIL() << "Invalid input tensor rank.";
- } catch (const NodeValidationFailure& error) {
- EXPECT_HAS_SUBSTRING(error.what(), std::string("Reduction axis ("));
+ } catch (const ov::AssertFailure& error) {
+ EXPECT_HAS_SUBSTRING(error.what(), std::string("(axis_range_min <= axis) && (axis <= axis_range_max)"));
} catch (...) {
FAIL() << "Deduced type check failed for unexpected reason";
}
}
+ TEST(type_prop, normalize_l2_negative_axes) {
+ PartialShape data_shape{1, 2, 3, 4};
+ auto data = make_shared<op::Parameter>(element::f32, data_shape);
+ const auto axes = make_shared<op::Constant>(element::i32, Shape{1}, vector<int64_t>{-1});
+ float eps{1e-6f};
+ auto eps_mode = op::EpsMode::ADD;
+ auto normalize = make_shared<op::v0::NormalizeL2>(data, axes, eps, eps_mode);
+ EXPECT_EQ(normalize->get_element_type(), element::f32);
+ EXPECT_EQ(normalize->get_reduction_axes(), ov::AxisSet{3});
+ EXPECT_EQ(normalize->get_output_partial_shape(0), data_shape);
+ }

View File

@@ -99,6 +99,27 @@ public:
const std::map<std::string, std::string>& config,
const std::function<void(const ie::CNNNetwork&)>& val = nullptr) = 0;
/**
* @brief Creates an executable network from a model memory.
*
* Users can create as many networks as they need and use
* them simultaneously (up to the limitation of the hardware resources)
*
* @param modelStr String data of model
* @param weights Model's weights
* @param deviceName Name of device to load network to
* @param config Optional map of pairs: (config parameter name, config parameter value) relevant only for this load
* operation
* @param val Optional callback to perform validation of loaded CNNNetwork, if ReadNetwork is triggered
* @return An executable network reference
*/
virtual ie::SoExecutableNetworkInternal LoadNetwork(
const std::string& modelStr,
const ie::Blob::CPtr& weights,
const std::string& deviceName,
const std::map<std::string, std::string>& config,
const std::function<void(const ie::CNNNetwork&)>& val = nullptr) = 0;
/**
* @brief Creates an executable network from a previously exported network
* @param networkModel network model stream

View File

@@ -13,7 +13,7 @@
namespace InferenceEngine {
struct PerfHintsConfig {
std::string ovPerfHint = "";
std::string ovPerfHint = "UNDEFINED";
int ovPerfHintNumRequests = 0;
/**
@@ -73,12 +73,12 @@ struct PerfHintsConfig {
*/
static std::string CheckPerformanceHintValue(const std::string& val) {
if (val == PluginConfigParams::LATENCY || val == PluginConfigParams::THROUGHPUT ||
val == PluginConfigParams::CUMULATIVE_THROUGHPUT || val == "")
val == PluginConfigParams::CUMULATIVE_THROUGHPUT || val == PluginConfigParams::UNDEFINED)
return val;
else
IE_THROW() << "Wrong value for property key " << PluginConfigParams::KEY_PERFORMANCE_HINT
<< ". Expected only " << PluginConfigParams::LATENCY << "/" << PluginConfigParams::THROUGHPUT
<< "/" << PluginConfigParams::CUMULATIVE_THROUGHPUT;
<< "/" << PluginConfigParams::CUMULATIVE_THROUGHPUT << "/" << PluginConfigParams::UNDEFINED;
}
/**
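With the default switched from an empty string to "UNDEFINED", the validator above now treats an explicit UNDEFINED like any other legal hint. A hedged usage sketch (header location assumed):

InferenceEngine::PerfHintsConfig cfg;  // cfg.ovPerfHint now starts as "UNDEFINED"
cfg.ovPerfHint = InferenceEngine::PerfHintsConfig::CheckPerformanceHintValue("UNDEFINED");  // accepted
// CheckPerformanceHintValue("FASTEST") would throw: not LATENCY/THROUGHPUT/CUMULATIVE_THROUGHPUT/UNDEFINED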

View File

@@ -268,6 +268,7 @@ DECLARE_CONFIG_VALUE(MODEL_PRIORITY_LOW);
DECLARE_CONFIG_KEY(PERFORMANCE_HINT);
DECLARE_CONFIG_VALUE(LATENCY);
DECLARE_CONFIG_VALUE(THROUGHPUT);
+ DECLARE_CONFIG_VALUE(UNDEFINED);
DECLARE_CONFIG_VALUE(CUMULATIVE_THROUGHPUT);
/**
* @brief (Optional) config key that backs the (above) Performance Hints

View File

@@ -255,6 +255,44 @@ public:
return compile_model(model_path, device_name, AnyMap{std::forward<Properties>(properties)...});
}
/**
* @brief Reads a model and creates a compiled model from the IR/ONNX/PDPD memory.
* @param model String with a model in IR/ONNX/PDPD format.
* @param weights Shared pointer to a constant tensor with weights.
* Reading ONNX/PDPD models does not support loading weights from the @p weights tensors.
* @param device_name Name of a device to load a model to.
* @param properties Optional map of pairs: (property name, property value) relevant only for this load
* operation.
* @note Created model object shares the weights with the @p weights object.
* Thus, do not create @p weights on temporary data that can be freed later, since the model
* constant data will point to an invalid memory.
* @return A compiled model.
*/
CompiledModel compile_model(const std::string& model,
const ov::Tensor& weights,
const std::string& device_name,
const AnyMap& properties = {});
/**
* @brief Reads a model and creates a compiled model from the IR/ONNX/PDPD memory.
* @param model String with a model in IR/ONNX/PDPD format.
* @param weights Shared pointer to a constant tensor with weights.
* Reading ONNX/PDPD models does not support loading weights from the @p weights tensors.
* @param device_name Name of a device to load a model to.
* @tparam Properties Should be a pack of `std::pair<std::string, ov::Any>` types.
* @note Created model object shares the weights with the @p weights object.
* Thus, do not create @p weights on temporary data that can be freed later, since the model
* constant data will point to an invalid memory.
* @return A compiled model.
*/
template <typename... Properties>
util::EnableIfAllStringAny<CompiledModel, Properties...> compile_model(const std::string& model,
const ov::Tensor& weights,
const std::string& device_name,
Properties&&... properties) {
return compile_model(model, weights, device_name, AnyMap{std::forward<Properties>(properties)...});
}
/**
* @brief Creates a compiled model from a source model within a specified remote context.
* @param model Model object acquired from Core::read_model.
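A hedged usage sketch of the new in-memory overload above; read_file and wrap_weights are hypothetical helpers, and, as the @note warns, the weights tensor must outlive the compiled model because its memory is shared rather than copied:

ov::Core core;
const std::string model_xml = read_file("model.xml");  // hypothetical helper returning the IR xml text
const ov::Tensor weights = wrap_weights("model.bin");   // hypothetical helper wrapping the .bin payload
auto compiled = core.compile_model(model_xml, weights, "CPU");
// keep `weights` alive for the whole lifetime of `compiled`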

Some files were not shown because too many files have changed in this diff.