Compare commits

..

3 Commits

Author SHA1 Message Date
Eddy Kim
8e464e992e using calloc instead of malloc for deterministic hashing (#16326) 2023-03-20 19:50:14 +04:00
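The calloc-for-malloc swap above matters because struct padding bytes left uninitialized by `malloc` make byte-wise hashing non-reproducible. A minimal sketch of the idea (hypothetical `Node`/`hash_node` names, not OpenVINO's actual hashing code):

```cpp
#include <cstdint>
#include <cstdlib>

// Hypothetical struct with internal padding between 'tag' and 'id' on
// typical ABIs; those padding bytes are indeterminate after malloc.
struct Node {
    char tag;
    std::uint64_t id;
};

// FNV-1a over raw bytes.
static std::uint64_t fnv1a(const unsigned char* p, std::size_t n) {
    std::uint64_t h = 1469598103934665603ULL;
    for (std::size_t i = 0; i < n; ++i) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

// calloc zero-fills the whole allocation, padding included, so hashing
// the raw bytes of the struct yields the same value on every run.
std::uint64_t hash_node(char tag, std::uint64_t id) {
    Node* n = static_cast<Node*>(std::calloc(1, sizeof(Node)));
    n->tag = tag;
    n->id = id;
    std::uint64_t h = fnv1a(reinterpret_cast<const unsigned char*>(n), sizeof(Node));
    std::free(n);
    return h;
}
```

With `malloc` instead of `calloc`, the same two fields could hash differently across runs because the padding bytes between them are unspecified.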
Ilya Lavrenov
d1a7b0e3c0 Releases/2022/3 (#16409)
* Docs: Update the doc on default hint and execution devices property (#14836)

* Docs: Update to LATENCY as default hint
* Docs: Update the doc on execution devices property
* Update auto_device_selection.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* 22.3: remove tbb version check for using tbbbind static library (#15700)

* update symbolic link on uninstall page (#15720)

* Update deployment_simplified.svg (#15681)

* [NormalizeL2] normalization of reduction axes (#15841) (#15879)

* Add test for negative axes, preliminary solution to fix incorrect results

* Normalize axes in operation NormalizeL2

* Add test for negative axes

* Add EOF
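Normalizing reduction axes, as the NormalizeL2 fix above does, means mapping negative axis values to their positive equivalents before use. A generic sketch of that normalization (a hypothetical helper, not the actual transformation-pass code):

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Map each axis into [0, rank): a negative axis counts from the end,
// e.g. axis -1 with rank 4 becomes 3. Out-of-range axes are rejected.
std::vector<std::int64_t> normalize_axes(std::vector<std::int64_t> axes,
                                         std::int64_t rank) {
    for (auto& axis : axes) {
        if (axis < -rank || axis >= rank)
            throw std::out_of_range("axis out of range for given rank");
        if (axis < 0)
            axis += rank;  // e.g. -1 -> rank - 1
    }
    return axes;
}
```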

* [67541] - face-detection-0205, 0206 issues fixed (incorrect dimensions error) (#14687)

* [CVS-67541] - face-detection-0205, 0206 issues fixed (incorrect dimensions error)
* [CVS-67541] - face-detection-0205, 0206 issues fixed

* Conversion fail for ov::hint::performance_mode with UNDEFINED value (#15903)

* Update ov::hint::performance_hint UNDEFINED value from empty string to "UNDEFINED".
Update benchmark Python version.
Update the description about hint setting within benchmark APP README and help message.

* Drop the redundant changes.
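The fix above changes the string form of the UNDEFINED performance hint from an empty string to the literal "UNDEFINED", so converting the value back from a string no longer fails. The round-trip idea can be sketched with a stand-in enum (hypothetical names, not the real ov::hint code):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical stand-in for ov::hint::PerformanceMode.
enum class PerformanceMode { UNDEFINED, LATENCY, THROUGHPUT, CUMULATIVE_THROUGHPUT };

// Serializing UNDEFINED as the literal "UNDEFINED" (rather than an
// empty string) keeps to_string/from_string round-trips lossless.
std::string to_string(PerformanceMode m) {
    switch (m) {
    case PerformanceMode::UNDEFINED: return "UNDEFINED";
    case PerformanceMode::LATENCY: return "LATENCY";
    case PerformanceMode::THROUGHPUT: return "THROUGHPUT";
    case PerformanceMode::CUMULATIVE_THROUGHPUT: return "CUMULATIVE_THROUGHPUT";
    }
    throw std::invalid_argument("unknown performance mode");
}

PerformanceMode mode_from_string(const std::string& s) {
    if (s == "UNDEFINED") return PerformanceMode::UNDEFINED;
    if (s == "LATENCY") return PerformanceMode::LATENCY;
    if (s == "THROUGHPUT") return PerformanceMode::THROUGHPUT;
    if (s == "CUMULATIVE_THROUGHPUT") return PerformanceMode::CUMULATIVE_THROUGHPUT;
    throw std::invalid_argument("unknown performance mode: " + s);
}
```

With the old empty-string form, `mode_from_string(to_string(UNDEFINED))` would have received "" and thrown, which is the conversion failure the commit addresses.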

* Supported OpenSUSE 15.3 (#15897) (#15907)

* [DOCS] Structure change for 'AUTO Device Selection' article - post merge fix (#15752)

* aligning with 14750

* Fixed samples build on Debian 10 with cmake 3.13 (#15939)

* Fixed samples build on Debian 10 with cmake 3.13

* Use 2022/3 branches

* Limit setuptools version

* Fixed issues in setupvars.sh (#15884) (#15952)

* Fixed issues with setupvars.sh

* Fixes setupvars realpath error

---------

Co-authored-by: Otoka, Tomasz <tomasz.otoka@intel.com>

* Apivalidator (#15951)

* Improved API validator logic (#15942)

* Fix for apiValidator when more than 1 target needs to be checked (#15950)

* Prevent infinite recursion

* [Snippets] Added matcher_name in ConvertConstantsToScalars pass (#15977)

* Install libtbb2 instead of libtbb12 on U22.04 (#15993)

* Apply Apivalidator to extra TBB libs (#15998)

* [GNA] Changed max layer limit tests to avoid SEH exceptions (#15015) (#15460)

* split test model

* Changed test config

* Set SF for all inputs

* [Transformations] Enable missing runtime info check (#15796) (#15972)

* Add rt info propagation to StridesOptimization

* Enable rt info check for pruning tests

* Fixed clang-format for C API (#16025)

* Port to 2022.3 from master (#16049)

* notebooks update (#16091)

20230302220806

* Update Customize_Model_Optimizer.md (#15687)

Recreating #14062

* fix benchmark_app python to support YES and NO values for -pin parameter (#16042)

* support YES and NO for -pin

* add if property_name == 'AFFINITY'

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
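The `-pin` fix above makes benchmark_app accept YES and NO in addition to the named affinity policies. The mapping can be sketched with a stand-in enum (hypothetical names, not the actual benchmark_app or ov::Affinity code):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical stand-in for ov::Affinity.
enum class Affinity { NONE, CORE, NUMA, HYBRID_AWARE };

// benchmark_app-style parsing: YES pins inference threads to cores,
// NO disables pinning, and the remaining values select the matching
// affinity policy by name.
Affinity parse_pin(const std::string& value) {
    if (value == "YES") return Affinity::CORE;
    if (value == "NO") return Affinity::NONE;
    if (value == "NUMA") return Affinity::NUMA;
    if (value == "HYBRID_AWARE") return Affinity::HYBRID_AWARE;
    throw std::invalid_argument("unsupported -pin value: " + value);
}
```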

* [Docs] nv12 changes port to 22.3 (#16115)

Port:
#15370
#16004

add single-plane input information
create single-plane cpp snippet
menu fix
update formatting for sphinx directives
additional snippet fixes
---------
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>

* [DOCS] Port  Frontend Extensions and OTX page (#16135)

* [DOCS] Add OTX page to Ecosystem  (#16118)

* add otx page

* change ecosystem page

* add ote img

* move ote page to rst

* fix path

* add path

* img test

* otx page

* add docs to ecosystem page

* [DOCS] Fix Frontend Extensions snippets (#16120)

* move fe to rst

* fix code snippets

* add more line breaks

* fix tabsets

* fix link

* fix anchor

* test

* fixing link

* change tab directive

* fix tabs

* align code tabs

* fix link

* fix snippets

* add dlwb to ecosystem

* change ecosystem menu

* exclude fe page

* Port to 2022.3 (#16174)

* Remove setuptools upperbound (#16054)

* Added missed licenses to openvino-dev (#16057)

* Fixed OpenMP + debian package code-path (#16058)

* [CPU] Prevent out of bounds read inside Graph::InferDynamic (#16067)

* Fixed compilation on Debian 11 with gcc 12.2 (#16096)

* Fix for OpenCL

---------

Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Maksim Kutakov <maksim.kutakov@intel.com>

* Docs benchmarks page update port 22.3 (#16187)

changes to benchmarks page to align with theme

* Andreib/2022.3 myriad plugin obs (#16079)

* Changed to OBS firmware

* Changed dependencies settings for new FW

---------

Co-authored-by: Daria Mityagina <daria.mityagina@intel.com>

* port-16085 (#16210)

* 234 update (#16212)

Adding notebook 234-encodec-audio-compression

* [DOCS] Adding 'Scrollbox' - new sphinx directive (#15307)

port https://github.com/openvinotoolkit/openvino/pull/15305

* [DOCS] Updating 'Prerequisites' section in `Configurations for GNA` article - for 22.3 (#16237)

* issue-15090
Add command for installation of prerequisites on Linux.

* DOCS-image-fix port22.3 (#16341)

(#16324)
(#16308)

* Clearing of CustomReplacementRegistry.registry in convert_model() (#15893) (#16347)

* Clearing of CustomReplacementRegistry.registry.

* Added test.

* Fixed clearing of pipeline config params and TF session in convert_model() (#16191) (#16346)

* Fixed pipeline config params clearing.

* Added clearing of TF session. Added tests.

---------

Co-authored-by: Wang Wangwang <wangwang.wang@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Fang Xu <fang.xu@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>
Co-authored-by: Daria Mityagina <daria.mityagina@intel.com>
Co-authored-by: Wang, Yang <yang4.wang@intel.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
Co-authored-by: Otoka, Tomasz <tomasz.otoka@intel.com>
Co-authored-by: Alexandra Sidorova <alexandra.sidorova@intel.com>
Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
Co-authored-by: Haiqi Pan <haiqi.pan@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Przemyslaw Wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Maksim Kutakov <maksim.kutakov@intel.com>
Co-authored-by: Andrei-George Boji <andrei-george.boji@intel.com>
Co-authored-by: Anastasiia Pnevskaia <anastasia.popova@intel.com>
2023-03-20 19:47:11 +04:00
Xuejun Zhai
b692afc764 Xuejun/port cache model api (#15637)
* Add new compile model api to support hash model memory (#14543)

* Add new compile_model api for ONNX RUNTIME OV EP

Allow compile_model() accept model/weight data.

* Update minor place

* Cache model if possible

* Compute hash based on model_xml and model_weight

* Update typo

* Change hash key computation for model's weights

* Resolve test case issue

* Use tensor replace blob for hash computation

* Fix hash computation issue and add more test cases

* Fix a build issue caused by data format

* Add ov::loaded_from_cache checking for CompileModelLoadFromMemoryTest (#15030)

* Add ov::loaded_from_cache checking for CompileModelLoadFromMemoryTestBase

* Skip gna in skip_tests_config

* Ignore empty tensor for hash calculation (#15282)

* Ignore empty tensor for hash calculation

* Added test

* Fix conflict

* Trigger ci run test for customer_A branch

---------

Co-authored-by: River Li <river.li@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-02-13 15:29:30 +04:00
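The commit above derives the model-cache hash from the in-memory model XML plus its weights, and skips empty tensors. A simplified sketch of such a cache key (FNV-1a here for illustration; the real implementation and function names differ):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// FNV-1a, folding new bytes into an existing hash state.
static std::uint64_t fnv1a(std::uint64_t h, const unsigned char* p, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

// Cache key over both the model XML text and the raw weight bytes:
// two models that differ only in weights must not share a cache entry.
// An empty weights buffer is skipped, mirroring the "ignore empty
// tensor" fix in the commit above.
std::uint64_t model_cache_key(const std::string& model_xml,
                              const std::vector<unsigned char>& weights) {
    std::uint64_t h = 1469598103934665603ULL;
    h = fnv1a(h, reinterpret_cast<const unsigned char*>(model_xml.data()),
              model_xml.size());
    if (!weights.empty())
        h = fnv1a(h, weights.data(), weights.size());
    return h;
}
```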
219 changed files with 2800 additions and 5182 deletions

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -261,19 +263,12 @@ jobs:
displayName: 'Remove debian dependencies'
condition: eq(variables['CMAKE_CPACK_GENERATOR'], 'DEB')
continueOnError: false
- script: cmake -DCOMPONENT=python_wheels -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P $(BUILD_DIR)/cmake_install.cmake
displayName: 'Install wheel packages'
- script: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P $(BUILD_LAYER_TESTS_DIR)/cmake_install.cmake
displayName: 'Install Layer Tests'
- script: |
set -e
python3 -m pip install $(INSTALL_DIR)/tools/openvino-*
python3 -m pip install $(INSTALL_DIR)/tools/openvino_dev-*
- script: python3 -m pip install openvino-dev --find-links=$(INSTALL_DIR)/tools
displayName: 'Install python wheels'
- script: |
set -e
cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -DCOMPONENT=tests -P $(BUILD_DIR)/cmake_install.cmake

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -138,7 +140,6 @@ jobs:
-DENABLE_PYTHON=OFF
-DENABLE_NVIDIA=ON
-DENABLE_TESTS=ON
-DENABLE_DATA=OFF
/root/repos/openvino &&
/root/w/ninja -v CudaFuncTests CudaUnitTests"
workingDirectory: $(WORK_DIR)

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -17,6 +18,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -3,6 +3,7 @@ trigger:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/
@@ -16,6 +17,7 @@ pr:
include:
- master
- releases/*
- customer_A
paths:
exclude:
- docs/

View File

@@ -62,8 +62,6 @@ jobs:
python3 -m pip install -r docs/requirements.txt --user
cd docs/openvino_sphinx_theme
python3 setup.py install --user
cd ../openvino_custom_sphinx_sitemap
python3 setup.py install --user
cd ../..
# install doxyrest
wget https://github.com/vovkos/doxyrest/releases/download/doxyrest-2.1.3/doxyrest-2.1.3-linux-amd64.tar.xz

View File

@@ -92,8 +92,8 @@ jobs:
- name: Install Clang dependency
run: |
sudo apt update
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13 clang-15
sudo apt --assume-yes install clang-14 libclang-14-dev
sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13
sudo apt --assume-yes install libclang-14-dev
- name: Install Python-based dependencies
run: python3 -m pip install -r cmake/developer_package/ncc_naming_style/requirements_dev.txt

View File

@@ -101,10 +101,10 @@ function(ov_download_tbb)
if(WIN32 AND X86_64)
# TODO: add target_path to be platform specific as well, to avoid following if
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "tbb2020_81e4471_win.zip"
ARCHIVE_WIN "tbb2020_617e9a71_win.zip"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "5e7c9dc430e8a61becd0b149668cb336ff44d0b4f8f823fc695b181880e213d2"
SHA256 "01cac3cc48705bd52b83a6e1fa1ed95c708928be76160f5b9c5c37f954d56df4"
USE_NEW_LOCATION TRUE)
elseif(ANDROID AND X86_64)
RESOLVE_DEPENDENCY(TBB

View File

@@ -363,8 +363,6 @@ else()
endif()
endif()
check_cxx_compiler_flag("-Wunused-but-set-variable" UNUSED_BUT_SET_VARIABLE_SUPPORTED)
# Links provided libraries and include their INTERFACE_INCLUDE_DIRECTORIES as SYSTEM
function(link_system_libraries TARGET_NAME)
set(MODE PRIVATE)

View File

@@ -110,12 +110,12 @@ ie_dependent_option (GAPI_TEST_PERF "if GAPI unit tests should examine performan
ie_dependent_option (ENABLE_MYRIAD_MVNC_TESTS "functional and behavior tests for mvnc api" OFF "ENABLE_TESTS;ENABLE_INTEL_MYRIAD" OFF)
ie_dependent_option (ENABLE_DATA "fetch models from testdata repo" ON "ENABLE_FUNCTIONAL_TESTS;NOT ANDROID" OFF)
ie_dependent_option (ENABLE_BEH_TESTS "tests oriented to check OpenVINO Runtime API correctness" ON "ENABLE_TESTS" OFF)
ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS" OFF)
ie_dependent_option (ENABLE_DATA "fetch models from testdata repo" ON "ENABLE_FUNCTIONAL_TESTS;NOT ANDROID" OFF)
ie_option (ENABLE_SAMPLES "console samples are part of OpenVINO Runtime package" ON)
ie_option (ENABLE_OPENCV "enables custom OpenCV download" OFF)

View File

@@ -37,7 +37,6 @@ include(CMakeFindDependencyMacro)
find_dependency(OpenVINO
PATHS "${CMAKE_CURRENT_LIST_DIR}"
"${CMAKE_CURRENT_LIST_DIR}/../openvino${InferenceEngine_VERSION}"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)

View File

@@ -274,14 +274,6 @@ endfunction()
# OpenVINO config
#
cmake_policy(PUSH)
# we need CMP0057 to allow IN_LIST in if() command
if(POLICY CMP0057)
cmake_policy(SET CMP0057 NEW)
else()
message(FATAL_ERROR "OpenVINO requires CMake 3.3 or newer")
endif()
# need to store current PACKAGE_PREFIX_DIR, because it's overwritten by sub-package one
set(_ov_package_prefix_dir "${PACKAGE_PREFIX_DIR}")
@@ -297,30 +289,10 @@ _ov_find_dependency(Threads)
unset(_OV_ENABLE_OPENVINO_BUILD_SHARED)
set(RUNTIME_AND_FRONTEND_TARGETS openvino::runtime openvino::runtime::c
openvino::frontend::onnx openvino::frontend::paddle
openvino::frontend::tensorflow)
if(NOT TARGET openvino)
set(_ov_as_external_package ON)
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
foreach(target ${RUNTIME_AND_FRONTEND_TARGETS})
if(TARGET ${target})
get_target_property(ORIGINAL_NAME ${target} ALIASED_TARGET)
if (ORIGINAL_NAME MATCHES "ORIGINAL_NAME-NOTFOUND")
set (ORIGINAL_NAME ${target})
endif()
get_target_property(imported_configs ${target} IMPORTED_CONFIGURATIONS)
if(RELWITHDEBINFO IN_LIST imported_configs)
set (MAP_CONFIG RELWITHDEBINFO)
else()
set (MAP_CONFIG RELEASE)
endif()
set_property(TARGET ${ORIGINAL_NAME} PROPERTY MAP_IMPORTED_CONFIG_RELWITHDEBINFO ${MAP_CONFIG})
endif()
endforeach()
# WA for cmake version < 3.16 which does not export
# IMPORTED_LINK_DEPENDENT_LIBRARIES_** properties if no PUBLIC dependencies for the library
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND TBB_FOUND)
@@ -360,7 +332,8 @@ endif()
# Apply common functions
#
foreach(target ${RUNTIME_AND_FRONTEND_TARGETS})
foreach(target openvino::runtime openvino::runtime::c
openvino::frontend::onnx openvino::frontend::paddle openvino::frontend::tensorflow)
if(TARGET ${target} AND _ov_as_external_package)
_ov_target_no_deprecation_error(${target})
endif()
@@ -378,10 +351,7 @@ if(_need_package_name_reset)
unset(_need_package_name_reset)
endif()
unset(RUNTIME_AND_FRONTEND_TARGETS)
unset(${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND)
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND)
cmake_policy(POP)

View File

@@ -1,5 +1,5 @@
# ******************************************************************************
# Copyright 2017-2023 Intel Corporation
# Copyright 2017-2022 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,7 +15,7 @@
# ******************************************************************************
#
#
# ngraph config file
# FindNGraph
# ------
#
# This script defines the following variables and imported targets:
@@ -44,7 +44,7 @@ include(CMakeFindDependencyMacro)
find_dependency(OpenVINO
PATHS "${CMAKE_CURRENT_LIST_DIR}"
"${CMAKE_CURRENT_LIST_DIR}/../openvino${ngraph_VERSION}"
"${CMAKE_CURRENT_LIST_DIR}/ngraph"
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)

View File

@@ -1,71 +0,0 @@
# Datumaro {#datumaro_documentation}
@sphinxdirective
Datumaro provides a suite of basic data import/export (IE) for more than 35 public vision data
formats and manipulation functionalities such as validation, correction, filtration, and some
transformations. To achieve the web-scale training, this further aims to merge multiple
heterogeneous datasets through comparator and merger. Datumaro is integrated into Geti™, OpenVINO™
Training Extensions, and CVAT for the ease of data preparation. Datumaro is open-sourced and
available on `GitHub <https://github.com/openvinotoolkit/datumaro>`__.
Refer to the official `documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__ to learn more.
Plus, enjoy `Jupyter notebooks <https://github.com/openvinotoolkit/datumaro/tree/develop/notebooks>`__ for the real Datumaro practices.
Detailed Workflow
#################
.. image:: ./_static/images/datumaro.png
1. To start working with Datumaro, download public datasets or prepare your own annotated dataset.
.. note::
Datumaro provides a CLI `datum download` for downloading `TensorFlow Datasets <https://www.tensorflow.org/datasets>`__.
2. Import data into Datumaro and manipulate the dataset for the data quality using `Validator`, `Corrector`, and `Filter`.
3. Compare two datasets and transform the label schemas (category information) before merging them.
4. Merge two datasets to a large-scale dataset.
.. note::
There are some choices of merger, i.e., `ExactMerger`, `IntersectMerger`, and `UnionMerger`.
5. Split the unified dataset into subsets, e.g., `train`, `valid`, and `test` through `Splitter`.
.. note::
We can split data with a given ratio of subsets according to both the number of samples or
annotations. Please see `SplitTask` for the task-specific split.
6. Export the cleaned and unified dataset for follow-up workflows such as model training.
Go to :doc:`OpenVINO™ Training Extensions <ote_documentation>`.
If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation.
Datumaro Components
###################
* `Datumaro CLIs <https://openvinotoolkit.github.io/datumaro/stable/docs/command-reference/overview.html>`__
* `Datumaro APIs <https://openvinotoolkit.github.io/datumaro/stable/docs/reference/datumaro_module.html>`__
* `Datumaro data format <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/datumaro_format.html>`__
* `Supported data formats <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/formats/index.html>`__
Tutorials
#########
* `Basic skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/basic_skills/index.html>`__
* `Intermediate skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/intermediate_skills/index.html>`__
* `Advanced skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/advanced_skills/index.html>`__
Python Hands-on Examples
########################
* `Data IE <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/dataset_IO.html>`__
* `Data manipulation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/manipulate.html>`__
* `Data exploration <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/explore.html>`__
* `Data refinement <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/refine.html>`__
* `Data transformation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/transform.html>`__
* `Deep learning end-to-end use-cases <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/e2e_example.html>`__
@endsphinxdirective

View File

@@ -33,4 +33,5 @@ Once you have a model that meets both OpenVINO™ and your requirements, you can
@endsphinxdirective
OpenVINO 2023.0 provides more options, enabling inference of TensorFlow models with no additional conversion.
Apart from the default deployment options, you may also [deploy your application for the TensorFlow framework with OpenVINO Integration](./openvino_ecosystem_ovtf.md).

View File

@@ -7,7 +7,7 @@
:hidden:
ote_documentation
datumaro_documentation
ovtf_integration
ovsa_get_started
openvino_inference_engine_tools_compile_tool_README
openvino_docs_tuning_utilities
@@ -36,16 +36,6 @@ More resources:
* [GitHub](https://github.com/openvinotoolkit/training_extensions)
* [Documentation](https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html)
### Dataset Management Framework (Datumaro)
A framework and CLI tool to build, transform, and analyze datasets.
More resources:
* [Overview](@ref datumaro_documentation)
* [PyPI](https://pypi.org/project/datumaro/)
* [GitHub](https://github.com/openvinotoolkit/datumaro)
* [Documentation](https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html)
### OpenVINO™ Security Add-on
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
@@ -57,8 +47,6 @@ More resources:
### OpenVINO™ integration with TensorFlow (OVTF)
A solution empowering TensorFlow developers with OpenVINO's optimization capabilities. With just two lines of code in your application, you can offload inference to OpenVINO, while keeping the TensorFlow API.
OpenVINO™ Integration with TensorFlow will no longer be supported as of OpenVINO release 2023.0. As part of the 2023.0 release, OpenVINO will feature a significantly enhanced TensorFlow user experience within native OpenVINO without needing offline model conversions.
More resources:
* [documentation](https://github.com/openvinotoolkit/openvino_tensorflow)
* [PyPI](https://pypi.org/project/openvino-tensorflow/)
@@ -89,3 +77,11 @@ More resources:
* [Docker Hub](https://hub.docker.com/r/openvino/cvat_server)
* [GitHub](https://github.com/openvinotoolkit/cvat)
### Dataset Management Framework (Datumaro)
A framework and CLI tool to build, transform, and analyze datasets.
More resources:
* [documentation on GitHub](https://openvinotoolkit.github.io/datumaro/docs/)
* [PyPI](https://pypi.org/project/datumaro/)
* [GitHub](https://github.com/openvinotoolkit/datumaro)

View File

@@ -0,0 +1,44 @@
# OpenVINO™ integration with TensorFlow {#ovtf_integration}
**OpenVINO™ integration with TensorFlow** is a solution for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. By adding just two lines of code you can now take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a range of Intel® computation devices.
This is all you need:
```python
import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')
```
**OpenVINO™ integration with TensorFlow** accelerates inference across many AI models on a variety of Intel® technologies, such as:
- Intel® CPUs
- Intel® integrated GPUs
- Intel® Movidius™ Vision Processing Units - referred to as VPU
- Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL
> **NOTE**: For maximum performance, efficiency, tooling customization, and hardware control, we recommend developers to adopt native OpenVINO™ solutions.
To find out more about the product itself, as well as learn how to use it in your project, check its dedicated [GitHub repository](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs).
To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples folder](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples) in our GitHub repository.
Sample tutorials are also hosted on [Intel® DevCloud](https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html). The demo applications are implemented using Jupyter Notebooks. You can interactively execute them on Intel® DevCloud nodes, compare the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.
## License
**OpenVINO™ integration with TensorFlow** is licensed under [Apache License Version 2.0](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/LICENSE).
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.
## Support
Submit your questions, feature requests and bug reports via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).
## How to Contribute
We welcome community contributions to **OpenVINO™ integration with TensorFlow**. If you have an idea for improvement:
* Share your proposal via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).
* Submit a [pull request](https://github.com/openvinotoolkit/openvino_tensorflow/pulls).
We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for your feature. Upon our verification of your pull request, we will merge it to the repository provided that the pull request has met the above mentioned requirements and proved acceptable.
---
\* Other names and brands may be claimed as the property of others.

View File

@@ -26,15 +26,14 @@ If the results are unsatisfactory, add datasets and perform the same steps, star
OpenVINO Training Extensions Components
#######################################
* `OpenVINO Training Extensions API <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/api>`__
* `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/cli>`__
* `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/algorithms>`__
- `OpenVINO Training Extensions SDK <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_sdk>`__
- `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_cli>`__
- `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/master/external>`__
Tutorials
#########
* `Base tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/base/index.html>`__
* `Advanced tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/advanced/index.html>`__
`Object Detection <https://github.com/openvinotoolkit/training_extensions/blob/master/ote_cli/notebooks/train.ipynb>`__
@endsphinxdirective

View File

@@ -1010,6 +1010,7 @@ EXCLUDE_SYMBOLS = InferenceEngine::details \
ie_api::BlobBuffer \
*impl* \
*device_name* \
*num_requests* \
*exec_net* \
*c_config* \
*ie_core_impl* \

View File

@@ -1,7 +1,5 @@
# How to Implement Custom Layers for VPU (Intel® Neural Compute Stick 2) {#openvino_docs_Extensibility_UG_VPU_Kernel}
To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for one the VPU, the Intel® Neural Compute Stick 2 device, which uses the MYRIAD device plugin.
> **NOTE:**

View File

@@ -1,18 +1,14 @@
# Frontend Extensions {#openvino_docs_Extensibility_UG_Frontend_Extensions}
@sphinxdirective
The goal of this chapter is to explain how to use Frontend extension classes to facilitate mapping of custom operations from framework model representation to OpenVINO representation. Refer to [Introduction to OpenVINO Extension](Intro.md) to understand entire flow.
The goal of this chapter is to explain how to use Frontend extension classes to facilitate mapping of custom operations from framework model representation to OpenVINO representation. Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to understand entire flow.
This API is applicable for new frontends only, which exist for ONNX and PaddlePaddle. If a different model format is used, follow legacy [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) guide.
This API is applicable for new frontends only, which exist for ONNX and PaddlePaddle. If a different model format is used, follow legacy :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` guide.
> **NOTE**: This documentation is written based on the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new), which demonstrates extension development details based on minimalistic `Identity` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.
.. note::
This documentation is written based on the `Template extension <https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new>`__, which demonstrates extension development details based on minimalistic ``Identity`` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.
## Single Operation Mapping with OpExtension
Single Operation Mapping with OpExtension
#########################################
This section covers the case when a single operation in framework representation is mapped to a single operation in OpenVINO representation. This is called *one-to-one mapping*. There is ``OpExtension`` class that works well if all the following conditions are satisfied:
This section covers the case when a single operation in framework representation is mapped to a single operation in OpenVINO representation. This is called *one-to-one mapping*. There is `OpExtension` class that works well if all the following conditions are satisfied:
1. Number of inputs to operation in the Framework representation is the same as in the OpenVINO representation.
@@ -24,87 +20,54 @@ This section covers the case when a single operation in framework representation
5. Each attribute in OpenVINO operation can be initialized from one of the attributes of original operation or by some predefined constant value. Value of copied attributes cannot contain expressions, value is accepted as-is, so type of a value should be compatible.
.. note::
``OpExtension`` class is currently available for ONNX frontend. PaddlePaddle frontend has named inputs and outputs for operation (not indexed) therefore OpExtension mapping is not applicable for this case.
> **NOTE**: `OpExtension` class is currently available for ONNX frontend only. PaddlePaddle frontend has named inputs and outputs for operation (not indexed) therefore OpExtension mapping is not applicable for this case.
The next example maps ONNX operation with type `Identity <https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity>`__ to OpenVINO template extension ``Identity`` class.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_Identity_header]
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_Identity]
The mapping doesn't involve any attributes, as the ``Identity`` operation doesn't have them.
Extension objects, like the just-constructed ``extension``, can be added to the OpenVINO runtime just before loading a model that contains custom operations:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_read_model]
Alternatively, extensions can be built into a separately compiled shared library, which can be used in Model Optimizer or ``benchmark_app``. Read how to build and load such a library in the “Create library with extensions” chapter of :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>`.
If an operation has multiple inputs and/or outputs, they are mapped in order. The type of elements in input/output tensors should match the types expected by the surrounding operations. For example, if a custom operation produces the ``f32`` data type, the operation that consumes this output should also support ``f32``. Otherwise, model conversion fails with an error; no automatic type conversion is performed.
Converting to Standard OpenVINO Operation
+++++++++++++++++++++++++++++++++++++++++
The ``OpExtension`` class can be used when you need to map to one of the operations from the standard OpenVINO operation set and there is no class like ``TemplateExtension::Identity`` implemented.
Here is an example for a custom framework operation “MyRelu”. Suppose it is mathematically equivalent to the standard ``Relu`` operation that exists in the OpenVINO operation set but, for some reason, has the type name “MyRelu”. In this case, you can directly specify that the “MyRelu” -> ``Relu`` mapping should be used:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_MyRelu]
.. tab-item:: Python
:sync: python
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_MyRelu]
In the resulting converted OpenVINO model, the “MyRelu” operation will be replaced by the standard ``Relu`` operation from the latest available OpenVINO operation set. Notice that when a standard operation is used, it can be specified using just a type string (“Relu”) instead of the ``ov::opset8::Relu`` class name as a template parameter for ``OpExtension``. This method is available for operations from the standard operation set only. For a user's custom OpenVINO operation, the corresponding class should always be specified as a template parameter, as was demonstrated with ``TemplateExtension::Identity``.
Attributes Mapping
++++++++++++++++++
As described above, ``OpExtension`` is useful when attributes can be mapped one by one or initialized by a constant. If the sets of attributes in the framework representation and the OpenVINO representation completely match by name and type, nothing needs to be specified in the ``OpExtension`` constructor parameters. The attributes are discovered and mapped automatically based on the ``visit_attributes`` method, which should be defined for any OpenVINO operation.
Imagine you have a ``CustomOperation`` class implementation with two attributes, named ``attr1`` and ``attr2``:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation]
And the original model in framework representation also has an operation named “CustomOperation” with the same ``attr1`` and ``attr2`` attributes. Then, with the following code:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_as_is]
both ``attr1`` and ``attr2`` are copied from the framework representation to the OpenVINO representation automatically. If, for some reason, the names of the attributes differ but their values can still be copied “as-is”, you can pass an attribute name mapping in the ``OpExtension`` constructor:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename]
Where ``fw_attr1`` and ``fw_attr2`` are names for corresponding attributes in framework operation representation.
If copying an attribute is not what you need, ``OpExtension`` can also set an attribute to a predefined constant value. For the same ``CustomOperation``, imagine you want to set ``attr2`` to the value 5 instead of copying it from ``fw_attr2``. To achieve that, do the following:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_rename_set]
So the conclusion is that each attribute of target OpenVINO operation should be initialized either by
@@ -116,62 +79,27 @@ So the conclusion is that each attribute of target OpenVINO operation should be
This is achieved by specifying maps as arguments for the ``OpExtension`` constructor.
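The three initialization modes described above can be sketched in plain Python. This is a conceptual illustration of the mapping logic only, not the OpenVINO API; ``map_attributes`` is a hypothetical helper, and the attribute names mirror the examples above:

```python
# Conceptual sketch of OpExtension attribute initialization (not the real API).
# Each target attribute comes from a same-name copy, a renamed copy, or a constant.
def map_attributes(fw_attrs, renames=None, constants=None):
    """renames:   {ov_name: fw_name} for attributes copied under a new name.
    constants:    {ov_name: value}   for attributes set to a predefined value.
    With no maps given, attributes are matched by name automatically."""
    if renames is None and constants is None:
        return dict(fw_attrs)                      # automatic one-by-one copy
    ov_attrs = {ov: fw_attrs[fw] for ov, fw in (renames or {}).items()}
    ov_attrs.update(constants or {})               # constants are taken as-is
    return ov_attrs

fw = {"fw_attr1": 0.1, "fw_attr2": 3}
# rename both attributes, copying values as-is
print(map_attributes(fw, renames={"attr1": "fw_attr1", "attr2": "fw_attr2"}))
# rename attr1, but force attr2 to the constant 5
print(map_attributes(fw, renames={"attr1": "fw_attr1"}, constants={"attr2": 5}))
```
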
Mapping to Multiple Operations with ConversionExtension
#######################################################
Previous sections cover the case when a single operation is mapped to a single operation with optional adjustments in names and attribute values. That is likely enough for your own custom operation with an existing C++ kernel implementation. In this case, the framework representation and OpenVINO representation for the operation are under your control, and inputs/outputs/attributes can be aligned to make ``OpExtension`` usable.
If one-to-one mapping is not possible, *decomposition to multiple operations* should be considered. It is achieved by using the more verbose and less automated ``ConversionExtension`` class. It enables writing arbitrary code to replace a single framework operation with multiple connected OpenVINO operations, constructing a dependency graph of any complexity.
``ConversionExtension`` maps a single operation to a function which builds a graph using OpenVINO operation classes. Follow chapter :ref:`Build a Model in OpenVINO Runtime <ov_ug_build_model>` to learn how to use OpenVINO operation classes to build a fragment of model for replacement.
The next example illustrates using ``ConversionExtension`` for conversion of “ThresholdedRelu” from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))``.
.. note::
``ThresholdedRelu`` is one of the standard ONNX operators, supported by the ONNX frontend natively out-of-the-box. Here, it is re-implemented to illustrate how you can add similar support for your custom operation instead of ``ThresholdedRelu``.
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_ThresholdedReLU_header]
.. tab-item:: Python
:sync: python
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_ThresholdedReLU_header]
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_ThresholdedReLU]
.. tab-item:: Python
:sync: python
.. doxygensnippet:: docs/snippets/ov_extensions.py
:language: python
:fragment: [py_frontend_extension_ThresholdedReLU]
To access the original framework operation's attribute values and connect to its inputs, the ``node`` object of type ``NodeContext`` is used. It has two main methods:
* ``NodeContext::get_input`` to get input with a given index,
* ``NodeContext::get_attribute`` to get attribute value with a given name.
The conversion function should return a vector of node outputs that are mapped to corresponding outputs of the original framework operation in the same order.
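For reference, the decomposition formula shown earlier can be checked in plain Python. The sketch below computes the same result elementwise over a list, standing in for the graph of ``Greater``, ``Convert``, and ``Multiply`` nodes built by the conversion function:

```python
# ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), float)),
# computed elementwise in plain Python, mirroring the decomposition above.
def thresholded_relu(x, alpha):
    return [v * float(v > alpha) for v in x]

print(thresholded_relu([-1.0, 0.5, 2.0], alpha=1.0))  # values not above alpha are zeroed
```
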
@endsphinxdirective
@@ -490,7 +490,6 @@ Some of TensorFlow operations do not match any OpenVINO operations. Yet, they ar
| Abs |
| Acos |
| Acosh |
| Add |
| And |
| ArgMin |
| ArgMax |
@@ -8,20 +8,48 @@ There are several public versions of EfficientDet model implementation available
convert models from the [repository](https://github.com/google/automl/tree/master/efficientdet)
(commit 96e1fee) to the OpenVINO format.
### Getting a Frozen TensorFlow Model
Follow the instructions below to get a frozen TensorFlow EfficientDet model. The EfficientDet-D4 model is used as an example:
1. Clone the repository:<br>
```sh
git clone https://github.com/google/automl
cd automl/efficientdet
```
2. (Optional) Check out the commit that the conversion was tested on:<br>
```sh
git checkout 96e1fee
```
3. Install required dependencies:<br>
```sh
python3 -m pip install --upgrade pip
python3 -m pip install -r requirements.txt
python3 -m pip install --upgrade tensorflow-model-optimization
```
4. Download and extract the model checkpoint [efficientdet-d4.tar.gz](https://storage.googleapis.com/cloud-tpu-checkpoints/efficientdet/coco2/efficientdet-d4.tar.gz)
referenced in the **"Pretrained EfficientDet Checkpoints"** section of the model repository:<br>
```sh
wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientdet/coco2/efficientdet-d4.tar.gz
tar zxvf efficientdet-d4.tar.gz
```
5. Freeze the model:<br>
```sh
python3 model_inspect.py --runmode=saved_model --model_name=efficientdet-d4 --ckpt_path=efficientdet-d4 --saved_model_dir=savedmodeldir
```
As a result, the frozen model file `savedmodeldir/efficientdet-d4_frozen.pb` will be generated.
> **NOTE**: For custom trained models, specify the `--hparams` flag pointing to the `config.yaml` file that was used during training.
> **NOTE**: If you see an error *AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.initializers' has no attribute 'variance_scaling'*, apply the fix from the [patch](https://github.com/google/automl/pull/846).
### Converting an EfficientDet TensorFlow Model to the IR
To generate the IR of the EfficientDet TensorFlow model, run:<br>
```sh
mo \
--input_meta_graph efficientdet-d4/model.meta \
--input_model savedmodeldir/efficientdet-d4_frozen.pb \
--transformations_config front/tf/automl_efficientdet.json \
--input_shape [1,$IMAGE_SIZE,$IMAGE_SIZE,3] \
--reverse_input_channels
```
@@ -31,6 +59,12 @@ EfficientDet models were trained with different input image sizes. To determine
dictionary in the [hparams_config.py](https://github.com/google/automl/blob/96e1fee/efficientdet/hparams_config.py#L304) file.
The attribute `image_size` specifies the shape to be defined for the model conversion.
The `transformations_config` command-line parameter specifies the configuration json file containing hints
for the Model Optimizer on how to convert the model and trigger the transformations implemented in
`<PYTHON_SITE_PACKAGES>/openvino/tools/mo/front/tf/AutomlEfficientDet.py`. The json file contains some parameters that must be changed if you
trained the model yourself and modified the `hparams_config` file, or if the parameters differ from the ones used for EfficientDet-D4.
The attribute names are self-explanatory or match the names in the `hparams_config` file.
> **NOTE**: The color channel order (RGB or BGR) of the input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion by specifying the command-line parameter `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the **When to Reverse Input Channels** section of the [Converting a Model to Intermediate Representation (IR)](@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model) guide.
OpenVINO toolkit provides samples that can be used to infer EfficientDet model.
@@ -39,21 +73,21 @@ For more information, refer to the [Open Model Zoo Demos](@ref omz_demos).
## <a name="efficientdet-ir-results-interpretation"></a>Interpreting Results of the TensorFlow Model and the IR
The TensorFlow model produces as output a list of 7-element tuples: `[image_id, y_min, x_min, y_max, x_max, confidence, class_id]`, where:
* `image_id` -- image batch index.
* `y_min` -- absolute `y` coordinate of the lower left corner of the detected object.
* `x_min` -- absolute `x` coordinate of the lower left corner of the detected object.
* `y_max` -- absolute `y` coordinate of the upper right corner of the detected object.
* `x_max` -- absolute `x` coordinate of the upper right corner of the detected object.
* `confidence` -- confidence of the detected object.
* `class_id` -- ID of the detected object class, counted from 1.
The output of the IR is a list of 7-element tuples: `[image_id, class_id, confidence, x_min, y_min, x_max, y_max]`, where:
* `image_id` -- image batch index.
* `class_id` -- ID of the detected object class, counted from 0.
* `confidence` -- confidence of the detected object.
* `x_min` -- normalized `x` coordinate of the lower left corner of the detected object.
* `y_min` -- normalized `y` coordinate of the lower left corner of the detected object.
* `x_max` -- normalized `x` coordinate of the upper right corner of the detected object.
* `y_max` -- normalized `y` coordinate of the upper right corner of the detected object.
The first element with `image_id = -1` means end of data.
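The reordering between the two layouts can be illustrated with a small Python helper. This is a hypothetical sketch: the class-ID shift follows from the counting conventions above, while dividing absolute coordinates by the input width/height to obtain normalized ones is an assumption made for illustration:

```python
# Illustration only: convert one TensorFlow-style detection tuple
# [image_id, y_min, x_min, y_max, x_max, confidence, class_id]
# into the IR layout
# [image_id, class_id, confidence, x_min, y_min, x_max, y_max].
def tf_to_ir(det, width, height):
    image_id, y_min, x_min, y_max, x_max, confidence, class_id = det
    return [image_id,
            class_id - 1,            # IR class ids count from 0, TF ones from 1
            confidence,
            x_min / width,           # absolute -> normalized (assumed: divide
            y_min / height,          # by the input width/height)
            x_max / width,
            y_max / height]

print(tf_to_ir([0, 150.0, 200.0, 300.0, 400.0, 0.9, 1], width=800, height=600))
```
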
@@ -66,12 +66,12 @@ Note that if you choose to exclude CPU from the priority list or disable the ini
This mechanism can be easily observed in the :ref:`Using AUTO with Benchmark app sample <using-auto-with-openvino-samples-and-benchmark-app>` section, showing how the first-inference latency (the time it takes to compile the model and perform the first inference) is reduced when using AUTO. For example:
.. code-block:: sh
benchmark_app -m ../public/alexnet/FP32/alexnet.xml -d GPU -niter 128
.. code-block:: sh
benchmark_app -m ../public/alexnet/FP32/alexnet.xml -d AUTO -niter 128
@@ -224,14 +224,9 @@ Performance Hints for AUTO
The ``ov::hint::performance_mode`` property enables you to specify a performance option for AUTO to be more efficient for particular use cases. The default hint for AUTO is ``LATENCY``.
The THROUGHPUT and CUMULATIVE_THROUGHPUT hints below only improve performance in an asynchronous inference pipeline. For information on asynchronous inference, see the `Async API documentation <https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_Infer_request.html#doxid-openvino-docs-o-v-u-g-infer-request>`__ . The following notebooks provide examples of how to set up an asynchronous pipeline:
* :doc:`Image Classification Async Sample <openvino_inference_engine_samples_classification_sample_async_README>`
* `Notebook - Asynchronous Inference with OpenVINO™ <https://docs.openvino.ai/2022.3/notebooks/115-async-api-with-output.html>`__
* `Notebook - Automatic Device Selection with OpenVINO <https://docs.openvino.ai/2022.3/notebooks/106-auto-device-with-output.html>`__
LATENCY
^^^^^^^
This option prioritizes low latency, providing short response time for each inference job. It performs best for tasks where inference is required for a single input image, e.g. a medical analysis of an ultrasound scan image. It also fits the tasks of real-time or nearly real-time applications, such as an industrial robot's response to actions in its environment or obstacle avoidance for autonomous vehicles.
@@ -256,43 +251,15 @@ While ``LATENCY`` and ``THROUGHPUT`` can select one target device with your pref
CUMULATIVE_THROUGHPUT behaves similarly to :doc:`the Multi-Device execution mode (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`. The only difference is that CUMULATIVE_THROUGHPUT uses the devices specified by AUTO, which means that it is not mandatory to add devices manually, while with MULTI, you need to specify the devices before inference.
With the CUMULATIVE_THROUGHPUT option:
* If ``AUTO`` without any device names is specified, and the system has more than two GPU devices, AUTO will remove CPU from the device candidate list to keep GPU running at full capacity.
* If device priority is specified, AUTO will run inference requests on devices based on the priority. In the following example, AUTO will always try to use GPU first, and then use CPU if GPU is busy:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. code-block:: sh
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO:GPU,CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT));
.. tab-item:: Python
:sync: py
.. code-block:: sh
compiled_model = core.compile_model(model, "AUTO:GPU,CPU", {"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"})
If AUTO is used without specifying any device names, and if there are multiple GPUs in the system, CUMULATIVE_THROUGHPUT mode will use all of the GPUs by default. If the system has more than two GPU devices, AUTO will remove CPU from the device candidate list to keep the GPUs running at full capacity. A full list of system devices and their unique identifiers can be queried using ``ov::Core::get_available_devices`` (for more information, see :doc:`Query Device Properties <openvino_docs_OV_UG_query_api>`). To explicitly specify which GPUs to use, set their priority when compiling with AUTO:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. code-block:: sh
ov::CompiledModel compiled_model = core.compile_model(model, "AUTO:GPU.1,GPU.0", ov::hint::performance_mode(ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT));
.. tab-item:: Python
:sync: py
.. code-block:: sh
compiled_model = core.compile_model(model, "AUTO:GPU.1,GPU.0", {"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"})
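Conceptually, the priority list in the ``"AUTO:<d1>,<d2>"`` string works like the following plain-Python sketch. This is a simplification for illustration only; the real AUTO plugin also accounts for device load, supported precisions, and other selection criteria:

```python
# Simplified model of AUTO device-priority resolution (illustration only).
def pick_device(device_string, available):
    """Return the first device from the "AUTO:<d1>,<d2>,..." priority list
    that is present in `available`; with no list, take any available device."""
    if ":" not in device_string:
        return available[0] if available else None
    for dev in device_string.split(":", 1)[1].split(","):
        if dev in available:
            return dev
    return None

print(pick_device("AUTO:GPU.1,GPU.0", ["CPU", "GPU.0"]))  # GPU.1 absent -> GPU.0
print(pick_device("AUTO:GPU,CPU", ["CPU"]))               # GPU absent -> CPU
```
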
Code Examples
--------------------
@@ -300,21 +267,17 @@ Code Examples
To enable performance hints for your application, use the following code:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/AUTO3.cpp
:language: cpp
:fragment: [part3]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part3]
Disabling Auto-Batching for THROUGHPUT and CUMULATIVE_THROUGHPUT
@@ -329,21 +292,17 @@ Configuring Model Priority
The ``ov::hint::model_priority`` property enables you to control the priorities of models in the Auto-Device plugin. A high-priority model will be loaded to a supported high-priority device. A lower-priority model will not be loaded to a device that is occupied by a higher-priority model.
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/AUTO4.cpp
:language: cpp
:fragment: [part4]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part4]
Checking Target Runtime Devices
@@ -352,43 +311,35 @@ Checking Target Runtime Devices
To query the runtime target devices on which the inferences are being executed using AUTO, you can use the ``ov::execution_devices`` property. It must be used with ``get_property``, for example:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/AUTO7.cpp
:language: cpp
:fragment: [part7]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part7]
Configuring Individual Devices and Creating the Auto-Device plugin on Top
#########################################################################
Although the methods described above are currently the preferred way to execute inference with AUTO, the following steps can be also used as an alternative. It is currently available as a legacy feature and used if AUTO is incapable of utilizing the Performance Hints option.
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/AUTO5.cpp
:language: cpp
:fragment: [part5]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_auto.py
:language: python
:fragment: [part5]
.. _using-auto-with-openvino-samples-and-benchmark-app:
@@ -400,15 +351,15 @@ To see how the Auto-Device plugin is used in practice and test its performance,
For unlimited device choice:
.. code-block:: sh
benchmark_app -d AUTO -m <model> -i <input> -niter 1000
For limited device choice:
.. code-block:: sh
benchmark_app -d AUTO:CPU,GPU,GNA -m <model> -i <input> -niter 1000
For more information, refer to the :doc:`C++ <openvino_inference_engine_samples_benchmark_app_README>` or :doc:`Python <openvino_inference_engine_tools_benchmark_tool_README>` version instructions.
@@ -1,237 +1,155 @@
# Automatic Batching {#openvino_docs_OV_UG_Automatic_Batching}
@sphinxdirective
The Automatic Batching Execution mode (or Auto-batching for short) performs automatic batching on-the-fly to improve device utilization by grouping inference requests together, with no programming effort from the user.
With Automatic Batching, gathering the input and scattering the output from the individual inference requests required for the batch happen transparently, without affecting the application code.
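The gather/scatter idea can be sketched in plain Python. This is a conceptual model, not the plugin implementation; ``batched_infer`` stands in for a single batched inference call:

```python
# Conceptual sketch: N batch-1 requests are gathered into a single batch-N
# call, and the batched result is scattered back to the individual requests.
def run_batched(requests, batched_infer):
    batch = [req["input"] for req in requests]   # gather batch-1 inputs
    outputs = batched_infer(batch)               # one batched inference
    for req, out in zip(requests, outputs):      # scatter results back
        req["output"] = out
    return requests

# Stand-in "model" that doubles each element of the batch.
reqs = run_batched([{"input": 1}, {"input": 2}, {"input": 3}],
                   lambda batch: [x * 2 for x in batch])
print([r["output"] for r in reqs])
```
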
Auto-Batching can be used :ref:`directly as a virtual device <auto-batching-as-device>` or as an :ref:`option for inference on CPU/GPU/VPU <auto-batching-as-option>` (by means of a configuration/hint). These two ways are provided for the user to enable the BATCH device **explicitly** or **implicitly**, with the underlying logic remaining the same. An example of the difference: the CPU device does not support enabling the BATCH device implicitly, so a command such as ``./benchmark_app -m <model> -d CPU -hint tput`` will not apply the BATCH device **implicitly**, while ``./benchmark_app -m <model> -d "BATCH:CPU(16)"`` loads the BATCH device **explicitly**.
Auto-batching primarily targets the existing code written for inferencing many requests, each instance with the batch size 1. To obtain corresponding performance improvements, the application **must be running many inference requests simultaneously**.
Auto-batching can also be used via a particular *virtual* device.
This article provides a preview of the Automatic Batching function, including how it works, its configurations, and testing performance.
How Automatic Batching Works
############################
.. tab-set::
.. tab-item:: Enabling Automatic Batching
:sync: enabling-automatic-batching
Batching is a straightforward way of leveraging the compute power of GPU and saving on communication overheads. Automatic Batching is "implicitly" triggered on the GPU when ``ov::hint::PerformanceMode::THROUGHPUT`` is specified for the ``ov::hint::performance_mode`` property for the ``compile_model`` or ``set_property`` calls.
@sphinxtab{C++}
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_auto_batching.cpp
:language: cpp
:fragment: [compile_model]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_auto_batching.py
:language: Python
:fragment: [compile_model]
To enable Auto-batching in the legacy apps not akin to the notion of performance hints, you need to use the **explicit** device notion, such as ``BATCH:GPU``.
@snippet docs/snippets/ov_auto_batching.cpp compile_model
.. tab-item:: Disabling Automatic Batching
:sync: disabling-automatic-batching
@endsphinxtab
Auto-Batching can be disabled (for example, for the GPU device) to prevent being triggered by ``ov::hint::PerformanceMode::THROUGHPUT``. To do that, set ``ov::hint::allow_auto_batching`` to **false** in addition to the ``ov::hint::performance_mode``, as shown below:
@sphinxtab{Python}
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_auto_batching.cpp
:language: cpp
:fragment: [compile_model_no_auto_batching]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_auto_batching.py
:language: Python
:fragment: [compile_model_no_auto_batching]
@snippet docs/snippets/ov_auto_batching.py compile_model
@endsphinxtab
@endsphinxtabset
Configuring Automatic Batching
++++++++++++++++++++++++++++++
To enable Auto-batching in the legacy apps not akin to the notion of performance hints, you need to use the **explicit** device notion, such as `BATCH:GPU`.
### Disabling Automatic Batching
Auto-Batching can be disabled (for example, for the GPU device) to prevent being triggered by `ov::hint::PerformanceMode::THROUGHPUT`. To do that, set `ov::hint::allow_auto_batching` to **false** in addition to the `ov::hint::performance_mode`, as shown below:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/ov_auto_batching.cpp compile_model_no_auto_batching
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/ov_auto_batching.py compile_model_no_auto_batching
@endsphinxtab
@endsphinxtabset
## Configuring Automatic Batching
Following the OpenVINO naming convention, the *batching* device is assigned the label of *BATCH*. The configuration options are as follows:
+----------------------------+------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Parameter name | Parameter description | Examples |
+============================+======================================================================================================+==================================================================================================================================================================================================================================================+
| ``AUTO_BATCH_DEVICE`` | The name of the device to apply Automatic batching, with the optional batch size value in brackets. | ``BATCH:GPU`` triggers the automatic batch size selection. ``BATCH:GPU(4)`` directly specifies the batch size. |
+----------------------------+------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ``ov::auto_batch_timeout`` | The timeout value, in ms. (1000 by default) | You can reduce the timeout value to avoid performance penalty when the data arrives too unevenly. For example, set it to "100", or the contrary, i.e., make it large enough to accommodate input preparation (e.g. when it is a serial process). |
+----------------------------+------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Automatic Batch Size Selection
++++++++++++++++++++++++++++++
In both the THROUGHPUT hint and the explicit BATCH device cases, the optimal batch size is selected automatically, as the implementation queries the ``ov::optimal_batch_size`` property from the device and passes the model graph as the parameter. The actual value depends on the model and device specifics, for example, the on-device memory for dGPUs.
Support for Auto-batching is not limited to GPU. However, if a device does not support ``ov::optimal_batch_size`` yet, an explicit batch size must be specified to work with Auto-batching, e.g., ``BATCH:<device>(16)``.
This "automatic batch size selection" works on the presumption that the application queries ``ov::optimal_number_of_infer_requests`` to create the requests of the returned number and run them simultaneously:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_auto_batching.cpp
:language: cpp
:fragment: [query_optimal_num_requests]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_auto_batching.py
:language: Python
:fragment: [query_optimal_num_requests]
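The presumption above can be sketched with made-up numbers (an illustrative model only; in a real application the value comes from querying ``ov::optimal_number_of_infer_requests`` on the compiled model):

```python
def optimal_num_requests(optimal_batch_size, num_streams):
    # A simplified model of what the optimal-number-of-requests query
    # returns: enough requests to keep every parallel stream's batch full.
    # Both parameters are hypothetical here; real values are device-specific.
    return optimal_batch_size * num_streams

# e.g. a dGPU that batches 4 inputs per inference over 2 parallel streams
nireq = optimal_num_requests(optimal_batch_size=4, num_streams=2)
print(nireq)  # 8 -> the application should create and run 8 requests simultaneously
```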
.. _limiting-batch-size:
Optimizing Performance by Limiting Batch Size
---------------------------------------------
If not enough inputs were collected, the ``timeout`` value makes the transparent execution fall back to the execution of individual requests. This value can be configured via the ``AUTO_BATCH_TIMEOUT`` property.
The timeout, which adds to the execution time of the requests, heavily penalizes performance. To avoid this, when your parallel slack is bounded, provide OpenVINO with an additional hint.
For example, when the application processes only 4 video streams, there is no need to use a batch larger than 4. The most future-proof way to communicate the limitations on the parallelism is to equip the performance hint with the optional ``ov::hint::num_requests`` configuration key set to 4. This will limit the batch size for the GPU and the number of inference streams for the CPU, hence each device uses ``ov::hint::num_requests`` while converting the hint to the actual device configuration options:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_auto_batching.cpp
:language: cpp
:fragment: [hint_num_requests]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_auto_batching.py
:language: Python
:fragment: [hint_num_requests]
For the *explicit* usage, you can limit the batch size by using ``BATCH:GPU(4)``, where 4 is the number of requests running in parallel.
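The interplay between the batch size and the timeout described in this section can be sketched as a simple collector (a behavioral simulation with hypothetical names, not OpenVINO code): requests accumulate until either a full batch is formed or the timeout elapses, at which point the partial group falls back to individual execution.

```python
def collect_batches(arrival_times_ms, batch_size, timeout_ms):
    """Group sorted request arrival times into batches.

    A batch is emitted as soon as `batch_size` requests are collected;
    if `timeout_ms` elapses after the first request of a group, the
    partial group is emitted (executed as individual requests).
    Purely illustrative of the documented behavior.
    """
    batches, current = [], []
    for t in arrival_times_ms:
        if current and t - current[0] > timeout_ms:
            # timeout expired before the batch filled up -> fallback
            batches.append(("timeout", current))
            current = []
        current.append(t)
        if len(current) == batch_size:
            batches.append(("full", current))
            current = []
    if current:
        batches.append(("timeout", current))
    return batches

# Evenly arriving requests fill the batch; a long gap triggers the fallback.
print(collect_batches([0, 10, 20, 30], batch_size=4, timeout_ms=100))
# [('full', [0, 10, 20, 30])]
print(collect_batches([0, 10, 500], batch_size=4, timeout_ms=100))
# [('timeout', [0, 10]), ('timeout', [500])]
```

This is why a smaller timeout (or a smaller batch) helps when the data arrives unevenly: fewer requests stall waiting for a batch that never fills.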
.. _auto-batching-as-device:
Automatic Batching as an explicit device
++++++++++++++++++++++++++++++++++++++++
The examples below show how Auto-Batching can be used in the form of a device on which the user can run inference directly:
.. code-block:: sh
./benchmark_app -m <model> -d "BATCH:GPU"
./benchmark_app -m <model> -d "BATCH:GPU(16)"
./benchmark_app -m <model> -d "BATCH:CPU(16)"
* ``BATCH`` -- loads the BATCH device explicitly,
* ``:GPU(16)`` -- the BATCH device configuration, which tells the BATCH device to use the GPU with a batch size of 16.
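The ``BATCH:<device>(<size>)`` notation can be illustrated with a small, hypothetical parser (the function and regular expression are illustrative only, not the actual OpenVINO parsing code):

```python
import re

def parse_batch_device(device_string):
    """Split a 'BATCH:<device>(<size>)' string into (device, batch_size).

    batch_size is None when the size is omitted, in which case the
    BATCH plugin would select it automatically. Illustrative only.
    """
    match = re.fullmatch(r"BATCH:([A-Za-z]+)(?:\((\d+)\))?", device_string)
    if match is None:
        raise ValueError(f"Not a BATCH device string: {device_string!r}")
    device, size = match.groups()
    return device, int(size) if size is not None else None

print(parse_batch_device("BATCH:GPU"))      # ('GPU', None) -> automatic batch size
print(parse_batch_device("BATCH:CPU(16)"))  # ('CPU', 16)   -> explicit batch size
```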
.. _auto-batching-as-option:
Automatic Batching as underlying device configured to other devices
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In the following examples, the BATCH device will be configured implicitly as the underlying device when the ``tput``/``ctput`` mode is used.
.. code-block:: sh
./benchmark_app -m <model> -d GPU -hint tput
./benchmark_app -m <model> -d AUTO -hint tput
./benchmark_app -m <model> -d AUTO -hint ctput
./benchmark_app -m <model> -d AUTO:GPU -hint ctput
.. note::
If you run ``./benchmark_app``, do not set ``batch_size`` by ``-b <batch_size>``, otherwise AUTO mode will not be applied.
Other Performance Considerations
################################
To achieve the best performance with Automatic Batching, the application should:
- Run a number of inference requests that is a multiple of the batch size. In the example from the :ref:`Optimizing Performance by Limiting Batch Size section <limiting-batch-size>` -- for batch size 4, the application should run 4, 8, 12, 16, etc. requests.
- Use the requests that are grouped by the batch size together. For example, the first 4 requests are inferred, while the second group of the requests is being populated. Essentially, Automatic Batching shifts the asynchronicity from the individual requests to the groups of requests that constitute the batches.
- Balance the ``timeout`` value vs. the batch size. For example, in many cases, having a smaller ``timeout`` value/batch size may yield better performance than having a larger batch size with a ``timeout`` value that is not large enough to accommodate the full number of the required requests.
- When Automatic Batching is enabled, the ``timeout`` property of ``ov::CompiledModel`` can be changed anytime, even after the loading/compilation of the model. For example, setting the value to 0 effectively disables Auto-batching, as the collection of requests would be omitted.
- Carefully apply Auto-batching to pipelines. For example, in the conventional "video-sources -> detection -> classification" flow, it is most beneficial to apply Auto-batching to the inputs of the detection stage. The resulting number of detections usually fluctuates, which makes Auto-batching less applicable for the classification stage.
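The first recommendation above can be sketched numerically (an illustrative helper, not an OpenVINO API): request counts that are multiples of the batch size keep every batch full, while a remainder leaves requests waiting for the timeout fallback.

```python
def batch_utilization(num_requests, batch_size):
    # How many full batches the requests form, and how many requests
    # would be left waiting for the timeout fallback. Illustrative only.
    full, remainder = divmod(num_requests, batch_size)
    return full, remainder

print(batch_utilization(8, 4))  # (2, 0) -> fully batched
print(batch_utilization(5, 4))  # (1, 1) -> one request stalls until the timeout
```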
Limitations
+++++++++++
The following are limitations of the current AUTO Batching implementations:
- Dynamic models are not supported by the ``BATCH`` device.
- The ``BATCH`` device supports only the ``tput``/``ctput`` modes. The ``latency``/``none`` modes are not supported.
- Only models with a batch dimension equal to 1 are supported.
- The input/output tensors should come from ``inferRequest``; otherwise, user-created tensors will trigger a memory copy.
- The ``OPTIMAL_BATCH_SIZE`` should be greater than ``2``. Otherwise, the user needs to specify a batch size, which depends on the model and device (CPU does not support this property).
- The ``BATCH`` device supports GPU by default, while CPU does not trigger ``auto_batch`` in ``tput`` mode.
- ``AUTO_BATCH`` significantly increases model compilation latency.
The following are limitations of the current implementations:
- Although it is less critical for the throughput-oriented scenarios, the load time with Auto-batching increases by almost double.
- Certain networks are not safely reshapable by the "batching" dimension (specified as ``N`` in the layout terms). Besides, if the batching dimension is not zeroth, Auto-batching will not be triggered "implicitly" by the throughput hint.
- The "explicit" notion, for example, ``BATCH:GPU``, using the relaxed dimensions tracking, often makes Auto-batching possible. For example, this method unlocks most **detection networks**.
- When *forcing* Auto-batching via the "explicit" device notion, make sure that you validate the results for correctness.
- Performance improvements happen at the cost of the growth of memory footprint. However, Auto-batching queries the available memory (especially for dGPU) and limits the selected batch size accordingly.
Testing Performance with Benchmark_app
######################################
The ``benchmark_app`` sample, which has both :doc:`C++ <openvino_inference_engine_samples_benchmark_app_README>` and :doc:`Python <openvino_inference_engine_tools_benchmark_tool_README>` versions, is the best way to evaluate the performance of Automatic Batching:
- The most straightforward way is using the performance hints:
- benchmark_app **-hint tput** -d GPU -m 'path to your favorite model'
- You can also use the "explicit" device notion to override the strict rules of the implicit reshaping by the batch dimension:
- benchmark_app **-hint none -d BATCH:GPU** -m 'path to your favorite model'
- or override the automatically deduced batch size as well:
- benchmark_app -hint none -d **BATCH:GPU(16)** -m 'path to your favorite model'
- This example also applies to CPU or any other device that generally supports batch execution.
- Keep in mind that some shell versions (e.g. ``bash``) may require adding quotes around complex device names, i.e. ``-d "BATCH:GPU(16)"`` in this example.
Note that Benchmark_app performs a warm-up run of a *single* request. As Auto-Batching requires significantly more requests to execute in batch, this warm-up run hits the default timeout value (1000 ms), as reported in the following example:
.. code-block:: sh
[ INFO ] First inference took 1000.18ms
This value is also reflected in the final execution statistics on ``benchmark_app`` exit:
.. code-block:: sh
[ INFO ] Latency:
[ INFO ] Max: 1000.18 ms
This is NOT the actual latency of the batched execution, so refer to other metrics in the same log instead, for example, the "Median" or "Average" execution time.
Additional Resources
####################
* :doc:`Supported Devices <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`
@endsphinxdirective
@@ -14,9 +14,6 @@ To use the Deployment Manager tool, the following requirements need to be met:
* **For VPU**, see [Configurations for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs](../../install_guides/configurations-for-ivad-vpu.md).
* **For GNA**, see [Intel® Gaussian & Neural Accelerator (GNA)](../../install_guides/configurations-for-intel-gna.md).
> **IMPORTANT**: The operating system on the target system must be the same as the development system on which you are creating the package. For example, if the target system is Ubuntu 18.04, the deployment package must be created from the OpenVINO™ toolkit installed on Ubuntu 18.04.
> **TIP**: If your application requires additional dependencies, including the Microsoft Visual C++ Redistributable, use the ['--user_data' option](https://docs.openvino.ai/latest/openvino_docs_install_guides_deployment_manager_tool.html#run-standard-cli-mode) to add them to the deployment archive. Install these dependencies on the target host before running inference.
@@ -38,21 +38,19 @@ The decision about using dynamic shapes should be based on proper benchmarking o
Unlike statically shaped models, dynamically shaped ones require different inference times, depending on the input data shape or the input tensor content.
Furthermore, using dynamic shapes can add memory and runtime overheads to each inference call, depending on the hardware plugin and the model used.
## Handling Dynamic Shapes
This section describes how to handle dynamically shaped models with OpenVINO Runtime API version 2022.1 and higher. When using dynamic shapes, there are three main differences in the workflow than with static shapes:
* Configuring the model
* Preparing and inferencing dynamic data
* Dynamic shapes in outputs
### Configuring the Model
Model input dimensions can be specified as dynamic using the `model.reshape` method. To set a dynamic dimension, use `-1`, `ov::Dimension()` (C++), or `ov.Dimension()` (Python) as the value for that dimension.
> **NOTE**: Some models may already have dynamic shapes out of the box and do not require additional configuration. This can either be because it was generated with dynamic shapes from the source framework, or because it was converted with Model Optimizer to use dynamic shapes. For more information, see the Dynamic Dimensions “Out of the Box” section.
The examples below show how to set dynamic dimensions with a model that has a static `[1, 3, 224, 224]` input shape (such as [mobilenet-v2](https://docs.openvino.ai/2022.3/omz_models_model_mobilenet_v2.html)). The first example shows how to change the first dimension (batch size) to be dynamic. In the second example, the third and fourth dimensions (height and width) are set as dynamic.
@sphinxtabset
@@ -66,8 +64,6 @@ The examples below show how to set dynamic dimensions with a model that has a st
@snippet docs/snippets/ov_dynamic_shapes.py reshape_undefined
With Python, you may also pass all dimensions as a string and use `?` for the dynamic dimensions (e.g. `model.reshape("1, 3, ?, ?")`).
@endsphinxtab
@sphinxtab{C}
@@ -78,84 +74,45 @@ With Python, you may also pass all dimensions as a string and use `?` for the dy
@endsphinxtabset
The examples above assume that the model has a single input layer. To change models with multiple input layers (such as NLP models), iterate over all the input layers, update the shape per layer, and apply the model.reshape method. For example, the following code sets the second dimension as dynamic in every input layer:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/ov_dynamic_shapes.cpp ov_dynamic_shapes:reshape_multiple_inputs
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/ov_dynamic_shapes.py reshape_multiple_inputs
@endsphinxtab
@endsphinxtabset
For more examples of how to change multiple input layers, see [Changing Input Shapes](ShapeInference.md).
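The semantics of a partially static shape can be sketched without OpenVINO (an illustrative helper only; the real API uses `ov.PartialShape`/`ov.Dimension` with richer semantics, including bounds): `-1` marks a dynamic dimension, and a concrete input shape is compatible when all static dimensions match.

```python
def is_compatible(partial_shape, concrete_shape):
    """Check a concrete shape against a partial shape where -1 means dynamic.

    Illustrative sketch only; OpenVINO's ov.PartialShape implements the
    real compatibility rules.
    """
    if len(partial_shape) != len(concrete_shape):
        return False
    # A dynamic dimension (-1) accepts any value; static ones must match.
    return all(p == -1 or p == c for p, c in zip(partial_shape, concrete_shape))

# Batch made dynamic for a model with a static [1, 3, 224, 224] input:
dynamic_batch = [-1, 3, 224, 224]
print(is_compatible(dynamic_batch, [8, 3, 224, 224]))  # True  -> any batch fits
print(is_compatible(dynamic_batch, [8, 3, 300, 300]))  # False -> H/W stay static
```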
#### Undefined Dimensions "Out Of the Box"
Many DL frameworks support generating models with dynamic (or undefined) dimensions. If such a model is converted with Model Optimizer or read directly by `Core::read_model`, its dynamic dimensions are preserved. These models do not need any additional configuration to use them with dynamic shapes.
To check if a model already has dynamic dimensions, first load it with the `read_model()` method, then check the `partial_shape` property of each layer. If the model has any dynamic dimensions, they will be reported as `?`. For example, the following code will print the name and dimensions of each input layer:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/ov_dynamic_shapes.cpp ov_dynamic_shapes:check_inputs
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/ov_dynamic_shapes.py check_inputs
@endsphinxtab
@endsphinxtabset
If the input model already has dynamic dimensions, that will not change during inference. If the inputs will not be used dynamically, it is recommended to set them to static values using the `reshape` method to save application memory and potentially improve inference speed. The OpenVINO API supports any combination of static and dynamic dimensions.
Static and dynamic dimensions can also be set when converting the model with Model Optimizer. It has identical capabilities to the `reshape` method, so you can save time by converting the model with dynamic shapes beforehand rather than in the application code. To get information about setting input shapes using Model Optimizer, refer to [Setting Input Shapes](../MO_DG/prepare_model/convert_model/Converting_Model.md).
#### Dimension Bounds
The lower and/or upper bounds of a dynamic dimension can also be specified. They define a range of allowed values for the dimension. Dimension bounds can be set by passing the lower and upper bounds into the `reshape` method using the options shown below.
@sphinxtabset
@sphinxtab{C++}
The dimension bounds can be coded as arguments for `ov::Dimension`, as shown in these examples:
@snippet docs/snippets/ov_dynamic_shapes.cpp ov_dynamic_shapes:reshape_bounds
@endsphinxtab
@sphinxtab{Python}
Each of these options is equivalent:
- Pass the lower and upper bounds directly into the `reshape` method, e.g. `model.reshape([(1, 10), (8, 512)])`
- Pass the lower and upper bounds using `ov.Dimension`, e.g. `model.reshape([ov.Dimension(1, 10), (8, 512)])`
- Pass the dimension ranges as strings, e.g. `model.reshape("1..10, 8..512")`
The examples below show how to set dynamic dimension bounds for a mobilenet-v2 model with a default static shape of `[1,3,224,224]`.
@snippet docs/snippets/ov_dynamic_shapes.py reshape_bounds
@endsphinxtab
@sphinxtab{C}
The dimension bounds can be coded as arguments for [ov_dimension](https://docs.openvino.ai/2022.3/structov_dimension.html#doxid-structov-dimension), as shown in these examples:
@snippet docs/snippets/ov_dynamic_shapes.c ov_dynamic_shapes:reshape_bounds
@endsphinxtab
@@ -174,11 +131,11 @@ Depending on the plugin, specifying the upper bounds can be required. For inform
If the lower and upper bounds for a dimension are known, it is recommended to specify them, even if a plugin can execute a model without the bounds.
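The `lo..hi` range notation used above can be sketched as follows (an illustrative parser with hypothetical names, not the OpenVINO implementation):

```python
def parse_bound(dim_spec):
    """Parse a dimension spec: an int, '?' (fully dynamic), or 'lo..hi'.

    Returns (lower, upper), where None means unbounded. Illustrative of
    the string notation only, not actual OpenVINO code.
    """
    if dim_spec == "?":
        return (None, None)          # fully dynamic dimension
    if ".." in dim_spec:
        lo, hi = dim_spec.split("..")
        return (int(lo), int(hi))    # bounded dynamic dimension
    value = int(dim_spec)
    return (value, value)            # static dimension

print(parse_bound("1..10"))  # (1, 10)
print(parse_bound("?"))      # (None, None)
print(parse_bound("224"))    # (224, 224)
```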
### Preparing and Inferencing Dynamic Data
After configuring a model with the `reshape` method, the next steps are to create tensors with the appropriate data shape and pass them to the model as an inference request. This is similar to the regular steps described in [Integrate OpenVINO™ with Your Application](integrate_with_your_application.md). However, tensors can now be passed into the model with different shapes.
The sample below shows how a model can accept different input shapes. In the first case, the model runs inference on a 1x128 input shape and returns a result. In the second case, a 1x200 input shape is used, which the model can still handle because it is dynamically shaped.
@sphinxtabset
@@ -202,13 +159,45 @@ The sample below shows how a model can accept different input shapes. In the fir
@endsphinxtabset
For more information on how to apply input data to a model and run inference, see [OpenVINO™ Inference Request](ov_infer_request.md).
In the example above, the `set_input_tensor` is used to specify input tensors.
The real dimension of the tensor is always static, because it is a particular tensor and it does not have any dimension variations in contrast to model inputs.
Similar to static shapes, `get_input_tensor` can be used instead of `set_input_tensor`.
In contrast to static input shapes, when using `get_input_tensor` for dynamic inputs, the `set_shape` method for the returned tensor should be called to define the shape and allocate memory.
Without doing so, the tensor returned by `get_input_tensor` is an empty tensor. The shape of the tensor is not initialized and memory is not allocated, because the infer request does not have information about the real shape that will be provided.
Setting shape for an input tensor is required when the corresponding input has at least one dynamic dimension, regardless of the bounds.
In contrast to the previous example, the following one shows the same sequence of two infer requests, using `get_input_tensor` instead of `set_input_tensor`:
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/ov_dynamic_shapes.cpp ov_dynamic_shapes:get_input_tensor
@endsphinxtab
@sphinxtab{Python}
@snippet docs/snippets/ov_dynamic_shapes.py get_input_tensor
@endsphinxtab
@sphinxtab{C}
@snippet docs/snippets/ov_dynamic_shapes.c ov_dynamic_shapes:get_input_tensor
@endsphinxtab
@endsphinxtabset
### Dynamic Shapes in Outputs
When using dynamic dimensions in the input of a model, one or more output dimensions may also be dynamic depending on how the dynamic inputs are propagated through the model. For example, the batch dimension in an input shape is usually propagated through the whole model and appears in the output shape. It also applies to other dimensions, like sequence length for NLP models or spatial dimensions for segmentation models, that are propagated through the entire network.
To determine whether the output has dynamic dimensions, the `partial_shape` property of the model's output layers can be queried after the model has been read or reshaped. The same property can be queried for model inputs. For example:
@sphinxtabset
@@ -232,9 +221,9 @@ To determine if the output has dynamic dimensions, the `partial_shape` property
@endsphinxtabset
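As a quick illustration of this query, the hypothetical helper below maps each output name to its partial shape string; it assumes a model object whose outputs expose `get_any_name()` and `get_partial_shape()`, as in the OpenVINO Python API:

```python
def describe_shapes(model):
    # Map each output name to its partial shape string; dynamic
    # dimensions render as '?' or as ranges such as '1..10'.
    return {out.get_any_name(): str(out.get_partial_shape())
            for out in model.outputs}
```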
If the output has any dynamic dimensions, they will be reported as `?` or as a range (e.g. `1..10`).
Output layers can also be checked for dynamic dimensions programmatically, using the `partial_shape.is_dynamic()` property. This can be applied to an entire output layer or to an individual dimension, as shown in these examples:
@sphinxtabset
@@ -258,6 +247,8 @@ Output layers can also be checked for dynamic dimensions using the `partial_shap
@endsphinxtabset
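The same check can be collected into a report. This is a hypothetical sketch, assuming `is_dynamic()` is callable both on the whole partial shape and on each dimension when iterating over it (as the examples above suggest):

```python
def dynamic_report(model):
    # For each output, record whether the whole partial shape is dynamic
    # and which individual dimensions are.
    report = {}
    for out in model.outputs:
        ps = out.get_partial_shape()
        report[out.get_any_name()] = {
            "is_dynamic": ps.is_dynamic(),
            "dims": [dim.is_dynamic() for dim in ps],
        }
    return report
```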
If at least one dynamic dimension exists in the output layer of a model, the actual shape of the output tensor will be determined during inference. Before the first inference, the output tensor's memory is not allocated and it has a shape of `[0]`.
To pre-allocate space in memory for the output tensor, use the `set_output_tensor` method with a tensor of the expected shape. This will call the `set_shape` method internally, which will cause the initial shape to be replaced by the calculated shape.
Therefore, setting a shape for output tensors in this case is useful only when pre-allocating enough memory for output tensor. Normally, the `set_shape` method of a `Tensor` re-allocates memory only if a new shape requires more storage.
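The re-allocation rule stated above can be expressed as a small arithmetic check. This is an illustration of the rule only, not OpenVINO internals; the 4-byte `itemsize` default (e.g. `f32`) is an assumption:

```python
from math import prod

def set_shape_reallocates(current_shape, new_shape, itemsize=4):
    # set_shape re-allocates only when the new shape needs more bytes
    # than are already reserved for the tensor.
    return prod(new_shape) * itemsize > prod(current_shape) * itemsize
```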

View File

@@ -9,14 +9,15 @@ Previously, a certain level of automatic configuration was the result of the *de
The hints, in contrast, respect the actual model, so the parameters for optimal throughput are calculated for each model individually (based on its compute versus memory bandwidth requirements and capabilities of the device).
## Performance Hints: Latency and Throughput
As discussed in the [Optimization Guide](../optimization_guide/dldt_deployment_optimization_guide.md) there are a few different metrics associated with inference speed. Throughput and latency are some of the most widely used metrics that measure the overall performance of an application.
Therefore, in order to ease the configuration of the device, OpenVINO offers two dedicated hints, namely `ov::hint::PerformanceMode::THROUGHPUT` and `ov::hint::PerformanceMode::LATENCY`. A special `ov::hint::PerformanceMode::UNDEFINED` hint acts the same as specifying no hint.
For more information on conducting performance measurements with the `benchmark_app`, refer to the last section in this document.
Keep in mind that a typical model may take significantly more time to load with the `ov::hint::PerformanceMode::THROUGHPUT` and consume much more memory, compared to the `ov::hint::PerformanceMode::LATENCY`. Also, the `THROUGHPUT` and `LATENCY` hints only improve performance in an asynchronous inference pipeline. For information on asynchronous inference, see the [Prefer Async API](#prefer-async-api) section of this document.
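Selecting a hint amounts to passing one property when compiling the model. The helper below is a hypothetical sketch that builds the properties dict; the `"PERFORMANCE_HINT"` key mirrors `ov::hint::performance_mode`, and treating `UNDEFINED` as valid follows the text above:

```python
def performance_hint_config(mode):
    # Build the properties dict to pass to compile_model(); valid values
    # mirror ov::hint::PerformanceMode (UNDEFINED acts like no hint).
    allowed = {"LATENCY", "THROUGHPUT", "UNDEFINED"}
    mode = mode.upper()
    if mode not in allowed:
        raise ValueError(f"unknown performance hint: {mode}")
    return {"PERFORMANCE_HINT": mode}
```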
## Performance Hints: How It Works
Internally, every device "translates" the value of the hint to the actual performance settings.
@@ -93,18 +94,14 @@ The hints are used on the presumption that the application queries `ov::optimal_
@endsphinxdirective
While an application is free to create more requests if needed (for example, to support asynchronous inputs population), **it is very important to run at least the `ov::optimal_number_of_infer_requests` inference requests in parallel**, for efficiency (device utilization) reasons.
Keep in mind that `ov::hint::PerformanceMode::LATENCY` does not necessarily imply using a single inference request. For example, multi-socket CPUs can deliver as many requests at the same minimal latency as the number of NUMA nodes in the system.
To make your application fully scalable, make sure to query the `ov::optimal_number_of_infer_requests` directly.
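A minimal sketch of that query, assuming a compiled-model object exposing `get_property` and `create_infer_request` as in the OpenVINO Python API (`"OPTIMAL_NUMBER_OF_INFER_REQUESTS"` is the string form of `ov::optimal_number_of_infer_requests`):

```python
def create_scaled_requests(compiled_model):
    # Query the device-recommended number of requests instead of
    # hard-coding it, so the app scales across devices and hints.
    n = compiled_model.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")
    return [compiled_model.create_infer_request() for _ in range(n)]
```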
## <a name="prefer-async-api"></a>Prefer Async API
The API of the inference requests offers Sync and Async execution. The `ov::InferRequest::infer()` is inherently synchronous and simple to operate (as it serializes the execution flow in the current application thread). The Async "splits" the `infer()` into `ov::InferRequest::start_async()` and `ov::InferRequest::wait()` (or callbacks). For more information on synchronous and asynchronous modes, refer to the [OpenVINO Inference Request documentation](../OV_Runtime_UG/ov_infer_request.md).
Although the synchronous API can be easier to start with, it is recommended to use the asynchronous (callbacks-based) API in production code. It is the most general and scalable way to implement the flow control for any possible number of requests. The `THROUGHPUT` and `LATENCY` performance hints automatically configure the Asynchronous pipeline to use the optimal number of processing streams and inference requests.
> **NOTE**: Performance hints only take effect in the asynchronous execution mode. They do not affect the performance of a synchronous pipeline.
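The split of `infer()` into `start_async()` and `wait()` can be pictured with a schematic helper. This is a hypothetical sketch of the control flow, not a definitive implementation; the exact `start_async` signature is an assumption:

```python
def run_async(requests, batches):
    # Schematic async flow: start every request without blocking, then
    # collect results; the calling thread is free to do other work in
    # between start_async() and wait().
    for req, data in zip(requests, batches):
        req.start_async(data)
    for req in requests:
        req.wait()
    return [req.get_output_tensor() for req in requests]
```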
## Combining the Hints and Individual Low-Level Settings
While sacrificing the portability to some extent, it is possible to combine the hints with individual device-specific settings.
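A sketch of such a combined configuration, assuming the string property keys `"PERFORMANCE_HINT"` and `"NUM_STREAMS"` (the string forms of `ov::hint::performance_mode` and `ov::num_streams`):

```python
def combined_config(hint="THROUGHPUT", num_streams=None):
    # A device-specific key such as NUM_STREAMS overrides what the hint
    # would otherwise derive, trading some portability for fine control.
    config = {"PERFORMANCE_HINT": hint}
    if num_streams is not None:
        config["NUM_STREAMS"] = str(num_streams)
    return config
```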
@@ -130,8 +127,8 @@ For example, use `ov::hint::PerformanceMode::THROUGHPUT` to prepare a general co
The `benchmark_app`, which exists in both [C++](../../samples/cpp/benchmark_app/README.md) and [Python](../../tools/benchmark_tool/README.md) versions, is the best way to evaluate the functionality of the performance hints for a particular device:
- benchmark_app **-hint tput** -d 'device' -m 'path to your model'
- benchmark_app **-hint latency** -d 'device' -m 'path to your model'
- Disabling the hints to emulate the pre-hints era (highly recommended before trying the individual low-level settings, such as the number of streams as below, threads, etc):
- benchmark_app **-hint none -nstreams 1** -d 'device' -m 'path to your model'
### Additional Resources

View File

@@ -1,6 +1,7 @@
# CPU Device {#openvino_docs_OV_UG_supported_plugins_CPU}
The CPU plugin is a part of the Intel® Distribution of OpenVINO™ toolkit. It is developed to achieve high performance inference of neural networks on Intel® x86-64 CPUs. The newer 11th generation and later Intel® CPUs provide an even further performance boost, especially with INT8 models.
For an in-depth description of CPU plugin, see:
- [CPU plugin developers documentation](https://github.com/openvinotoolkit/openvino/wiki/CPUPluginDevelopersDocs).

View File

@@ -111,7 +111,7 @@ For more details on how to get a quantized model, refer to the [Model Optimizati
Floating-point precision of a GPU primitive is selected based on operation precision in the OpenVINO IR, except for the [compressed f16 OpenVINO IR form](../../MO_DG/prepare_model/FP16_Compression.md), which is executed in the `f16` precision.
> **NOTE**: Hardware acceleration for `i8`/`u8` precision may be unavailable on some platforms. In such cases, a model is executed in the floating-point precision taken from IR. Hardware support of `u8`/`i8` acceleration can be queried via the `ov::device::capabilities` property.
[Hello Query Device C++ Sample](../../../samples/cpp/hello_query_device/README.md) can be used to print out the supported data types for all detected devices.

View File

@@ -1,9 +1,6 @@
# MYRIAD Device {#openvino_docs_OV_UG_supported_plugins_MYRIAD}
The OpenVINO Runtime MYRIAD plugin has been developed for inference of neural networks on Intel® Neural Compute Stick 2.
## Configuring the MYRIAD Plugin

View File

@@ -1,8 +1,7 @@
Supported Devices {#openvino_docs_OV_UG_supported_plugins_Supported_Devices}
==================
The OpenVINO Runtime can infer models in different formats with various input and output formats. This section provides supported and optimal configurations per device. In OpenVINO™ documentation, "device" refers to an Intel® processor used for inference, which can be a supported CPU, GPU, VPU (vision processing unit), or GNA (Gaussian neural accelerator coprocessor), or a combination of those devices.
> **NOTE**: With the OpenVINO™ 2020.4 release, Intel® Movidius™ Neural Compute Stick support has been discontinued.
@@ -12,16 +11,13 @@ The OpenVINO Runtime provides unique capabilities to infer deep learning models
|------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
|[GPU plugin](GPU.md) |Intel&reg; Processor Graphics, including Intel&reg; HD Graphics and Intel&reg; Iris&reg; Graphics |
|[CPU plugin](CPU.md) |Intel&reg; Xeon&reg; with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel&reg; Core&trade; Processors with Intel&reg; AVX2, Intel&reg; Atom&reg; Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
|[VPU plugins](VPU.md) (available in the Intel® Distribution of OpenVINO™ toolkit) |Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X, Intel® Vision Accelerator Design with Intel® Movidius™ VPUs |
|[GNA plugin](GNA.md) (available in the Intel® Distribution of OpenVINO™ toolkit) |Intel&reg; Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel&reg; Pentium&reg; Silver J5005 Processor, Intel&reg; Pentium&reg; Silver N5000 Processor, Intel&reg; Celeron&reg; J4005 Processor, Intel&reg; Celeron&reg; J4105 Processor, Intel&reg; Celeron&reg; Processor N4100, Intel&reg; Celeron&reg; Processor N4000, Intel&reg; Core&trade; i3-8121U Processor, Intel&reg; Core&trade; i7-1065G7 Processor, Intel&reg; Core&trade; i7-1060G7 Processor, Intel&reg; Core&trade; i5-1035G4 Processor, Intel&reg; Core&trade; i5-1035G7 Processor, Intel&reg; Core&trade; i5-1035G1 Processor, Intel&reg; Core&trade; i5-1030G7 Processor, Intel&reg; Core&trade; i5-1030G4 Processor, Intel&reg; Core&trade; i3-1005G1 Processor, Intel&reg; Core&trade; i3-1000G1 Processor, Intel&reg; Core&trade; i3-1000G4 Processor|
|[Arm® CPU plugin](ARM_CPU.md) (unavailable in the Intel® Distribution of OpenVINO™ toolkit) |Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices |
|[Multi-Device execution](../multi_device.md) |Multi-Device execution enables simultaneous inference of the same model on several devices in parallel |
|[Auto-Device plugin](../auto_device_selection.md) |Auto-Device plugin enables selecting Intel&reg; device for inference automatically |
|[Heterogeneous plugin](../hetero_execution.md) |Heterogeneous execution enables automatic splitting of inference between several devices (for example, if a device does not [support certain operations](#supported-layers)). |
> **NOTE**: The Arm® CPU plugin is a community-level add-on to OpenVINO™. Intel® welcomes community participation in the OpenVINO™ ecosystem, including technical questions and code contributions on community forums. However, this component has not undergone full release validation or qualification from Intel®, hence no official support is offered.
Devices similar to the ones we have used for benchmarking can be accessed using [Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/), a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. [Learn more](https://devcloud.intel.com/edge/get_started/devcloud/) or [Register here](https://inteliot.force.com/DevcloudForEdge/s/).

View File

@@ -11,8 +11,6 @@
@endsphinxdirective
This chapter provides information on the OpenVINO™ Runtime plugins that enable inference of deep learning models on the supported VPU devices:
* Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X — Supported by the [MYRIAD Plugin](MYRIAD.md)

View File

@@ -1,105 +0,0 @@
Network model,Release,IE-Type,Platform name,Throughput-OVMS-INT8,Throughput-OV-INT8,Throughput-OVMS-FP32,Throughput-OV-FP32
begin_rec,,,,,,,
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,19.05,19.24,12.84,13.02
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,21.75,22.97,17.16,17.32
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,18.00,18.33,11.91,12.06
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,81.48,87.59,46.81,48.37
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,207.39,231.10,104.07,125.89
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,282.09,287.81,159.05,162.28
end_rec,,,,,,,
begin_rec,,,,,,,
DeeplabV3,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,28.29,31.56,15.94,16.90
DeeplabV3,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,37.92,40.93,19.35,20.38
DeeplabV3,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,26.10,27.99,15.33,15.78
DeeplabV3,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,118.32,142.36,26.18,27.37
DeeplabV3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,347.24,391.34,53.95,73.45
DeeplabV3,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,425.70,538.96,125.09,132.23
end_rec,,,,,,,
begin_rec,,,,,,,
Densenet-121,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,117.68,123.85,68.41,71.42
Densenet-121,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,151.83,161.15,90.37,94.03
Densenet-121,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,97.49,101.95,61.08,62.79
Densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,765.57,857.26,205.00,225.97
Densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,2039.41,2205.00,582.14,600.78
Densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,2316.39,2501.85,662.25,686.40
end_rec,,,,,,,
begin_rec,,,,,,,
Efficientdet-D0,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,42.26,43.69,25.09,26.62
Efficientdet-D0,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,49.48,50.11,29.37,30.93
Efficientdet-D0,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,37.48,38.96,26.29,27.90
Efficientdet-D0,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,125.90,143.68,51.04,55.33
Efficientdet-D0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,302.06,335.20,168.52,177.62
Efficientdet-D0,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,362.66,415.28,244.88,254.03
end_rec,,,,,,,
begin_rec,,,,,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,29.95,33.16,16.58,17.08
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,43.60,44.77,22.21,22.39
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,27.76,28.08,14.16,14.41
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,253.30,275.06,60.19,63.55
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,656.23,690.46,158.05,161.39
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,747.08,782.74,185.16,187.21
end_rec,,,,,,,
begin_rec,,,,,,,
Mobilenet-SSD ,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,247.50,275.77,133.42,148.03
Mobilenet-SSD ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,311.96,358.32,176.63,199.53
Mobilenet-SSD ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,213.07,237.43,128.63,138.09
Mobilenet-SSD ,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,1382.37,1935.88,391.43,484.28
Mobilenet-SSD ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,3578.49,4790.04,1062.88,1141.50
Mobilenet-SSD ,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,4131.44,5693.82,1319.32,1494.70
end_rec,,,,,,,
begin_rec,,,,,,,
Mobilenet-V2 ,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,470.51,546.68,286.64,336.47
Mobilenet-V2 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,567.21,690.80,378.24,462.46
Mobilenet-V2 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,399.15,470.87,283.32,318.23
Mobilenet-V2 ,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,2493.12,3426.14,765.45,941.54
Mobilenet-V2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,6679.14,9143.29,2302.78,2511.31
Mobilenet-V2 ,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,7371.67,10494.29,2672.91,3192.44
end_rec,,,,,,,
begin_rec,,,,,,,
Resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,210.80,228.46,106.61,116.30
Resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,279.43,303.27,142.79,151.45
Resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,184.06,194.48,91.60,94.53
Resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,1490.65,1809.32,409.17,464.62
Resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,3918.52,4568.67,1138.07,1166.20
Resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,4477.09,5192.77,1294.96,1309.89
end_rec,,,,,,,
begin_rec,,,,,,,
Resnet-50,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,108.35,114.48,55.15,57.62
Resnet-50,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,142.74,149.99,73.33,75.63
Resnet-50,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,98.10,100.62,47.21,48.40
Resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,786.06,893.37,182.61,200.00
Resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,2066.51,2231.60,464.01,518.88
Resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,2336.42,2508.77,613.40,632.38
end_rec,,,,,,,
begin_rec,,,,,,,
SSD-Resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,1.74,1.83,0.89,1.05
SSD-Resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,2.46,2.48,1.37,1.42
SSD-Resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,1.41,1.58,0.66,0.88
SSD-Resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,14.59,15.29,3.97,4.03
SSD-Resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,35.42,36.77,10.14,10.46
SSD-Resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,41.35,43.93,11.73,12.20
end_rec,,,,,,,
begin_rec,,,,,,,
Unet-Camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,2.57,2.78,1.62,1.70
Unet-Camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,3.68,3.71,2.15,2.29
Unet-Camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,2.25,2.38,1.36,1.45
Unet-Camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,25.52,26.93,5.57,5.69
Unet-Camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,60.15,65.11,15.01,15.15
Unet-Camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,69.58,76.46,17.16,17.97
end_rec,,,,,,,
begin_rec,,,,,,,
Yolo_V3_Tiny,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,114.02,127.37,67.06,72.20
Yolo_V3_Tiny,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,148.72,168.41,85.62,91.66
Yolo_V3_Tiny,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,98.44,107.53,56.42,60.41
Yolo_V3_Tiny,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,592.92,850.58,207.96,240.90
Yolo_V3_Tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,1631.49,2031.46,534.51,611.68
Yolo_V3_Tiny,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,1774.41,2428.00,691.96,725.60
end_rec,,,,,,,
begin_rec,,,,,,,
Yolo_V4,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,5.44,5.66,3.17,3.25
Yolo_V4,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,7.24,7.40,4.19,4.21
Yolo_V4,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,4.60,4.71,2.45,2.68
Yolo_V4,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,36.33,40.21,10.52,10.90
Yolo_V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,81.88,95.46,26.43,27.57
Yolo_V4,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,96.58,111.93,30.48,34.50
end_rec,,,,,,,


@@ -1,11 +1,5 @@
Network model,Release,IE-Type,Platform name,Throughput-INT8,Throughput-FP16,Throughput-FP32,Value,Efficiency,Price,TDP,Sockets,Price/socket,TDP/socket,Latency
begin_rec,,,,,,,,,,,,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,163.72,,57.83,0.273,1.31,$599 ,125,1,$599 ,125,15.53
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,56.07,,23.3,0.094,0.449,$599 ,125,1,$599 ,125,19.62
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,210.17,,85.83,0.351,1.681,$599 ,125,1,$599 ,125,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,128.05,,45.94,0.389,1.024,$329 ,125,1,$329 ,125,12.71
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,53.03,,21.9,0.161,0.424,$329 ,125,1,$329 ,125,20.81
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,163.33,,64.74,0.496,1.307,$329 ,125,1,$329 ,125,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,96.06,,35.627,0.146,0.582,$658 ,165,1,$658 ,165,17.1432
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,53.093,,22.253,0.081,0.322,$658 ,165,1,$658 ,165,22.0002
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,108.306,,44.797,0.165,0.656,$658 ,165,1,$658 ,165,
@@ -25,25 +19,15 @@ bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,32.073
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,69.053,,40.243,0.116,0.552,$594 ,125,1,$594 ,125,18.309
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,23.402,,14.614,0.094,0.33,$249 ,71,1,$249 ,71,44.8984
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,266.949,,79.033,0.085,1.271,"$3,144 ",210,2,"$1,572 ",105,12.4065
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,2090.76,1372.2,1368.68,0.292,4.646,"$7,166 ",450,2,"$3,583 ",225,4.61
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,2090.76,,326.55,0.292,4.646,"$7,166 ",450,2,"$3,583 ",225,4.61
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,682.593,,225.713,0.04,1.665,"$16,954 ",410,2,"$8,477 ",205,6.9035
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,256.994,,75.502,0.128,1.028,"$2,004 ",250,2,"$1,002 ",125,13.0382
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,64.632,,18.394,0.138,2.308,$469 ,28,1,$469 ,28,17.638
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,95.656,,44.056,0.204,3.416,$469 ,28,1,$469 ,28,14.1005
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,128.005,,50.592,0.273,4.572,$469 ,28,1,$469 ,28,
bert-base-cased ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,906.3,348.52,,0.471,6.042,"$1,925 ",150,1,"$1,925 ",150,7.381
bert-base-cased ,OV-2022.3-8991,accel,Intel® Arc A40 Pro,289.24,152.74,,,,,,1,,50,7.14
bert-base-cased ,OV-2022.3-8991,accel,Intel® Arc A50 Pro,343.57,180.89,,,,,,1,,75,6.64
bert-base-cased ,OV-2022.3-8991,accel,Intel® Arc A750,993.22,1486.36,, 4.138 , 4.414 ,$240 ,225,1,$240 ,225,6.46
bert-base-cased ,OV-2022.3-8991,accel,Intel® Arc A770,1088.69,1622.61,, 3.202 , 4.839 ,$340 ,225,1,$340 ,225,6.29
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,51.72,,17.6,0.086,0.414,$599 ,125,1,$599 ,125,49.13
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,19.05,,6.88,0.032,0.152,$599 ,125,1,$599 ,125,55.82
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,53,,19.95,0.088,0.424,$599 ,125,1,$599 ,125,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,35.31,,11.04,0.107,0.282,$329 ,125,1,$329 ,125,41.56
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,17.93,,6.46,0.054,0.143,$329 ,125,1,$329 ,125,59.37
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,43.42,,16.19,0.132,0.347,$329 ,125,1,$329 ,125,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,7.714,,3.093,0.012,0.047,$658 ,165,1,$658 ,165,155.3633
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,5.617,,1.978,0.009,0.034,$658 ,165,1,$658 ,165,181.8303
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,10.602,,3.753,0.016,0.064,$658 ,165,1,$658 ,165,
@@ -63,25 +47,15 @@ bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Co
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,4.801,,2.729,0.008,0.038,$594 ,125,1,$594 ,125,200.0794
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,2.098,,1.32,0.008,0.03,$249 ,71,1,$249 ,71,492.0938
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,21.062,,7.021,0.007,0.1,"$3,144 ",210,2,"$1,572 ",105,101.4694
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,651.95,378.57,384.02,0.091,1.449,"$7,166 ",450,2,"$3,583 ",225,12.87
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,651.95,,91.18,0.091,1.449,"$7,166 ",450,2,"$3,583 ",225,12.87
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,46.064,,19.051,0.003,0.112,"$16,954 ",410,2,"$8,477 ",205,49.4869
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,20.014,,6.726,0.01,0.08,"$2,004 ",250,2,"$1,002 ",125,105.9423
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,5.192,,1.626,0.011,0.185,$469 ,28,1,$469 ,28,203.6311
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,10.476,,3.914,0.022,0.374,$469 ,28,1,$469 ,28,95.6598
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,11.75,,4.168,0.025,0.42,$469 ,28,1,$469 ,28,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,74.47,25.77,,0.039,0.496,"$1,925 ",150,1,"$1,925 ",150,19.768
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,accel,Intel® Arc A40 Pro,74.01,44.31,,,,,,1,,50,17.89
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,accel,Intel® Arc A50 Pro,92.53,53.04,,,,,,1,,75,15.77
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,accel,Intel® Arc A750,270.49,185.64,,1.127,1.202,$240 ,225,1,$240 ,225,14.36
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,accel,Intel® Arc A770,337.47,205.46,,0.993,1.500,$340 ,225,1,$340 ,225,13.97
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,184.93,,63.79,0.309,1.479,$599 ,125,1,$599 ,125,10.31
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,69.31,,22.67,0.116,0.554,$599 ,125,1,$599 ,125,15.02
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,191.48,,62.99,0.32,1.532,$599 ,125,1,$599 ,125,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,139.02,,48.48,0.423,1.112,$329 ,125,1,$329 ,125,10.48
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,65.55,,21.24,0.199,0.524,$329 ,125,1,$329 ,125,16.12
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,154.19,,52.87,0.469,1.234,$329 ,125,1,$329 ,125,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,99.078,,36.552,0.151,0.6,$658 ,165,1,$658 ,165,11.269
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,57.707,,13.789,0.088,0.35,$658 ,165,1,$658 ,165,16.263
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,115.59,,39.82,0.176,0.701,$658 ,165,1,$658 ,165,
@@ -101,25 +75,15 @@ deeplabv3,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,36.559,,18.23
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,79.42,,21.03,0.134,0.635,$594 ,125,1,$594 ,125,12.8397
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,26.173,,16.906,0.105,0.369,$249 ,71,1,$249 ,71,37.9245
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,248.049,,81.667,0.079,1.181,"$3,144 ",210,2,"$1,572 ",105,8.9485
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,1139.5,702.28,699.04,0.159,2.532,"$7,166 ",450,2,"$3,583 ",225,2.47
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,1139.5,,271.62,0.159,2.532,"$7,166 ",450,2,"$3,583 ",225,2.47
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,632.113,,168.65,0.037,1.542,"$16,954 ",410,2,"$8,477 ",205,4.0073
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,241.703,,78.963,0.121,0.967,"$2,004 ",250,2,"$1,002 ",125,9.356
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,64.13,,18.519,0.137,2.29,$469 ,28,1,$469 ,28,16.6586
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,104.926,,24.592,0.224,3.747,$469 ,28,1,$469 ,28,9.1435
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,121.441,,30.498,0.259,4.337,$469 ,28,1,$469 ,28,
deeplabv3,OV-2022.3-8991,accel,Intel® Flex-170 GPU,882.04,98.95,,0.458,5.88,"$1,925 ",150,1,"$1,925 ",150,2.674
deeplabv3,OV-2022.3-8991,accel,Intel® Arc A40 Pro,246.48,197.01,,,,,,1,,50,4.8
deeplabv3,OV-2022.3-8991,accel,Intel® Arc A50 Pro,281.31,221.77,,,,,,1,,75,4.74
deeplabv3,OV-2022.3-8991,accel,Intel® Arc A750,813.12,626.48,,3.388,3.614,$240 ,225,1,$240 ,225,1.9
deeplabv3,OV-2022.3-8991,accel,Intel® Arc A770,763.91,595.5,,2.247,3.395,$340 ,225,1,$340 ,225,1.83
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,777.86,,284.56,1.299,6.223,$599 ,125,1,$599 ,125,3.26
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,195.3,,66.46,0.326,1.562,$599 ,125,1,$599 ,125,6.8
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,899.5,,293.29,1.502,7.196,$599 ,125,1,$599 ,125,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,612.99,,184.9,1.863,4.904,$329 ,125,1,$329 ,125,3.12
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,178.37,,62.69,0.542,1.427,$329 ,125,1,$329 ,125,8.37
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,707.99,,207.12,2.152,5.664,$329 ,125,1,$329 ,125,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,457.193,,165.166,0.695,2.771,$658 ,165,1,$658 ,165,3.141
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,203.417,,68.438,0.309,1.233,$658 ,165,1,$658 ,165,6.6728
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,575.442,,179.858,0.875,3.488,$658 ,165,1,$658 ,165,
@@ -139,25 +103,15 @@ densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,146.463,,6
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,360.501,,182.543,0.607,2.884,$594 ,125,1,$594 ,125,3.6046
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,114.844,,67.188,0.461,1.618,$249 ,71,1,$249 ,71,9.7609
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,1116.372,,295.952,0.355,5.316,"$3,144 ",210,2,"$1,572 ",105,3.9606
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,8279.14,4856.54,4862.51,1.155,18.398,"$7,166 ",450,2,"$3,583 ",225,2.39
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,8279.14,,1137.41,1.155,18.398,"$7,166 ",450,2,"$3,583 ",225,2.39
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,3155.106,,815.725,0.186,7.695,"$16,954 ",410,2,"$8,477 ",205,2.8831
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,1064.824,,283.423,0.531,4.259,"$2,004 ",250,2,"$1,002 ",125,4.0689
densenet-121,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,265.167,,74.501,0.565,9.47,$469 ,28,1,$469 ,28,4.7413
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,391.185,,123.519,0.834,13.971,$469 ,28,1,$469 ,28,6.5259
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,526.12,,150.35,1.122,18.79,$469 ,28,1,$469 ,28,
densenet-121,OV-2022.3-8991,accel,Intel® Flex-170 GPU,3440.18,1178.68,,1.787,22.935,"$1,925 ",150,1,"$1,925 ",150,3.302
densenet-121,OV-2022.3-8991,accel,Intel® Arc A40 Pro,779.8,650.48,,,,,,1,,50,2.97
densenet-121,OV-2022.3-8991,accel,Intel® Arc A50 Pro,817.69,637.04,,,,,,1,,75,3.06
densenet-121,OV-2022.3-8991,accel,Intel® Arc A750,2022.98,1666.6,,8.429,8.991,$240 ,225,1,$240 ,225,2.56
densenet-121,OV-2022.3-8991,accel,Intel® Arc A770,2076.41,1647.41,,6.107,9.228,$340 ,225,1,$340 ,225,2.56
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,209.26,,106.11,0.349,1.674,$599 ,125,1,$599 ,125,10.36
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,82.04,,47.85,0.137,0.656,$599 ,125,1,$599 ,125,22.35
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,197.85,,108.3,0.33,1.583,$599 ,125,1,$599 ,125,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,155.65,,90.91,0.473,1.245,$329 ,125,1,$329 ,125,9.92
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,77.28,,44.91,0.235,0.618,$329 ,125,1,$329 ,125,22.93
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,172.54,,95.94,0.524,1.38,$329 ,125,1,$329 ,125,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,112.297,,64.06,0.171,0.681,$658 ,165,1,$658 ,165,11.8265
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,73.766,,38.742,0.112,0.447,$658 ,165,1,$658 ,165,21.403
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,128.735,,76.62,0.196,0.78,$658 ,165,1,$658 ,165,
@@ -177,25 +131,15 @@ efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,50.35,,
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,94.981,,36.434,0.16,0.76,$594 ,125,1,$594 ,125,12.658
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,35.831,,27.306,0.144,0.505,$249 ,71,1,$249 ,71,30.9469
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,239.06,,161.224,0.076,1.138,"$3,144 ",210,2,"$1,572 ",105,13.9735
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,875.53,495.04,492.93,0.122,1.946,"$7,166 ",450,2,"$3,583 ",225,5.07
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,875.53,,560.48,0.122,1.946,"$7,166 ",450,2,"$3,583 ",225,5.07
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,471.02,,300.291,0.028,1.149,"$16,954 ",410,2,"$8,477 ",205,9.3866
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,231.873,,156.285,0.116,0.927,"$2,004 ",250,2,"$1,002 ",125,14.1605
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,71.482,,41.123,0.152,2.553,$469 ,28,1,$469 ,28,16.6952
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,92.52,,50.538,0.197,3.304,$469 ,28,1,$469 ,28,17.295
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,107.688,,56.901,0.23,3.846,$469 ,28,1,$469 ,28,
efficientdet-d0,OV-2022.3-8991,accel,Intel® Flex-170 GPU,463.67,295.13,,0.241,3.091,"$1,925 ",150,1,"$1,925 ",150,5.603
efficientdet-d0,OV-2022.3-8991,accel,Intel® Arc A40 Pro,106.04,142.82,,,,,,1,,50,12.31
efficientdet-d0,OV-2022.3-8991,accel,Intel® Arc A50 Pro,110.64,142.38,,,,,,1,,75,11.98
efficientdet-d0,OV-2022.3-8991,accel,Intel® Arc A750,496.2,672.57,,2.068,2.205,$240 ,225,1,$240 ,225,5.17
efficientdet-d0,OV-2022.3-8991,accel,Intel® Arc A770,497.52,680.16,,1.463,2.211,$340 ,225,1,$340 ,225,5.03
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,5.94,,2.41,0.01,0.048,$599 ,125,1,$599 ,125,270.57
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,2.3,,0.71,0.004,0.018,$599 ,125,1,$599 ,125,437.94
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,6.45,,2.25,0.011,0.052,$599 ,125,1,$599 ,125,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,4.55,,1.88,0.014,0.036,$329 ,125,1,$329 ,125,310.58
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,2.17,,0.67,0.007,0.017,$329 ,125,1,$329 ,125,465.03
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,5.3,,2.01,0.016,0.042,$329 ,125,1,$329 ,125,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,12.921,,4.016,0.02,0.078,$658 ,165,1,$658 ,165,89.8929
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,6.802,,1.82,0.01,0.041,$658 ,165,1,$658 ,165,149.7396
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,15.679,,4.499,0.024,0.095,$658 ,165,1,$658 ,165,
@@ -215,25 +159,15 @@ faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-on
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,8.977,,4.542,0.015,0.072,$594 ,125,1,$594 ,125,137.1747
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,2.867,,1.464,0.012,0.04,$249 ,71,1,$249 ,71,353.2042
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,29.332,,8.19,0.009,0.14,"$3,144 ",210,2,"$1,572 ",105,78.1722
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,19.71,18.01,18.15,0.003,0.044,"$7,166 ",450,2,"$3,583 ",225,129.2
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,282.45,,32.43,0.003,0.044,"$7,166 ",450,2,"$3,583 ",225,12.03
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,85.213,,22.066,0.005,0.208,"$16,954 ",410,2,"$8,477 ",205,30.4317
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,27.847,,7.786,0.014,0.111,"$2,004 ",250,2,"$1,002 ",125,78.6604
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,7.027,,1.855,0.015,0.251,$469 ,28,1,$469 ,28,151.8783
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,13.823,,3.545,0.029,0.494,$469 ,28,1,$469 ,28,70.7933
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,16.898,,4.191,0.036,0.604,$469 ,28,1,$469 ,28,
faster_rcnn_resnet50_coco,OV-2022.3-8991,accel,Intel® Flex-170 GPU,216.3,23.42,,0.112,1.442,"$1,925 ",150,1,"$1,925 ",150,9.137
faster_rcnn_resnet50_coco,OV-2022.3-8991,accel,Intel® Arc A40 Pro,9.24,7.38,,,,,,1,,50,110.7
faster_rcnn_resnet50_coco,OV-2022.3-8991,accel,Intel® Arc A50 Pro,10.67,8.36,,,,,,1,,75,96.79
faster_rcnn_resnet50_coco,OV-2022.3-8991,accel,Intel® Arc A750,37.58,27.13,,0.157,0.167,$240 ,225,1,$240 ,225,29.08
faster_rcnn_resnet50_coco,OV-2022.3-8991,accel,Intel® Arc A770,38.19,27.28,,0.112,0.170,$340 ,225,1,$340 ,225,28.46
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,219.06,,71.15,0.366,1.752,$599 ,125,1,$599 ,125,10.19
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,65.91,,18.1,0.11,0.527,$599 ,125,1,$599 ,125,16.55
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,279.58,,78.65,0.467,2.237,$599 ,125,1,$599 ,125,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,171.19,,45.8,0.52,1.37,$329 ,125,1,$329 ,125,9.14
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,62.45,,17.02,0.19,0.5,$329 ,125,1,$329 ,125,17.48
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,219.02,,52.56,0.666,1.752,$329 ,125,1,$329 ,125,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,121.813,,39.391,0.185,0.738,$658 ,165,1,$658 ,165,11.0425
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,71.229,,17.755,0.108,0.432,$658 ,165,1,$658 ,165,19.7132
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,175.049,,44.894,0.266,1.061,$658 ,165,1,$658 ,165,
@@ -253,25 +187,15 @@ Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,37.301,,19
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,92.646,,44.966,0.156,0.741,$594 ,125,1,$594 ,125,12.3153
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,28.537,,15.13,0.115,0.402,$249 ,71,1,$249 ,71,36.8888
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,301.215,,77.005,0.096,1.434,"$3,144 ",210,2,"$1,572 ",105,10.5711
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,3406.7,1879.09,1867.99,0.475,7.57,"$7,166 ",450,2,"$3,583 ",225,3.23
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,3406.7,,331.56,0.475,7.57,"$7,166 ",450,2,"$3,583 ",225,3.23
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,937.139,,225.776,0.055,2.286,"$16,954 ",410,2,"$8,477 ",205,5.6984
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,287.767,,73.617,0.144,1.151,"$2,004 ",250,2,"$1,002 ",125,11.1114
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,71.295,,18.482,0.152,2.546,$469 ,28,1,$469 ,28,15.8294
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,158.282,,36.884,0.337,5.653,$469 ,28,1,$469 ,28,10.6245
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,182.132,,44.198,0.388,6.505,$469 ,28,1,$469 ,28,
Inception-V4,OV-2022.3-8991,accel,Intel® Flex-170 GPU,2986.91,298.6,,1.552,19.913,"$1,925 ",150,1,"$1,925 ",150,3.968
Inception-V4,OV-2022.3-8991,accel,Intel® Arc A40 Pro,766.04,436.51,,,,,,1,,50,5.7
Inception-V4,OV-2022.3-8991,accel,Intel® Arc A50 Pro,877.68,495.62,,,,,,1,,75,5.88
Inception-V4,OV-2022.3-8991,accel,Intel® Arc A750,3259.61,1855.9,,13.582,14.487,$240 ,225,1,$240 ,225,3.75
Inception-V4,OV-2022.3-8991,accel,Intel® Arc A770,3417.37,2025.25,,10.051,15.188,$340 ,225,1,$340 ,225,3.56
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,1754.44,,664.82,2.929,14.036,$599 ,125,1,$599 ,125,1.4
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,528.03,,168.57,0.882,4.224,$599 ,125,1,$599 ,125,2.35
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,1568.83,,665.79,2.619,12.551,$599 ,125,1,$599 ,125,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,1240.01,,437.11,3.769,9.92,$329 ,125,1,$329 ,125,1.47
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,493.18,,157.94,1.499,3.945,$329 ,125,1,$329 ,125,2.43
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,1063.27,,454.21,3.232,8.506,$329 ,125,1,$329 ,125,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,1054.462,,346.546,1.603,6.391,$658 ,165,1,$658 ,165,1.4898
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,493.088,,145.503,0.749,2.988,$658 ,165,1,$658 ,165,2.472
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,1056.241,,361.472,1.605,6.401,$658 ,165,1,$658 ,165,
@@ -291,25 +215,15 @@ mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,315.107,
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,774.346,,345.309,1.304,6.195,$594 ,125,1,$594 ,125,1.5452
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,233.43,,147.098,0.937,3.288,$249 ,71,1,$249 ,71,4.5879
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,2331.207,,691.743,0.741,11.101,"$3,144 ",210,2,"$1,572 ",105,1.4852
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,16445.75,8733.64,8626.42,2.295,36.546,"$7,166 ",450,2,"$3,583 ",225,0.65
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,16445.75,,2736.2,2.295,36.546,"$7,166 ",450,2,"$3,583 ",225,0.65
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,6691.915,,1796.357,0.395,16.322,"$16,954 ",410,2,"$8,477 ",205,1.0518
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,2225.935,,667.692,1.111,8.904,"$2,004 ",250,2,"$1,002 ",125,1.5444
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,579.307,,166.959,1.235,20.69,$469 ,28,1,$469 ,28,2.0215
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,582.636,,243.945,1.242,20.808,$469 ,28,1,$469 ,28,2.548
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,744.231,,292.071,1.587,26.58,$469 ,28,1,$469 ,28,
mobilenet-ssd ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,3548.98,1412.68,,1.844,23.66,"$1,925 ",150,1,"$1,925 ",150,1.344
mobilenet-ssd ,OV-2022.3-8991,accel,Intel® Arc A40 Pro,2676.23,1939.24,,,,,,1,,50,0.95
mobilenet-ssd ,OV-2022.3-8991,accel,Intel® Arc A50 Pro,2874.1,1945.48,,,,,,1,,75,0.98
mobilenet-ssd ,OV-2022.3-8991,accel,Intel® Arc A750,6510.01,5188.87,,27.125,28.933,$240 ,225,1,$240 ,225,0.79
mobilenet-ssd ,OV-2022.3-8991,accel,Intel® Arc A770,6717.52,5312.93,,19.757,29.856,$340 ,225,1,$340 ,225,0.76
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,4041.77,,2123.33,6.748,32.334,$599 ,125,1,$599 ,125,0.66
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,978.61,,424.34,1.634,7.829,$599 ,125,1,$599 ,125,1.21
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,4630.44,,1944.62,7.73,37.044,$599 ,125,1,$599 ,125,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,3306.92,,1403.57,10.051,26.455,$329 ,125,1,$329 ,125,0.65
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,919.85,,384.42,2.796,7.359,$329 ,125,1,$329 ,125,1.36
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,3556.06,,1332.32,10.809,28.448,$329 ,125,1,$329 ,125,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,2446.221,,1003.129,3.718,14.826,$658 ,165,1,$658 ,165,0.7182
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,1265.969,,389.894,1.924,7.673,$658 ,165,1,$658 ,165,1.3894
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,2680.458,,1013.049,4.074,16.245,$658 ,165,1,$658 ,165,
@@ -329,25 +243,15 @@ mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,825.071,,
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,2067.162,,868.25,3.48,16.537,$594 ,125,1,$594 ,125,0.7363
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,594.283,,479.567,2.387,8.37,$249 ,71,1,$249 ,71,1.8531
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,5882.455,,1895.498,1.871,28.012,"$3,144 ",210,2,"$1,572 ",105,1.3871
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,28383.76,16212.74,16065.38,3.961,63.075,"$7,166 ",450,2,"$3,583 ",225,0.55
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,28383.76,,7254.28,3.961,63.075,"$7,166 ",450,2,"$3,583 ",225,0.55
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,15616.083,,4308.927,0.921,38.088,"$16,954 ",410,2,"$8,477 ",205,0.8685
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,5616.283,,1835.686,2.803,22.465,"$2,004 ",250,2,"$1,002 ",125,1.404
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,1463.21,,538.597,3.12,52.258,$469 ,28,1,$469 ,28,0.8864
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,2076.015,,544.641,4.426,74.143,$469 ,28,1,$469 ,28,1.7212
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,2677.374,,698.942,5.709,95.621,$469 ,28,1,$469 ,28,
mobilenet-v2 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,18371.95,4738.33,,9.544,122.48,"$1,925 ",150,1,"$1,925 ",150,1.15
mobilenet-v2 ,OV-2022.3-8991,accel,Intel® Arc A40 Pro,,,,,,,,1,,50,
mobilenet-v2 ,OV-2022.3-8991,accel,Intel® Arc A50 Pro,,,,,,,,1,,75,
mobilenet-v2 ,OV-2022.3-8991,accel,Intel® Arc A750,,,,$0 ,0,$240 ,225,1,$240 ,225,
mobilenet-v2 ,OV-2022.3-8991,accel,Intel® Arc A770,,,,$0 ,0,$340 ,225,1,$340 ,225,
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,1495.77,,415.82,2.497,11.966,$599 ,125,1,$599 ,125,1.38
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,497.54,,150.99,0.831,3.98,$599 ,125,1,$599 ,125,2.19
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,1821.4,,615.14,3.041,14.571,$599 ,125,1,$599 ,125,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,1169.04,,336.09,3.553,9.352,$329 ,125,1,$329 ,125,1.5
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,467.43,,141.76,1.421,3.739,$329 ,125,1,$329 ,125,2.36
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,1443.42,,445.25,4.387,11.547,$329 ,125,1,$329 ,125,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,804.771,,212.574,1.223,4.877,$658 ,165,1,$658 ,165,1.3886
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,491.337,,146.839,0.747,2.978,$658 ,165,1,$658 ,165,2.2655
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,1180.984,,365.777,1.795,7.157,$658 ,165,1,$658 ,165,
@@ -367,25 +271,15 @@ resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,265.351,,130
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,654.533,,307.741,1.102,5.236,$594 ,125,1,$594 ,125,1.6723
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,198.189,,101.399,0.796,2.791,$249 ,71,1,$249 ,71,5.2039
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,2017.368,,547.47,0.642,9.607,"$3,144 ",210,2,"$1,572 ",105,1.2913
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,27331.02,16095.24,16009.04,3.814,60.736,"$7,166 ",450,2,"$3,583 ",225,0.38
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,27331.02,,2329.12,3.814,60.736,"$7,166 ",450,2,"$3,583 ",225,0.38
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,6320.391,,1582.817,0.373,15.416,"$16,954 ",410,2,"$8,477 ",205,0.667
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,1940.935,,522.654,0.969,7.764,"$2,004 ",250,2,"$1,002 ",125,1.3451
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,480.992,,126.244,1.026,17.178,$469 ,28,1,$469 ,28,2.242
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,1061.591,,297.705,2.264,37.914,$469 ,28,1,$469 ,28,1.793
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,1237.94,,342.513,2.64,44.212,$469 ,28,1,$469 ,28,
resnet-18 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,27454.08,2264.67,,14.262,183.027,"$1,925 ",150,1,"$1,925 ",150,0.946
resnet-18 ,OV-2022.3-8991,accel,Intel® Arc A40 Pro,6911.91,3812.93,,,,,,1,,50,0.69
resnet-18 ,OV-2022.3-8991,accel,Intel® Arc A50 Pro,8440.86,4691.2,,,,,,1,,75,0.7
resnet-18 ,OV-2022.3-8991,accel,Intel® Arc A750,31437.04,17244.34,,130.988,139.720,$240 ,225,1,$240 ,225,0.54
resnet-18 ,OV-2022.3-8991,accel,Intel® Arc A770,35554.47,19135.31,,104.572,158.020,$340 ,225,1,$340 ,225,0.54
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,729.93,,240.59,1.219,5.839,$599 ,125,1,$599 ,125,2.91
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,238.44,,68.18,0.398,1.908,$599 ,125,1,$599 ,125,4.74
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,895.28,,255.91,1.495,7.162,$599 ,125,1,$599 ,125,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,576.86,,153.71,1.753,4.615,$329 ,125,1,$329 ,125,3.04
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,216.97,,64.36,0.659,1.736,$329 ,125,1,$329 ,125,5.3
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,717.91,,188.59,2.182,5.743,$329 ,125,1,$329 ,125,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,400.118,,133.834,0.608,2.425,$658 ,165,1,$658 ,165,3.0384
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,229.863,,66.122,0.349,1.393,$658 ,165,1,$658 ,165,5.2538
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,574.341,,155.749,0.873,3.481,$658 ,165,1,$658 ,165,
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,317.744,,149.441,0
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,97.606,,52.17,0.392,1.375,$249 ,71,1,$249 ,71,10.851
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,980.813,,268.009,0.312,4.671,"$3,144 ",210,2,"$1,572 ",105,2.9838
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,2905.803,,748.583,0.405,6.457,"$7,166 ",450,2,"$3,583 ",225,1.475
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,11359.88,5494.15,5497.22,0.67,27.707,"$16,954 ",410,2,"$8,477 ",205,0.94
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,11359.88,,1118.97,0.67,27.707,"$16,954 ",410,2,"$8,477 ",205,0.94
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,937.572,,255.866,0.468,3.75,"$2,004 ",250,2,"$1,002 ",125,3.0985
resnet-50,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,235.061,,63.241,0.501,8.395,$469 ,28,1,$469 ,28,4.7975
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,504.247,,125.407,1.075,18.009,$469 ,28,1,$469 ,28,4.7975
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,595.133,,150.024,1.269,21.255,$469 ,28,1,$469 ,28,4.7975
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,235.061,,63.241,0.501,8.395,$469 ,28,1,$469 ,28,4.7975
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,235.061,,63.241,0.501,8.395,$469 ,28,1,$469 ,28,4.7975
resnet-50,OV-2022.3-8991,accel,Intel® Flex-170 GPU,10810.92,1005.16,,5.616,72.073,"$1,925 ",150,1,"$1,925 ",150,1.624
resnet-50,OV-2022.3-8991,accel,Intel® Arc A40 Pro,2831.48,1628.15,,,,,,1,,50,1.28
resnet-50,OV-2022.3-8991,accel,Intel® Arc A50 Pro,3233.61,1812.84,,,,,,1,,75,1.3
resnet-50,OV-2022.3-8991,accel,Intel® Arc A750,11449.86,6590.86,,47.708,50.888,$240 ,225,1,$240 ,225,1.03
resnet-50,OV-2022.3-8991,accel,Intel® Arc A770,12512.67,6958.31,,36.802,55.612,$340 ,225,1,$340 ,225,0.98
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,11.75,,4.24,0.02,0.094,$599 ,125,1,$599 ,125,162.07
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,4.5,,1.45,0.008,0.036,$599 ,125,1,$599 ,125,226.99
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,11.63,,4.24,0.019,0.093,$599 ,125,1,$599 ,125,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,8.21,,2.7,0.025,0.066,$329 ,125,1,$329 ,125,147.53
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,4.22,,1.36,0.013,0.034,$329 ,125,1,$329 ,125,241.92
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,8,,2.7,0.024,0.064,$329 ,125,1,$329 ,125,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,6.712,,2.394,0.01,0.041,$658 ,165,1,$658 ,165,175.7493
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,4.228,,1.262,0.006,0.026,$658 ,165,1,$658 ,165,241.7838
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,6.666,,2.393,0.01,0.04,$658 ,165,1,$658 ,165,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,2.04
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,4.871,,2.935,0.008,0.039,$594 ,125,1,$594 ,125,239.8346
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,1.55,,0.919,0.006,0.022,$249 ,71,1,$249 ,71,665.2714
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,15.706,,4.572,0.005,0.075,"$3,144 ",210,2,"$1,572 ",105,132.0319
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,152.74,144.16,144.02,0.021,0.339,"$7,166 ",450,2,"$3,583 ",225,14.48
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,152.74,,20.32,0.021,0.339,"$7,166 ",450,2,"$3,583 ",225,14.48
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,47.365,,14.722,0.003,0.116,"$16,954 ",410,2,"$8,477 ",205,44.387
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,14.966,,4.35,0.007,0.06,"$2,004 ",250,2,"$1,002 ",125,138.9625
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,3.556,,1.015,0.008,0.127,$469 ,28,1,$469 ,28,284.2379
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,8.239,,2.545,0.018,0.294,$469 ,28,1,$469 ,28,122.4561
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,3.565,,1.01,0.008,0.127,$469 ,28,1,$469 ,28,
ssd-resnet34-1200 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,132.44,18.19,,0.069,0.883,"$1,925 ",150,1,"$1,925 ",150,19.933
ssd-resnet34-1200 ,OV-2022.3-8991,accel,Intel® Arc A40 Pro,35.75,24.28,,,,,,1,,50,33.83
ssd-resnet34-1200 ,OV-2022.3-8991,accel,Intel® Arc A50 Pro,44.01,31.32,,,,,,1,,75,29.62
ssd-resnet34-1200 ,OV-2022.3-8991,accel,Intel® Arc A750,136.84,107.27,,0.570,0.608,$240 ,225,1,$240 ,225,18.81
ssd-resnet34-1200 ,OV-2022.3-8991,accel,Intel® Arc A770,153.43,116.15,,0.451,0.682,$340 ,225,1,$340 ,225,19.88
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,18.79,,6.86,0.031,0.15,$599 ,125,1,$599 ,125,99.01
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,7.59,,2.3,0.013,0.061,$599 ,125,1,$599 ,125,132.32
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,18.14,,7,0.03,0.145,$599 ,125,1,$599 ,125,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,12.91,,4.36,0.039,0.103,$329 ,125,1,$329 ,125,95.92
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,7.13,,2.16,0.022,0.057,$329 ,125,1,$329 ,125,140.88
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,16.63,,5.72,0.051,0.133,$329 ,125,1,$329 ,125,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,10.652,,3.873,0.016,0.065,$658 ,165,1,$658 ,165,111.0757
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,7.059,,2.154,0.011,0.043,$658 ,165,1,$658 ,165,142.0745
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,14.933,,4.935,0.023,0.091,$658 ,165,1,$658 ,165,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,2.90
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,7.413,,4.615,0.012,0.059,$594 ,125,1,$594 ,125,157.3622
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,2.386,,1.481,0.01,0.034,$249 ,71,1,$249 ,71,422.1157
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,29.251,,7.301,0.009,0.139,"$3,144 ",210,2,"$1,572 ",105,69.3596
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,381.85,151.97,151.98,0.053,0.849,"$7,166 ",450,2,"$3,583 ",225,7.95
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,381.85,,30.96,0.053,0.849,"$7,166 ",450,2,"$3,583 ",225,7.95
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,93.081,,21.382,0.005,0.227,"$16,954 ",410,2,"$8,477 ",205,22.9476
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,27.814,,6.966,0.014,0.111,"$2,004 ",250,2,"$1,002 ",125,72.9773
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,6.54,,1.677,0.014,0.234,$469 ,28,1,$469 ,28,152.602
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,15.391,,4.571,0.033,0.55,$469 ,28,1,$469 ,28,61.6002
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,17.962,,4.848,0.038,0.642,$469 ,28,1,$469 ,28,
unet-camvid--0001 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,218.12,35.2,,0.113,1.454,"$1,925 ",150,1,"$1,925 ",150,7.149
unet-camvid--0001 ,OV-2022.3-8991,accel,Intel® Arc A40 Pro,51.45,33.45,,,,,,1,,50,
unet-camvid--0001 ,OV-2022.3-8991,accel,Intel® Arc A50 Pro,61.08,40.36,,,,,,1,,75,
unet-camvid--0001 ,OV-2022.3-8991,accel,Intel® Arc A750,212.93,151.71,,0.887,0.946,$240 ,225,1,$240 ,225,6.27
unet-camvid--0001 ,OV-2022.3-8991,accel,Intel® Arc A770,246.87,165.05,,0.726,1.097,$340 ,225,1,$340 ,225,5.66
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,802.63,,252.57,1.34,6.421,$599 ,125,1,$599 ,125,2.69
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,249.5,,86.81,0.417,1.996,$599 ,125,1,$599 ,125,4.79
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,795.31,,247.17,1.328,6.362,$599 ,125,1,$599 ,125,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,638.25,,206.62,1.94,5.106,$329 ,125,1,$329 ,125,2.59
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,229.22,,81.49,0.697,1.834,$329 ,125,1,$329 ,125,5.22
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,631.71,,205.81,1.92,5.054,$329 ,125,1,$329 ,125,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,428.506,,162.077,0.651,2.597,$658 ,165,1,$658 ,165,2.4778
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,245.738,,84.457,0.373,1.489,$658 ,165,1,$658 ,165,3.8792
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,598.947,,195.608,0.91,3.63,$658 ,165,1,$658 ,165,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,147.041,,8
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,359.61,,173.635,0.605,2.877,$594 ,125,1,$594 ,125,2.9037
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,109.066,,64.87,0.438,1.536,$249 ,71,1,$249 ,71,9.3792
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,1058.322,,337.035,0.337,5.04,"$3,144 ",210,2,"$1,572 ",105,2.4971
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,7344.88,5212.9,5236.28,1.025,16.322,"$7,166 ",450,2,"$3,583 ",225,1.06
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,7344.88,,1405.51,1.025,16.322,"$7,166 ",450,2,"$3,583 ",225,1.06
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,2931.242,,901.832,0.173,7.149,"$16,954 ",410,2,"$8,477 ",205,1.215
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,1015.77,,321.263,0.507,4.063,"$2,004 ",250,2,"$1,002 ",125,2.6076
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,258.05,,79.963,0.55,9.216,$469 ,28,1,$469 ,28,4.1833
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,492.645,,157.98,1.05,17.594,$469 ,28,1,$469 ,28,2.5788
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,606.117,,186.339,1.292,21.647,$469 ,28,1,$469 ,28,
yolo_v3_tiny,OV-2022.3-8991,accel,Intel® Flex-170 GPU,3634.16,1209.67,,1.888,24.228,"$1,925 ",150,1,"$1,925 ",150,1.293
yolo_v3_tiny,OV-2022.3-8991,accel,Intel® Arc A40 Pro,,,,,,,,1,,50,
yolo_v3_tiny,OV-2022.3-8991,accel,Intel® Arc A50 Pro,,,,,,,,1,,75,
yolo_v3_tiny,OV-2022.3-8991,accel,Intel® Arc A750,1557.21,1409.73,,6.488,6.921,$240 ,225,1,$240 ,225,1.89
yolo_v3_tiny,OV-2022.3-8991,accel,Intel® Arc A770,1659.92,1516.83,,4.882,7.377,$340 ,225,1,$340 ,225,1.83
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,37.15,,13.03,0.062,0.297,$599 ,125,1,$599 ,125,55.96
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,12.92,,4.26,0.022,0.103,$599 ,125,1,$599 ,125,78.73
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,37.16,,13.54,0.062,0.297,$599 ,125,1,$599 ,125,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,25.5,,8.36,0.078,0.204,$329 ,125,1,$329 ,125,53.79
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,12.15,,4,0.037,0.097,$329 ,125,1,$329 ,125,83.64
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,31.99,,10.82,0.097,0.256,$329 ,125,1,$329 ,125,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,21.833,,7.096,0.033,0.132,$658 ,165,1,$658 ,165,58.4745
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,11.956,,3.869,0.018,0.072,$658 ,165,1,$658 ,165,85.1633
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,26.693,,8.644,0.041,0.162,$658 ,165,1,$658 ,165,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,6.399,,3.765,0.
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,15.614,,7.925,0.026,0.125,$594 ,125,1,$594 ,125,71.631
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,4.674,,2.804,0.019,0.066,$249 ,71,1,$249 ,71,214.0957
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,47.338,,14.464,0.015,0.225,"$3,144 ",210,2,"$1,572 ",105,45.7699
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,252.03,228.55,228.67,0.035,0.56,"$7,166 ",450,2,"$3,583 ",225,15.01
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,252.03,,58.12,0.035,0.56,"$7,166 ",450,2,"$3,583 ",225,15.01
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,131.466,,41.001,0.008,0.321,"$16,954 ",410,2,"$8,477 ",205,19.2807
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,45.047,,13.741,0.022,0.18,"$2,004 ",250,2,"$1,002 ",125,48.0344
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,11.067,,3.259,0.024,0.395,$469 ,28,1,$469 ,28,92.2912
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,25.048,,7.384,0.053,0.895,$469 ,28,1,$469 ,28,39.1492
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,29.658,,8.32,0.063,1.059,$469 ,28,1,$469 ,28,
yolo_v4,OV-2022.3-8991,accel,Intel® Flex-170 GPU,454.49,56.78,,0.236,3.03,"$1,925 ",150,1,"$1,925 ",150,6.969
yolo_v4,OV-2022.3-8991,accel,Intel® Arc A40 Pro,,,,,,,,1,,50,
yolo_v4,OV-2022.3-8991,accel,Intel® Arc A50 Pro,,,,,,,,1,,75,
yolo_v4,OV-2022.3-8991,accel,Intel® Arc A750,288.51,229.91,,1.202,1.282,$240 ,225,1,$240 ,225,
yolo_v4,OV-2022.3-8991,accel,Intel® Arc A770,393.82,247.1,,1.158,1.750,$340 ,225,1,$340 ,225,
end_rec,,,,,,,,,,,,,,
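The rows above are framed by `begin_rec`/`end_rec` marker lines, with the column order given in the table header (Network model, Release, IE-Type, Platform name, Throughput-INT8/FP16/FP32, Value, Efficiency, Price, TDP, Sockets, Price/socket, TDP/socket, Latency). A minimal parsing sketch follows; the function and field names (`parse_records`, `money`, the `COLUMNS` keys) are illustrative, not part of any OpenVINO tooling, and the checked relationship — Value ≈ INT8 throughput per dollar, Efficiency ≈ INT8 throughput per watt of TDP — is inferred from the rows shown here, not from a published definition:

```python
import csv
from io import StringIO

# Column order taken from the table header row.
COLUMNS = ["model", "release", "ie_type", "platform",
           "tput_int8", "tput_fp16", "tput_fp32",
           "value", "efficiency", "price", "tdp",
           "sockets", "price_per_socket", "tdp_per_socket", "latency"]

def parse_records(text):
    """Group CSV rows into records framed by begin_rec/end_rec markers."""
    records, current = [], None
    for row in csv.reader(StringIO(text)):
        if not row:
            continue
        if row[0] == "begin_rec":
            current = []
        elif row[0] == "end_rec":
            if current is not None:
                records.append(current)
            current = None
        elif current is not None:
            current.append(dict(zip(COLUMNS, row)))
    return records

def money(cell):
    """Parse a price cell such as '$3,144 ' into a float."""
    return float(cell.replace("$", "").replace(",", "").strip())

# Two rows copied verbatim from the resnet-50 record above.
sample = """begin_rec,,,,,,,,,,,,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,729.93,,240.59,1.219,5.839,$599 ,125,1,$599 ,125,2.91
resnet-50,OV-2022.3-8991,accel,Intel® Arc A770,12512.67,6958.31,,36.802,55.612,$340 ,225,1,$340 ,225,0.98
end_rec,,,,,,,,,,,,,,
"""

(rec,) = parse_records(sample)
for r in rec:
    # Value tracks INT8 throughput per dollar; Efficiency per watt of TDP.
    assert abs(float(r["value"]) - float(r["tput_int8"]) / money(r["price"])) < 0.01
    assert abs(float(r["efficiency"]) - float(r["tput_int8"]) / float(r["tdp"])) < 0.01
```

Empty cells (e.g. the missing FP16 throughput on CPU rows) come through as empty strings, so any numeric post-processing should guard on `r["tput_fp16"]` being non-empty before converting.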
Network model Release IE-Type Platform name Throughput-INT8 Throughput-FP16 Throughput-FP32 Value Efficiency Price TDP Sockets Price/socket TDP/socket Latency
begin_rec
bert-base-cased OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 163.72 57.83 0.273 1.31 $599 125 1 $599 125 15.53
bert-base-cased OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 56.07 23.3 0.094 0.449 $599 125 1 $599 125 19.62
bert-base-cased OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 210.17 85.83 0.351 1.681 $599 125 1 $599 125
bert-base-cased OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 128.05 45.94 0.389 1.024 $329 125 1 $329 125 12.71
bert-base-cased OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 53.03 21.9 0.161 0.424 $329 125 1 $329 125 20.81
bert-base-cased OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 163.33 64.74 0.496 1.307 $329 125 1 $329 125
bert-base-cased OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 96.06 35.627 0.146 0.582 $658 165 1 $658 165 17.1432
bert-base-cased OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 53.093 22.253 0.081 0.322 $658 165 1 $658 165 22.0002
bert-base-cased OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 108.306 44.797 0.165 0.656 $658 165 1 $658 165
bert-base-cased OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 69.053 40.243 0.116 0.552 $594 125 1 $594 125 18.309
bert-base-cased OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 23.402 14.614 0.094 0.33 $249 71 1 $249 71 44.8984
bert-base-cased OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 266.949 79.033 0.085 1.271 $3,144 210 2 $1,572 105 12.4065
bert-base-cased OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 2090.76 1372.2 1368.68 326.55 0.292 4.646 $7,166 450 2 $3,583 225 4.61
bert-base-cased OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 682.593 225.713 0.04 1.665 $16,954 410 2 $8,477 205 6.9035
bert-base-cased OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 256.994 75.502 0.128 1.028 $2,004 250 2 $1,002 125 13.0382
bert-base-cased OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 64.632 18.394 0.138 2.308 $469 28 1 $469 28 17.638
bert-base-cased OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 95.656 44.056 0.204 3.416 $469 28 1 $469 28 14.1005
bert-base-cased OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 128.005 50.592 0.273 4.572 $469 28 1 $469 28
bert-base-cased OV-2022.3-8991 accel Intel® Flex-170 GPU 906.3 348.52 0.471 6.042 $1,925 150 1 $1,925 150 7.381
bert-base-cased OV-2022.3-8991 accel Intel® Arc A40 Pro 289.24 152.74 1 50 7.14
bert-base-cased OV-2022.3-8991 accel Intel® Arc A50 Pro 343.57 180.89 1 75 6.64
bert-base-cased OV-2022.3-8991 accel Intel® Arc A750 993.22 1486.36 4.138 4.414 $240 225 1 $240 225 6.46
bert-base-cased OV-2022.3-8991 accel Intel® Arc A770 1088.69 1622.61 3.202 4.839 $340 225 1 $340 225 6.29
end_rec
begin_rec
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 51.72 17.6 0.086 0.414 $599 125 1 $599 125 49.13
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 19.05 6.88 0.032 0.152 $599 125 1 $599 125 55.82
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 53 19.95 0.088 0.424 $599 125 1 $599 125
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 35.31 11.04 0.107 0.282 $329 125 1 $329 125 41.56
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 17.93 6.46 0.054 0.143 $329 125 1 $329 125 59.37
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 43.42 16.19 0.132 0.347 $329 125 1 $329 125
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 7.714 3.093 0.012 0.047 $658 165 1 $658 165 155.3633
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 5.617 1.978 0.009 0.034 $658 165 1 $658 165 181.8303
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 10.602 3.753 0.016 0.064 $658 165 1 $658 165
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 4.801 2.729 0.008 0.038 $594 125 1 $594 125 200.0794
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 2.098 1.32 0.008 0.03 $249 71 1 $249 71 492.0938
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 21.062 7.021 0.007 0.1 $3,144 210 2 $1,572 105 101.4694
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 651.95 378.57 384.02 91.18 0.091 1.449 $7,166 450 2 $3,583 225 12.87
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 46.064 19.051 0.003 0.112 $16,954 410 2 $8,477 205 49.4869
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 20.014 6.726 0.01 0.08 $2,004 250 2 $1,002 125 105.9423
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 5.192 1.626 0.011 0.185 $469 28 1 $469 28 203.6311
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 10.476 3.914 0.022 0.374 $469 28 1 $469 28 95.6598
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 11.75 4.168 0.025 0.42 $469 28 1 $469 28
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 accel Intel® Flex-170 GPU 74.47 25.77 0.039 0.496 $1,925 150 1 $1,925 150 19.768
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 accel Intel® Arc A40 Pro 74.01 44.31 1 50 17.89
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 accel Intel® Arc A50 Pro 92.53 53.04 1 75 15.77
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 accel Intel® Arc A750 270.49 185.64 1.127 1.202 $240 225 1 $240 225 14.36
bert-large-uncased-whole-word-masking-squad-0001 OV-2022.3-8991 accel Intel® Arc A770 337.47 205.46 0.993 1.500 $340 225 1 $340 225 13.97
end_rec
begin_rec
deeplabv3 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 184.93 63.79 0.309 1.479 $599 125 1 $599 125 10.31
deeplabv3 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 69.31 22.67 0.116 0.554 $599 125 1 $599 125 15.02
deeplabv3 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 191.48 62.99 0.32 1.532 $599 125 1 $599 125
deeplabv3 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 139.02 48.48 0.423 1.112 $329 125 1 $329 125 10.48
deeplabv3 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 65.55 21.24 0.199 0.524 $329 125 1 $329 125 16.12
deeplabv3 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 154.19 52.87 0.469 1.234 $329 125 1 $329 125
deeplabv3 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 99.078 36.552 0.151 0.6 $658 165 1 $658 165 11.269
deeplabv3 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 57.707 13.789 0.088 0.35 $658 165 1 $658 165 16.263
deeplabv3 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 115.59 39.82 0.176 0.701 $658 165 1 $658 165
deeplabv3 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 79.42 21.03 0.134 0.635 $594 125 1 $594 125 12.8397
deeplabv3 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 26.173 16.906 0.105 0.369 $249 71 1 $249 71 37.9245
deeplabv3 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 248.049 81.667 0.079 1.181 $3,144 210 2 $1,572 105 8.9485
deeplabv3 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 1139.5 702.28 699.04 271.62 0.159 2.532 $7,166 450 2 $3,583 225 2.47
deeplabv3 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 632.113 168.65 0.037 1.542 $16,954 410 2 $8,477 205 4.0073
deeplabv3 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 241.703 78.963 0.121 0.967 $2,004 250 2 $1,002 125 9.356
deeplabv3 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 64.13 18.519 0.137 2.29 $469 28 1 $469 28 16.6586
deeplabv3 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 104.926 24.592 0.224 3.747 $469 28 1 $469 28 9.1435
deeplabv3 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 121.441 30.498 0.259 4.337 $469 28 1 $469 28
deeplabv3 OV-2022.3-8991 accel Intel® Flex-170 GPU 882.04 98.95 0.458 5.88 $1,925 150 1 $1,925 150 2.674
deeplabv3 OV-2022.3-8991 accel Intel® Arc A40 Pro 246.48 197.01 1 50 4.8
deeplabv3 OV-2022.3-8991 accel Intel® Arc A50 Pro 281.31 221.77 1 75 4.74
deeplabv3 OV-2022.3-8991 accel Intel® Arc A750 813.12 626.48 3.388 3.614 $240 225 1 $240 225 1.9
deeplabv3 OV-2022.3-8991 accel Intel® Arc A770 763.91 595.5 2.247 3.395 $340 225 1 $340 225 1.83
end_rec
begin_rec
densenet-121 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 777.86 284.56 1.299 6.223 $599 125 1 $599 125 3.26
densenet-121 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 195.3 66.46 0.326 1.562 $599 125 1 $599 125 6.8
densenet-121 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 899.5 293.29 1.502 7.196 $599 125 1 $599 125
densenet-121 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 612.99 184.9 1.863 4.904 $329 125 1 $329 125 3.12
densenet-121 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 178.37 62.69 0.542 1.427 $329 125 1 $329 125 8.37
densenet-121 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 707.99 207.12 2.152 5.664 $329 125 1 $329 125
densenet-121 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 457.193 165.166 0.695 2.771 $658 165 1 $658 165 3.141
densenet-121 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 203.417 68.438 0.309 1.233 $658 165 1 $658 165 6.6728
densenet-121 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 575.442 179.858 0.875 3.488 $658 165 1 $658 165
densenet-121 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 360.501 182.543 0.607 2.884 $594 125 1 $594 125 3.6046
densenet-121 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 114.844 67.188 0.461 1.618 $249 71 1 $249 71 9.7609
densenet-121 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 1116.372 295.952 0.355 5.316 $3,144 210 2 $1,572 105 3.9606
densenet-121 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 8279.14 4856.54 4862.51 1137.41 1.155 18.398 $7,166 450 2 $3,583 225 2.39
densenet-121 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 3155.106 815.725 0.186 7.695 $16,954 410 2 $8,477 205 2.8831
densenet-121 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 1064.824 283.423 0.531 4.259 $2,004 250 2 $1,002 125 4.0689
densenet-121 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 265.167 74.501 0.565 9.47 $469 28 1 $469 28 4.7413
densenet-121 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 391.185 123.519 0.834 13.971 $469 28 1 $469 28 6.5259
densenet-121 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 526.12 150.35 1.122 18.79 $469 28 1 $469 28
densenet-121 OV-2022.3-8991 accel Intel® Flex-170 GPU 3440.18 1178.68 1.787 22.935 $1,925 150 1 $1,925 150 3.302
densenet-121 OV-2022.3-8991 accel Intel® Arc A40 Pro 779.8 650.48 1 50 2.97
densenet-121 OV-2022.3-8991 accel Intel® Arc A50 Pro 817.69 637.04 1 75 3.06
densenet-121 OV-2022.3-8991 accel Intel® Arc A750 2022.98 1666.6 8.429 8.991 $240 225 1 $240 225 2.56
densenet-121 OV-2022.3-8991 accel Intel® Arc A770 2076.41 1647.41 6.107 9.228 $340 225 1 $340 225 2.56
end_rec
begin_rec
efficientdet-d0 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 209.26 106.11 0.349 1.674 $599 125 1 $599 125 10.36
efficientdet-d0 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 82.04 47.85 0.137 0.656 $599 125 1 $599 125 22.35
efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 197.85 108.3 0.33 1.583 $599 125 1 $599 125
efficientdet-d0 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 155.65 90.91 0.473 1.245 $329 125 1 $329 125 9.92
efficientdet-d0 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 77.28 44.91 0.235 0.618 $329 125 1 $329 125 22.93
efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 172.54 95.94 0.524 1.38 $329 125 1 $329 125
efficientdet-d0 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 112.297 64.06 0.171 0.681 $658 165 1 $658 165 11.8265
efficientdet-d0 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 73.766 38.742 0.112 0.447 $658 165 1 $658 165 21.403
efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 128.735 76.62 0.196 0.78 $658 165 1 $658 165
efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 94.981 36.434 0.16 0.76 $594 125 1 $594 125 12.658
efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 35.831 27.306 0.144 0.505 $249 71 1 $249 71 30.9469
efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 239.06 161.224 0.076 1.138 $3,144 210 2 $1,572 105 13.9735
efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 875.53 495.04 492.93 560.48 0.122 1.946 $7,166 450 2 $3,583 225 5.07
efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 471.02 300.291 0.028 1.149 $16,954 410 2 $8,477 205 9.3866
efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 231.873 156.285 0.116 0.927 $2,004 250 2 $1,002 125 14.1605
efficientdet-d0 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 71.482 41.123 0.152 2.553 $469 28 1 $469 28 16.6952
efficientdet-d0 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 92.52 50.538 0.197 3.304 $469 28 1 $469 28 17.295
efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 107.688 56.901 0.23 3.846 $469 28 1 $469 28
efficientdet-d0 OV-2022.3-8991 accel Intel® Flex-170 GPU 463.67 295.13 0.241 3.091 $1,925 150 1 $1,925 150 5.603
efficientdet-d0 OV-2022.3-8991 accel Intel® Arc A40 Pro 106.04 142.82 1 50 12.31
efficientdet-d0 OV-2022.3-8991 accel Intel® Arc A50 Pro 110.64 142.38 1 75 11.98
efficientdet-d0 OV-2022.3-8991 accel Intel® Arc A750 496.2 672.57 2.068 2.205 $240 225 1 $240 225 5.17
efficientdet-d0 OV-2022.3-8991 accel Intel® Arc A770 497.52 680.16 1.463 2.211 $340 225 1 $340 225 5.03
end_rec
begin_rec
faster_rcnn_resnet50_coco OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 5.94 2.41 0.01 0.048 $599 125 1 $599 125 270.57
faster_rcnn_resnet50_coco OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 2.3 0.71 0.004 0.018 $599 125 1 $599 125 437.94
faster_rcnn_resnet50_coco OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 6.45 2.25 0.011 0.052 $599 125 1 $599 125
faster_rcnn_resnet50_coco OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 4.55 1.88 0.014 0.036 $329 125 1 $329 125 310.58
faster_rcnn_resnet50_coco OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 2.17 0.67 0.007 0.017 $329 125 1 $329 125 465.03
faster_rcnn_resnet50_coco OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 5.3 2.01 0.016 0.042 $329 125 1 $329 125
faster_rcnn_resnet50_coco OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 12.921 4.016 0.02 0.078 $658 165 1 $658 165 89.8929
faster_rcnn_resnet50_coco OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 6.802 1.82 0.01 0.041 $658 165 1 $658 165 149.7396
faster_rcnn_resnet50_coco OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 15.679 4.499 0.024 0.095 $658 165 1 $658 165
faster_rcnn_resnet50_coco OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 8.977 4.542 0.015 0.072 $594 125 1 $594 125 137.1747
faster_rcnn_resnet50_coco OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 2.867 1.464 0.012 0.04 $249 71 1 $249 71 353.2042
faster_rcnn_resnet50_coco OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 29.332 8.19 0.009 0.14 $3,144 210 2 $1,572 105 78.1722
faster_rcnn_resnet50_coco OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 19.71 282.45 18.01 18.15 32.43 0.003 0.044 $7,166 450 2 $3,583 225 129.2 12.03
faster_rcnn_resnet50_coco OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 85.213 22.066 0.005 0.208 $16,954 410 2 $8,477 205 30.4317
faster_rcnn_resnet50_coco OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 27.847 7.786 0.014 0.111 $2,004 250 2 $1,002 125 78.6604
faster_rcnn_resnet50_coco OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 7.027 1.855 0.015 0.251 $469 28 1 $469 28 151.8783
faster_rcnn_resnet50_coco OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 13.823 3.545 0.029 0.494 $469 28 1 $469 28 70.7933
faster_rcnn_resnet50_coco OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 16.898 4.191 0.036 0.604 $469 28 1 $469 28
faster_rcnn_resnet50_coco OV-2022.3-8991 accel Intel® Flex-170 GPU 216.3 23.42 0.112 1.442 $1,925 150 1 $1,925 150 9.137
faster_rcnn_resnet50_coco OV-2022.3-8991 accel Intel® Arc A40 Pro 9.24 7.38 1 50 110.7
faster_rcnn_resnet50_coco OV-2022.3-8991 accel Intel® Arc A50 Pro 10.67 8.36 1 75 96.79
faster_rcnn_resnet50_coco OV-2022.3-8991 accel Intel® Arc A750 37.58 27.13 0.157 0.167 $240 225 1 $240 225 29.08
faster_rcnn_resnet50_coco OV-2022.3-8991 accel Intel® Arc A770 38.19 27.28 0.112 0.170 $340 225 1 $340 225 28.46
end_rec
begin_rec
Inception-V4 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 219.06 71.15 0.366 1.752 $599 125 1 $599 125 10.19
Inception-V4 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 65.91 18.1 0.11 0.527 $599 125 1 $599 125 16.55
Inception-V4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 279.58 78.65 0.467 2.237 $599 125 1 $599 125
Inception-V4 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 171.19 45.8 0.52 1.37 $329 125 1 $329 125 9.14
Inception-V4 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 62.45 17.02 0.19 0.5 $329 125 1 $329 125 17.48
Inception-V4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 219.02 52.56 0.666 1.752 $329 125 1 $329 125
Inception-V4 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 121.813 39.391 0.185 0.738 $658 165 1 $658 165 11.0425
Inception-V4 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 71.229 17.755 0.108 0.432 $658 165 1 $658 165 19.7132
Inception-V4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 175.049 44.894 0.266 1.061 $658 165 1 $658 165
Inception-V4 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 92.646 44.966 0.156 0.741 $594 125 1 $594 125 12.3153
Inception-V4 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 28.537 15.13 0.115 0.402 $249 71 1 $249 71 36.8888
Inception-V4 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 301.215 77.005 0.096 1.434 $3,144 210 2 $1,572 105 10.5711
Inception-V4 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 3406.7 1879.09 1867.99 331.56 0.475 7.57 $7,166 450 2 $3,583 225 3.23
Inception-V4 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 937.139 225.776 0.055 2.286 $16,954 410 2 $8,477 205 5.6984
Inception-V4 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 287.767 73.617 0.144 1.151 $2,004 250 2 $1,002 125 11.1114
Inception-V4 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 71.295 18.482 0.152 2.546 $469 28 1 $469 28 15.8294
Inception-V4 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 158.282 36.884 0.337 5.653 $469 28 1 $469 28 10.6245
Inception-V4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 182.132 44.198 0.388 6.505 $469 28 1 $469 28
Inception-V4 OV-2022.3-8991 accel Intel® Flex-170 GPU 2986.91 298.6 1.552 19.913 $1,925 150 1 $1,925 150 3.968
Inception-V4 OV-2022.3-8991 accel Intel® Arc A40 Pro 766.04 436.51 1 50 5.7
Inception-V4 OV-2022.3-8991 accel Intel® Arc A50 Pro 877.68 495.62 1 75 5.88
Inception-V4 OV-2022.3-8991 accel Intel® Arc A750 3259.61 1855.9 13.582 14.487 $240 225 1 $240 225 3.75
Inception-V4 OV-2022.3-8991 accel Intel® Arc A770 3417.37 2025.25 10.051 15.188 $340 225 1 $340 225 3.56
end_rec
begin_rec
mobilenet-ssd OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 1754.44 664.82 2.929 14.036 $599 125 1 $599 125 1.4
mobilenet-ssd OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 528.03 168.57 0.882 4.224 $599 125 1 $599 125 2.35
mobilenet-ssd OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 1568.83 665.79 2.619 12.551 $599 125 1 $599 125
mobilenet-ssd OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 1240.01 437.11 3.769 9.92 $329 125 1 $329 125 1.47
mobilenet-ssd OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 493.18 157.94 1.499 3.945 $329 125 1 $329 125 2.43
mobilenet-ssd OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 1063.27 454.21 3.232 8.506 $329 125 1 $329 125
mobilenet-ssd OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 1054.462 346.546 1.603 6.391 $658 165 1 $658 165 1.4898
mobilenet-ssd OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 493.088 145.503 0.749 2.988 $658 165 1 $658 165 2.472
mobilenet-ssd OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 1056.241 361.472 1.605 6.401 $658 165 1 $658 165
mobilenet-ssd OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 774.346 345.309 1.304 6.195 $594 125 1 $594 125 1.5452
mobilenet-ssd OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 233.43 147.098 0.937 3.288 $249 71 1 $249 71 4.5879
mobilenet-ssd OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 2331.207 691.743 0.741 11.101 $3,144 210 2 $1,572 105 1.4852
mobilenet-ssd OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 16445.75 8733.64 8626.42 2736.2 2.295 36.546 $7,166 450 2 $3,583 225 0.65
mobilenet-ssd OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 6691.915 1796.357 0.395 16.322 $16,954 410 2 $8,477 205 1.0518
mobilenet-ssd OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 2225.935 667.692 1.111 8.904 $2,004 250 2 $1,002 125 1.5444
mobilenet-ssd OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 579.307 166.959 1.235 20.69 $469 28 1 $469 28 2.0215
mobilenet-ssd OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 582.636 243.945 1.242 20.808 $469 28 1 $469 28 2.548
mobilenet-ssd OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 744.231 292.071 1.587 26.58 $469 28 1 $469 28
mobilenet-ssd OV-2022.3-8991 accel Intel® Flex-170 GPU 3548.98 1412.68 1.844 23.66 $1,925 150 1 $1,925 150 1.344
mobilenet-ssd OV-2022.3-8991 accel Intel® Arc A40 Pro 2676.23 1939.24 1 50 0.95
mobilenet-ssd OV-2022.3-8991 accel Intel® Arc A50 Pro 2874.1 1945.48 1 75 0.98
mobilenet-ssd OV-2022.3-8991 accel Intel® Arc A750 6510.01 5188.87 27.125 28.933 $240 225 1 $240 225 0.79
mobilenet-ssd OV-2022.3-8991 accel Intel® Arc A770 6717.52 5312.93 19.757 29.856 $340 225 1 $340 225 0.76
end_rec
begin_rec
mobilenet-v2 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 4041.77 2123.33 6.748 32.334 $599 125 1 $599 125 0.66
mobilenet-v2 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 978.61 424.34 1.634 7.829 $599 125 1 $599 125 1.21
mobilenet-v2 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 4630.44 1944.62 7.73 37.044 $599 125 1 $599 125
mobilenet-v2 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 3306.92 1403.57 10.051 26.455 $329 125 1 $329 125 0.65
mobilenet-v2 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 919.85 384.42 2.796 7.359 $329 125 1 $329 125 1.36
mobilenet-v2 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 3556.06 1332.32 10.809 28.448 $329 125 1 $329 125
mobilenet-v2 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 2446.221 1003.129 3.718 14.826 $658 165 1 $658 165 0.7182
mobilenet-v2 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 1265.969 389.894 1.924 7.673 $658 165 1 $658 165 1.3894
mobilenet-v2 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 2680.458 1013.049 4.074 16.245 $658 165 1 $658 165
mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 2067.162 868.25 3.48 16.537 $594 125 1 $594 125 0.7363
mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 594.283 479.567 2.387 8.37 $249 71 1 $249 71 1.8531
mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 5882.455 1895.498 1.871 28.012 $3,144 210 2 $1,572 105 1.3871
mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 28383.76 16212.74 16065.38 7254.28 3.961 63.075 $7,166 450 2 $3,583 225 0.55
mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 15616.083 4308.927 0.921 38.088 $16,954 410 2 $8,477 205 0.8685
mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 5616.283 1835.686 2.803 22.465 $2,004 250 2 $1,002 125 1.404
mobilenet-v2 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 1463.21 538.597 3.12 52.258 $469 28 1 $469 28 0.8864
mobilenet-v2 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 2076.015 544.641 4.426 74.143 $469 28 1 $469 28 1.7212
mobilenet-v2 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 2677.374 698.942 5.709 95.621 $469 28 1 $469 28
mobilenet-v2 OV-2022.3-8991 accel Intel® Flex-170 GPU 18371.95 4738.33 9.544 122.48 $1,925 150 1 $1,925 150 1.15
mobilenet-v2 OV-2022.3-8991 accel Intel® Arc A40 Pro 1 50
mobilenet-v2 OV-2022.3-8991 accel Intel® Arc A50 Pro 1 75
mobilenet-v2 OV-2022.3-8991 accel Intel® Arc A750 $0 0 $240 225 1 $240 225
mobilenet-v2 OV-2022.3-8991 accel Intel® Arc A770 $0 0 $340 225 1 $340 225
end_rec
begin_rec
resnet-18 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 1495.77 415.82 2.497 11.966 $599 125 1 $599 125 1.38
resnet-18 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 497.54 150.99 0.831 3.98 $599 125 1 $599 125 2.19
resnet-18 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 1821.4 615.14 3.041 14.571 $599 125 1 $599 125
resnet-18 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 1169.04 336.09 3.553 9.352 $329 125 1 $329 125 1.5
resnet-18 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 467.43 141.76 1.421 3.739 $329 125 1 $329 125 2.36
resnet-18 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 1443.42 445.25 4.387 11.547 $329 125 1 $329 125
resnet-18 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 804.771 212.574 1.223 4.877 $658 165 1 $658 165 1.3886
resnet-18 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 491.337 146.839 0.747 2.978 $658 165 1 $658 165 2.2655
resnet-18 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 1180.984 365.777 1.795 7.157 $658 165 1 $658 165
resnet-18 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 654.533 307.741 1.102 5.236 $594 125 1 $594 125 1.6723
resnet-18 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 198.189 101.399 0.796 2.791 $249 71 1 $249 71 5.2039
resnet-18 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 2017.368 547.47 0.642 9.607 $3,144 210 2 $1,572 105 1.2913
resnet-18 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 27331.02 16095.24 16009.04 2329.12 3.814 60.736 $7,166 450 2 $3,583 225 0.38
resnet-18 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 6320.391 1582.817 0.373 15.416 $16,954 410 2 $8,477 205 0.667
resnet-18 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 1940.935 522.654 0.969 7.764 $2,004 250 2 $1,002 125 1.3451
resnet-18 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 480.992 126.244 1.026 17.178 $469 28 1 $469 28 2.242
resnet-18 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 1061.591 297.705 2.264 37.914 $469 28 1 $469 28 1.793
resnet-18 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 1237.94 342.513 2.64 44.212 $469 28 1 $469 28
resnet-18 OV-2022.3-8991 accel Intel® Flex-170 GPU 27454.08 2264.67 14.262 183.027 $1,925 150 1 $1,925 150 0.946
resnet-18 OV-2022.3-8991 accel Intel® Arc A40 Pro 6911.91 3812.93 1 50 0.69
resnet-18 OV-2022.3-8991 accel Intel® Arc A50 Pro 8440.86 4691.2 1 75 0.7
resnet-18 OV-2022.3-8991 accel Intel® Arc A750 31437.04 17244.34 130.988 139.720 $240 225 1 $240 225 0.54
resnet-18 OV-2022.3-8991 accel Intel® Arc A770 35554.47 19135.31 104.572 158.020 $340 225 1 $340 225 0.54
end_rec
begin_rec
resnet-50 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 729.93 240.59 1.219 5.839 $599 125 1 $599 125 2.91
resnet-50 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 238.44 68.18 0.398 1.908 $599 125 1 $599 125 4.74
resnet-50 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 895.28 255.91 1.495 7.162 $599 125 1 $599 125
resnet-50 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 576.86 153.71 1.753 4.615 $329 125 1 $329 125 3.04
resnet-50 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 216.97 64.36 0.659 1.736 $329 125 1 $329 125 5.3
resnet-50 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 717.91 188.59 2.182 5.743 $329 125 1 $329 125
resnet-50 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 400.118 133.834 0.608 2.425 $658 165 1 $658 165 3.0384
resnet-50 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 229.863 66.122 0.349 1.393 $658 165 1 $658 165 5.2538
resnet-50 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 574.341 155.749 0.873 3.481 $658 165 1 $658 165
resnet-50 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 97.606 52.17 0.392 1.375 $249 71 1 $249 71 10.851
resnet-50 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 980.813 268.009 0.312 4.671 $3,144 210 2 $1,572 105 2.9838
resnet-50 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 2905.803 748.583 0.405 6.457 $7,166 450 2 $3,583 225 1.475
resnet-50 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 11359.88 5494.15 5497.22 1118.97 0.67 27.707 $16,954 410 2 $8,477 205 0.94
resnet-50 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 937.572 255.866 0.468 3.75 $2,004 250 2 $1,002 125 3.0985
resnet-50 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 235.061 63.241 0.501 8.395 $469 28 1 $469 28 4.7975
resnet-50 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 504.247 125.407 1.075 18.009 $469 28 1 $469 28
resnet-50 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 595.133 150.024 1.269 21.255 $469 28 1 $469 28
resnet-50 OV-2022.3-8991 accel Intel® Flex-170 GPU 10810.92 1005.16 5.616 72.073 $1,925 150 1 $1,925 150 1.624
resnet-50 OV-2022.3-8991 accel Intel® Arc A40 Pro 2831.48 1628.15 1 50 1.28
resnet-50 OV-2022.3-8991 accel Intel® Arc A50 Pro 3233.61 1812.84 1 75 1.3
resnet-50 OV-2022.3-8991 accel Intel® Arc A750 11449.86 6590.86 47.708 50.888 $240 225 1 $240 225 1.03
resnet-50 OV-2022.3-8991 accel Intel® Arc A770 12512.67 6958.31 36.802 55.612 $340 225 1 $340 225 0.98
end_rec
begin_rec
ssd-resnet34-1200 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 11.75 4.24 0.02 0.094 $599 125 1 $599 125 162.07
ssd-resnet34-1200 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 4.5 1.45 0.008 0.036 $599 125 1 $599 125 226.99
ssd-resnet34-1200 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 11.63 4.24 0.019 0.093 $599 125 1 $599 125
ssd-resnet34-1200 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 8.21 2.7 0.025 0.066 $329 125 1 $329 125 147.53
ssd-resnet34-1200 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 4.22 1.36 0.013 0.034 $329 125 1 $329 125 241.92
ssd-resnet34-1200 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 8 2.7 0.024 0.064 $329 125 1 $329 125
ssd-resnet34-1200 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 6.712 2.394 0.01 0.041 $658 165 1 $658 165 175.7493
ssd-resnet34-1200 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 4.228 1.262 0.006 0.026 $658 165 1 $658 165 241.7838
ssd-resnet34-1200 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 6.666 2.393 0.01 0.04 $658 165 1 $658 165
ssd-resnet34-1200 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 4.871 2.935 0.008 0.039 $594 125 1 $594 125 239.8346
ssd-resnet34-1200 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 1.55 0.919 0.006 0.022 $249 71 1 $249 71 665.2714
ssd-resnet34-1200 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 15.706 4.572 0.005 0.075 $3,144 210 2 $1,572 105 132.0319
ssd-resnet34-1200 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 152.74 144.16 144.02 20.32 0.021 0.339 $7,166 450 2 $3,583 225 14.48
ssd-resnet34-1200 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 47.365 14.722 0.003 0.116 $16,954 410 2 $8,477 205 44.387
ssd-resnet34-1200 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 14.966 4.35 0.007 0.06 $2,004 250 2 $1,002 125 138.9625
ssd-resnet34-1200 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 3.556 1.015 0.008 0.127 $469 28 1 $469 28 284.2379
ssd-resnet34-1200 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 8.239 2.545 0.018 0.294 $469 28 1 $469 28 122.4561
ssd-resnet34-1200 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 3.565 1.01 0.008 0.127 $469 28 1 $469 28
ssd-resnet34-1200 OV-2022.3-8991 accel Intel® Flex-170 GPU 132.44 18.19 0.069 0.883 $1,925 150 1 $1,925 150 19.933
ssd-resnet34-1200 OV-2022.3-8991 accel Intel® Arc A40 Pro 35.75 24.28 1 50 33.83
ssd-resnet34-1200 OV-2022.3-8991 accel Intel® Arc A50 Pro 44.01 31.32 1 75 29.62
ssd-resnet34-1200 OV-2022.3-8991 accel Intel® Arc A750 136.84 107.27 0.570 0.608 $240 225 1 $240 225 18.81
ssd-resnet34-1200 OV-2022.3-8991 accel Intel® Arc A770 153.43 116.15 0.451 0.682 $340 225 1 $340 225 19.88
end_rec
begin_rec
unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 18.79 6.86 0.031 0.15 $599 125 1 $599 125 99.01
unet-camvid--0001 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 7.59 2.3 0.013 0.061 $599 125 1 $599 125 132.32
unet-camvid--0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 18.14 7 0.03 0.145 $599 125 1 $599 125
unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 12.91 4.36 0.039 0.103 $329 125 1 $329 125 95.92
unet-camvid--0001 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 7.13 2.16 0.022 0.057 $329 125 1 $329 125 140.88
unet-camvid--0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 16.63 5.72 0.051 0.133 $329 125 1 $329 125
unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 10.652 3.873 0.016 0.065 $658 165 1 $658 165 111.0757
unet-camvid--0001 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 7.059 2.154 0.011 0.043 $658 165 1 $658 165 142.0745
unet-camvid--0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 14.933 4.935 0.023 0.091 $658 165 1 $658 165
unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 7.413 4.615 0.012 0.059 $594 125 1 $594 125 157.3622
unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 2.386 1.481 0.01 0.034 $249 71 1 $249 71 422.1157
unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 29.251 7.301 0.009 0.139 $3,144 210 2 $1,572 105 69.3596
unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 381.85 151.97 151.98 30.96 0.053 0.849 $7,166 450 2 $3,583 225 7.95
unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 93.081 21.382 0.005 0.227 $16,954 410 2 $8,477 205 22.9476
unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 27.814 6.966 0.014 0.111 $2,004 250 2 $1,002 125 72.9773
unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 6.54 1.677 0.014 0.234 $469 28 1 $469 28 152.602
unet-camvid--0001 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 15.391 4.571 0.033 0.55 $469 28 1 $469 28 61.6002
unet-camvid--0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 17.962 4.848 0.038 0.642 $469 28 1 $469 28
unet-camvid--0001 OV-2022.3-8991 accel Intel® Flex-170 GPU 218.12 35.2 0.113 1.454 $1,925 150 1 $1,925 150 7.149
unet-camvid--0001 OV-2022.3-8991 accel Intel® Arc A40 Pro 51.45 33.45 1 50
unet-camvid--0001 OV-2022.3-8991 accel Intel® Arc A50 Pro 61.08 40.36 1 75
unet-camvid--0001 OV-2022.3-8991 accel Intel® Arc A750 212.93 151.71 0.887 0.946 $240 225 1 $240 225 6.27
unet-camvid--0001 OV-2022.3-8991 accel Intel® Arc A770 246.87 165.05 0.726 1.097 $340 225 1 $340 225 5.66
end_rec
begin_rec
yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 802.63 252.57 1.34 6.421 $599 125 1 $599 125 2.69
yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 249.5 86.81 0.417 1.996 $599 125 1 $599 125 4.79
yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 795.31 247.17 1.328 6.362 $599 125 1 $599 125
yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 638.25 206.62 1.94 5.106 $329 125 1 $329 125 2.59
yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 229.22 81.49 0.697 1.834 $329 125 1 $329 125 5.22
yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 631.71 205.81 1.92 5.054 $329 125 1 $329 125
yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 428.506 162.077 0.651 2.597 $658 165 1 $658 165 2.4778
yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 245.738 84.457 0.373 1.489 $658 165 1 $658 165 3.8792
yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 598.947 195.608 0.91 3.63 $658 165 1 $658 165
yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 359.61 173.635 0.605 2.877 $594 125 1 $594 125 2.9037
yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 109.066 64.87 0.438 1.536 $249 71 1 $249 71 9.3792
yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 1058.322 337.035 0.337 5.04 $3,144 210 2 $1,572 105 2.4971
yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 7344.88 5212.9 5236.28 1405.51 1.025 16.322 $7,166 450 2 $3,583 225 1.06
yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 2931.242 901.832 0.173 7.149 $16,954 410 2 $8,477 205 1.215
yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 1015.77 321.263 0.507 4.063 $2,004 250 2 $1,002 125 2.6076
yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 258.05 79.963 0.55 9.216 $469 28 1 $469 28 4.1833
yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 492.645 157.98 1.05 17.594 $469 28 1 $469 28 2.5788
yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 606.117 186.339 1.292 21.647 $469 28 1 $469 28
yolo_v3_tiny OV-2022.3-8991 accel Intel® Flex-170 GPU 3634.16 1209.67 1.888 24.228 $1,925 150 1 $1,925 150 1.293
yolo_v3_tiny OV-2022.3-8991 accel Intel® Arc A40 Pro 1 50
yolo_v3_tiny OV-2022.3-8991 accel Intel® Arc A50 Pro 1 75
yolo_v3_tiny OV-2022.3-8991 accel Intel® Arc A750 1557.21 1409.73 6.488 6.921 $240 225 1 $240 225 1.89
yolo_v3_tiny OV-2022.3-8991 accel Intel® Arc A770 1659.92 1516.83 4.882 7.377 $340 225 1 $340 225 1.83
end_rec
begin_rec
yolo_v4 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 37.15 13.03 0.062 0.297 $599 125 1 $599 125 55.96
yolo_v4 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 12.92 4.26 0.022 0.103 $599 125 1 $599 125 78.73
yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 37.16 13.54 0.062 0.297 $599 125 1 $599 125
yolo_v4 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 25.5 8.36 0.078 0.204 $329 125 1 $329 125 53.79
yolo_v4 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 12.15 4 0.037 0.097 $329 125 1 $329 125 83.64
yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 31.99 10.82 0.097 0.256 $329 125 1 $329 125
yolo_v4 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 21.833 7.096 0.033 0.132 $658 165 1 $658 165 58.4745
yolo_v4 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 11.956 3.869 0.018 0.072 $658 165 1 $658 165 85.1633
yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 26.693 8.644 0.041 0.162 $658 165 1 $658 165
yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 15.614 7.925 0.026 0.125 $594 125 1 $594 125 71.631
yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 4.674 2.804 0.019 0.066 $249 71 1 $249 71 214.0957
yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 47.338 14.464 0.015 0.225 $3,144 210 2 $1,572 105 45.7699
yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 252.03 228.55 228.67 58.12 0.035 0.56 $7,166 450 2 $3,583 225 15.01
yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 131.466 41.001 0.008 0.321 $16,954 410 2 $8,477 205 19.2807
yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 45.047 13.741 0.022 0.18 $2,004 250 2 $1,002 125 48.0344
yolo_v4 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 11.067 3.259 0.024 0.395 $469 28 1 $469 28 92.2912
yolo_v4 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 25.048 7.384 0.053 0.895 $469 28 1 $469 28 39.1492
yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 29.658 8.32 0.063 1.059 $469 28 1 $469 28
yolo_v4 OV-2022.3-8991 accel Intel® Flex-170 GPU 454.49 56.78 0.236 3.03 $1,925 150 1 $1,925 150 6.969
yolo_v4 OV-2022.3-8991 accel Intel® Arc A40 Pro 1 50
yolo_v4 OV-2022.3-8991 accel Intel® Arc A50 Pro 1 75
yolo_v4 OV-2022.3-8991 accel Intel® Arc A750 288.51 229.91 1.202 1.282 $240 225 1 $240 225
yolo_v4 OV-2022.3-8991 accel Intel® Arc A770 393.82 247.1 1.158 1.750 $340 225 1 $340 225
end_rec
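The per-platform "value" and "efficiency" columns in the records above are consistent with being derived from the INT8 throughput, total price, and total TDP of each row (for example, 219.06 FPS / $599 ≈ 0.366 and 219.06 FPS / 125 W ≈ 1.752 on the Inception-V4 / Core i9-13900K row). A minimal sketch of that relationship, assuming this inferred column layout (it is not documented in this diff, and the helper name is hypothetical):

```python
def derived_metrics(throughput_int8_fps: float, price_usd: float, tdp_w: float):
    """Return (FPS per dollar, FPS per watt) for one benchmark record.

    Hypothetical helper: the column layout is inferred from the data,
    not stated anywhere in this change.
    """
    return throughput_int8_fps / price_usd, throughput_int8_fps / tdp_w


# Spot-check against the Inception-V4 / Core i9-13900K row:
# throughput 219.06 FPS, $599 total price, 125 W total TDP
value, efficiency = derived_metrics(219.06, 599, 125)
print(round(value, 3), round(efficiency, 3))  # 0.366 1.752
```

The trailing latency column does not follow from any of these fields, so it is presumably measured separately rather than derived.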


@@ -1,45 +0,0 @@
:root {
--atomic-primary: rgb(var(--ost-color-navbar-background));
--atomic-primary-light: rgb(var(--ost-color-sst-dropdown-background-active));
--atomic-border-radius-md: 0.1rem;
--atomic-border-radius-lg: 0.2rem;
--atomic-border-radius-xl: 0.3rem;
}
::part(result-list-grid-clickable-container) {
border: 1px solid lightgray;
border-radius: var(--atomic-border-radius-md);
}
.view-selector-container {
grid-area: atomic-section-facets;
display: flex;
align-items: center;
column-gap: 0.5rem;
}
.view-selector-container .view-selector,
.view-selector-container .view-selector:hover,
.view-selector-container .view-selector:active,
.view-selector-container .view-selector:focus {
border: none;
background-color: none;
background: none;
outline: none;
padding: 4px 12px;
font-size: 14px;
display: flex;
grid-gap: 8px;
align-items: center;
justify-content: center;
}
.view-selector-container .view-selector i {
margin: 0;
}
.view-selector-container .view-selector.selected {
border-bottom: 2px solid rgb(var(--ost-color-navbar-background));
font-weight: 700;
color: rgb(var(--ost-color-navbar-background));
}


@@ -1,158 +1,108 @@
/* Misc */
/* misc */
/* =================================================== */
.switcher-set {
margin-bottom:1rem;
}
main img {
cursor: pointer;
}
.doxyrest-title-code-block {
margin-bottom: 0;
}
}
main .searchForm {
margin-bottom: 2rem;
margin-top: 2rem;
}
pre {
white-space: pre-wrap;
word-wrap: break-word;
}
/* cookie wap requirement */
a#wap_dns {display: none;}
/* Sphinx-design tabs override */
.sd-tab-set>input:checked+label {
border-color: var(--sd-color-tabs-underline-inactive);
color: var(--sd-color-info-text)!important;
background-color: rgb(0 104 181)!important;
}
.sd-tab-set>input:checked+label:hover {
color: var(--sd-color-info-text);
background-color: rgb(0,74,134)!important;
}
.sd-tab-set>input:not(:checked)+label:hover {
color: var(--sd-color-black)!important;
background-color: rgb(245, 245, 245)!important;
border-color: var(--sd-color-card-header)!important;
}
.sd-tab-set>label {
border-bottom: 0.125rem solid transparent;
margin-right: 10px!important;
margin-bottom: 8px;
color: var(--sd-color-black)!important;
border-color: var(--sd-color-tabs-underline-inactive);
cursor: pointer;
font-size: var(--sd-fontsize-tabs-label);
font-weight: 400!important;
padding: 5px 16px 2px!important;
transition: color 250ms;
width: auto;
z-index: 1;
}
.sd-tab-content {
box-shadow:none!important;
border-top: solid 2px var(--sd-color-tabs-overline)!important;
}
/* Navigation panels override */
/* navigation panels override */
/* =================================================== */
/* Hide home item in the top bar */
/* hide home item in the top bar */
ul#navbar-main-elements li:first-of-type {
display: none;
}
/* items on hover */
#bd-docs-nav div ul a:hover {
text-decoration: underline;
}
ul#navbar-main-elements > li:hover {
text-decoration: underline;
color: #fff;
}
/* First-level items in the side menu */
/* first-level items in the side menu */
#bd-docs-nav > div > ul > li {
padding-bottom: 15px;
}
#bd-docs-nav > div > ul > li > a {
color: #000000;
font-weight: bold;
}
/* Second level items */
/* second level items */
#bd-docs-nav > div > ul > li > ul {
padding-left: 0.3rem;
}
/* Overwrite menu chevron directions for open and closed states */
/* overwrite menu chevron directions for open and closed states */
.toctree-checkbox~label i {
transform: rotate(270deg);
}
.toctree-checkbox:checked~label i {
transform: rotate(0deg);
}
/* Doc version dropdown formatting override */
/* footer links */
/* =================================================== */
footer div.container div.footer-item p a {
float: left;
margin-right: 30px;
}
footer div.container div.footer-item p a:nth-child(1) {
margin-right: 50px;
}
footer div.container div.footer-item p:nth-child(2) {
clear: both;
}
/* doc version dropdown formatting override */
/* =================================================== */
[aria-labelledby="version-selector"] {
min-width: 125px!important;
overflow-x: hidden!important;
}
.sst-dropdown #version-selector {
min-width: 125px!important;
}
[aria-labelledby="version-selector"] .dropdown-item {
padding: 0.25rem 0.5rem!important;
}
/* Content in two columns */
/* =================================================== */
.row-two-col-content {
display: flex;
}
.column-two-col-content {
flex: 50%;
padding-right: 10px!important;
}
/* Code reference text formatting override */
/* code reference text formatting override */
/* =================================================== */
code {
color: black !important;
font-weight: bold;
}
/* Table Sort Button */
/* =================================================== */
.sort-header {
cursor: pointer;
}
.sort-btn {
content: "";
background-image:url('media/arrow-small-opposite-v.svg');
@@ -165,29 +115,23 @@ code {
position:relative;
top:0.5rem;
}
.sort-btn.sort-active.ascending,
.sort-btn.sort-active {
background-size: 100% 70%;
}
.sort-btn.sort-active.ascending {
background-image: url('media/union-down.svg');
}
.sort-btn.sort-active {
background-image: url('media/union-up.svg');
}
div.highlight {
margin-bottom: 1.15rem;
}
.highlight .err {
border:none;
color:inherit;
}
.opt-notice-wrapper {
position: fixed;
bottom:0;
@@ -197,7 +141,6 @@ div.highlight {
padding: 1rem;
z-index: 1000;
}
.opt-notice {
margin-bottom: 0;
position: absolute;
@@ -208,7 +151,6 @@ div.highlight {
color: #fff;
}
/* Transition banner */
/* =================================================== */
.transition-banner {
@@ -248,13 +190,11 @@ div.highlight {
text-shadow: 0 1px 0 #fff;
opacity: .5;
}
.hidden-banner {
display: none!important;
}
/* Responsiveness */
/* responsiveness */
/* =================================================== */
@media (max-width: 720px) {
.transition-banner {
@@ -277,43 +217,19 @@ div.highlight {
/* =================================================== */
.configure-graphs-header {
padding-left: 16px;
display: flex;
justify-content: space-between;
}
.configure-graphs-header h3 {
float: left;
}
.configure-graphs-content {
overflow: auto;
}
.header-inactive {
color: lightgray;
}
.configure-graphs-btn {
padding: 4px 20px;
background-color: #0054AE;
border-color: #0054AE;
color: #fefefe;
}
.chart-wrap {
display: grid;
grid-template-columns: minmax(0, 1fr) 4fr;
padding-left: 15px;
padding-right: 15px;
}
.graph-item {
display: flex;
flex-direction: column;
flex: 1;
min-width: 0;
}
.graph-chart-title-header {
font-size: 1.4rem;
line-height: 2rem;
@@ -321,7 +237,6 @@ div.highlight {
padding: 12px 0;
margin: 0;
}
.empty-chart-container {
height: 80px;
line-height: 80px;
@@ -330,82 +245,63 @@ div.highlight {
background-color: #f3f3f3;
border-radius: 5px;
}
.graph-chart-title {
vertical-align: middle;
padding: 12px 0;
}
.chart-graphs-container {
.chart-column-header-container {
padding-top: 8px;
display: flex;
flex-direction: row;
width: 100%;
min-width: 0;
}
.chart-column-title {
min-width: 20%;
flex-grow: 0 1;
white-space: nowrap;
display: flex;
flex-direction: row;
align-items: flex-start;
}
.chart-column-title .icon {
margin-top: 6px;
margin-right: 8px;
flex-grow: 0;
float: left;
}
.chart-column-title .chart-header {
flex-grow: 1;
float: left;
}
.chart-column-title .title {
font-size: 1rem;
font-weight: 400;
}
.chart-column-title .subtitle {
font-size: .8rem;
color: gray;
}
.chart-labels-container {
padding-top: 8px;
width: 18%;
}
.chart-labels-item {
width: 100%;
}
.chart-labels-item .title {
.chart-labels-container .title {
text-align: right;
text-overflow: ellipsis;
overflow: hidden;
white-space: nowrap;
display: block;
font-size: .8rem;
line-height: 55px;
height: 55px;
line-height: 3.42rem;
color: gray;
}
.chevron-right-btn {
content: url('media/chevron-right.svg');
vertical-align: middle;
padding-left: 8px;
}
.chevron-down-btn {
content: url('media/chevron-down.svg');
vertical-align: middle;
padding-left: 8px;
}
.chart {
height: 500px;
padding:0;
@@ -424,7 +320,7 @@ div.highlight {
.build-benchmark-section .title {
flex-grow: 1;
}
}
.build-benchmark-section h3 {
margin-top: 1rem;
@@ -461,21 +357,63 @@ div.highlight {
.efficiency-icon {
content: url('media/icon-efficiency.svg');
}
.latency-icon {
content: url('media/icon-latency.svg');
}
.throughput-icon {
content: url('media/icon-throughput.svg');
}
.value-icon {
content: url('media/icon-value.svg');
}
/* Modal */
/* The Close Button */
.modal-close {
color: #aaaaaa;
float: right;
font-size: 28px;
line-height: 24px;
padding-right: 4px;
}
.modal-close:hover,
.modal-close:focus {
color: #000;
text-decoration: none;
cursor: pointer;
}
.clear-all-btn {
float: right;
cursor: pointer;
line-height: 4rem;
}
.clear-all-btn-content {
border: 1.5px solid black;
padding: 6px 10px;
}
.edit-settings-btn {
float: right;
color: #0054AE;
font-size: 1.05rem;
cursor: pointer;
line-height: 4rem;
display: none;
}
.edit-settings-text {
vertical-align: middle;
}
.edit-settings-icon {
vertical-align: middle;
content: url('media/edit-settings.svg');
}
.modal {
display: block;
position: fixed;
@@ -488,6 +426,10 @@ div.highlight {
background-color: rgba(0, 0, 0, 0.4);
}
.modal .models-column-one label {
word-break: break-word;
}
.modal-content {
overflow: auto;
background-color: #fefefe;
@@ -495,8 +437,7 @@ div.highlight {
padding: 36px;
border: 1px solid #888;
width: 95%;
max-width: 1140px;
max-height: 85%;
max-height: 90%;
}
.modal-content h2 {
@@ -513,32 +454,21 @@ div.highlight {
padding-bottom: 1px;
}
.modal-header {
display: flex;
justify-content: space-between;
border: 0;
padding: 0;
}
.modal-configure-graphs,
.modal-display-graphs {
display: flex;
flex-direction: column;
min-height: 0;
.modal-header-grid-container {
display: grid;
padding: 12px 64px 2px 16px;
grid-template-columns: 40% 20% 20% 10% 10%;
column-gap: 16px;
}
.modal-content-grid-container {
display: grid;
padding: 0.75rem 4rem 0.125rem 1rem;
grid-template-columns: repeat(3, 1fr);
grid-template-rows: auto;
gap: 2rem 1rem;
}
.modal-content-grid {
display: grid;
padding-top: .5rem;
grid-template-columns: 1fr;
padding-left: 24px;
padding-right: 64px;
padding-top: 8px;
grid-template-columns: 20% 20% 20% 20% 10% 10%;
column-gap: 12px;
font-size: .78rem;
}
.modal-content-grid-container .column {
@@ -558,11 +488,29 @@ div.highlight {
margin-left: -14px;
}
.modal-content-grid-item h5 {
.modal-header-grid-item h5 {
font-weight: 530;
margin: 0;
}
.modal-grid-item h5 {
margin: 0;
}
.modal-build-graphs-btn {
margin-bottom: 10px;
margin-right: 3px;
padding: 4px 16px;
float: right;
border-color: #0054AE;
background-color: #0054AE;
color: #fff;
}
.modal-build-graphs-btn:disabled {
border-color: #8C8C8C;
background-color: lightgray;
}
.modal-footer {
display: none;
padding: 0;
@@ -573,31 +521,12 @@ div.highlight {
left: 0;
}
.modal-footer-content {
display: flex;
justify-content: space-between;
}
.modal-disclaimer-box {
padding-right: 0.5rem;
}
.modal-disclaimer-box p {
color: #00000098;
font-size: 0.8rem;
margin-bottom: 0rem;
}
.benchmark-graph-results-header {
display: flex;
justify-content: space-between;
align-items: center;
padding-left: 16px;
}
.graph-row {
display: flex;
flex-direction: column;
padding-top: 10px;
padding-bottom: 20px;
}
@@ -607,119 +536,7 @@ div.highlight {
}
.graph-row-column {
width: 100%;
}
.graph-legend-container {
display: flex;
flex-direction: column;
}
@media screen and (max-width:768px) {
.modal-content-grid-container {
grid-template-columns: repeat(2, 1fr);
grid-template-rows: auto;
padding-right: 1rem;
}
}
@media screen and (max-width: 530px) {
.modal-content {
width: 100vw;
height: 100vh;
max-height: 100%;
}
.buttons-nav {
margin-top: 0.125rem;
margin-bottom: 0.125rem;
flex-direction: column;
gap: .5rem;
}
.clear-all-btn {
padding: 0;
}
.modal-content-grid-container {
grid-template-columns: 1fr;
grid-template-rows: auto;
padding-right: 1rem;
}
}
@media screen and (min-width: 530px) {
.modal-content-grid--cols-2 {
display: grid;
padding-top: .5rem;
grid-template-columns: 1fr 1fr;
column-gap: 1rem;
}
.span-element-big {
grid-column: 1 / span 2;
}
}
/* Modal buttons */
.modal-close {
color: #aaaaaa;
float: right;
font-size: 28px;
line-height: 24px;
padding-right: 4px;
}
.modal-close:hover,
.modal-close:focus {
color: #000;
text-decoration: none;
cursor: pointer;
}
.buttons-nav {
display: flex;
justify-content: center;
align-items: center;
gap: 1rem;
}
.build-graphs-btn {
border-color: #0054AE;
background-color: #0054AE;
color: #fff;
}
.build-graphs-btn:disabled {
border-color: #8C8C8C;
background-color: lightgray;
}
.clear-all-btn {
cursor: pointer;
}
.clear-all-btn-content {
border: 1.5px solid black;
padding: 6px 10px;
}
.edit-settings-btn {
color: #0054AE;
font-size: 1.05rem;
cursor: pointer;
line-height: 4rem;
}
.edit-settings-text {
vertical-align: middle;
}
.edit-settings-icon {
vertical-align: middle;
content: url('media/edit-settings.svg');
width: 20%;
}
.close-btn {
@@ -728,11 +545,10 @@ div.highlight {
background-color: #0054AE;
color: #fefefe;
float: right;
align-self: flex-start;
}
/* Content formatting for the benchmark pages */
/* content formatting for the benchmark pages */
.picker-options {
margin: 15px 0;
}
@@ -823,7 +639,7 @@ div.highlight {
/* Create a custom checkbox */
.checkmark {
position: absolute;
top: 5px;
top: 2px;
left: 0;
height: 15px;
width: 15px;
@@ -844,11 +660,6 @@ div.highlight {
background-color: #0054AE;
}
.checkmark-container input:disabled ~ .checkmark {
background: #d3d3d3;
border: 2px solid #8C8C8C;
}
/* Create the checkmark/indicator (hidden when not checked) */
.checkmark:after {
content: "";
@@ -931,190 +742,4 @@ table#model-accuracy-and-perf-int8-fp32-table td.data {
#performance-information-frequently-asked-questions section table {
display: none;
padding-left: 30px;
}
/* Newsletter */
/* =================================================== */
#newsletterModal {
position: fixed;
z-index: 5000;
width: 100%;
height: 100%;
bottom: 0;
left: 0;
display: flex;
justify-content: center;
align-items: center;
background: rgba(255, 255, 255, .7);
}
.newsletter-shadow {
/* background: white;
box-shadow: 0 0 40px 40px rgba(255,255,255,1); */
padding: 10px;
max-width: 600px;
width: 90%;
box-sizing: border-box;
}
.newsletter-box {
max-width: 530px;
padding: 10px;
margin: auto;
}
.newsletter {
background: rgba(0, 104, 181, 1);
box-shadow: 0 0 20px 10px #a9a9a9c0;
width: 100%;
padding: 10px;
}
.newsletter-heading {
color: white;
margin: 0 0 1rem;
}
.newsletter-text {
color: white;
}
.form-group {
position: relative;
}
.newsletter-input {
box-sizing: border-box;
border: 1px solid white;
width: 100%;
transition: .4s;
line-height: 1.65rem;
height: 30px;
}
.newsletter-input:focus {
outline: 0;
box-shadow: 0 0 5px 2px white;
}
.newsletter-input.failed:focus {
outline: 0;
box-shadow: 0 0 5px 2px #a8a8a8;
}
.newsletter-submit-btn,
.newsletter-submit-btn:focus {
background: #cdedff;
color: rgba(0, 104, 181, 1);
border: 0;
position: absolute;
top: 1.5px;
right: 1.5px;
padding: 0 .8rem;
transition: .4s;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
max-width: 31%;
outline: none;
}
.newsletter-submit-btn:hover,
.newsletter-submit-btn:active {
background: #00A3F6;
color: white;
outline: none;
}
.newsletter-submit-btn:disabled {
background: #a8a8a8;
color: white;
}
.newsletter-submit-btn:before {
font-family: "Font Awesome 5 Free";
content: "\f0e0\00a0";
font-size: 1rem;
}
.newsletter-footer-text {
color: #76CEFF;
font-size: 0.7rem;
}
.newsletter-footer-text a {
color: #B4F0FF;
}
.message-box {
justify-content: center;
align-items: center;
font-size: 1.2rem;
text-align: center;
display: none;
color: white;
}
.newsletter-icon {
margin-left: -31px;
}
.newsletter-icon-background {
color: white;
top: 20px;
font-size: .9em;
}
.newsletter-submit--success {
color: #B1D272;
}
.newsletter-submit--failure {
color: #C81326;
}
.animated {
opacity: 0;
}
.fade-up {
animation: fade-up-anim .2s forwards;
}
.fade-in {
animation: fade-in-anim .2s forwards;
}
.animation-delay {
animation-delay: .3s;
}
.animation-delay--long {
animation-delay: .5s;
}
@keyframes fade-up-anim {
from {
opacity: 0;
transform: translateY(20px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
@keyframes fade-in-anim {
from {
opacity: 0;
}
to {
opacity: 1;
}
}
input:-webkit-autofill {
-webkit-box-shadow: 0 0 0px 1000px white inset;
}
}


@@ -5,14 +5,15 @@
display: none;
}
main img {
cursor: pointer;
img {
cursor: default;
}
/* === OPENVINO INTRO ================================================= */
.openvino-intro-text {
font-size: 1em;
}
/* === OPENVINO DIAGRAM ================================================= */


@@ -1,77 +1,64 @@
<div>
<div class="modal-header">
<h2>Benchmark Graph Builder</h2>
<div style="width: 100%">
<span style="float: left"><h2>Benchmark Graph Builder</h2></span>
<span class="modal-close">&times;</span>
</div>
<div class="modal-line-divider"></div>
<section id="modal-configure-graphs" class="modal-configure-graphs">
<div id="configure-graphs-header" class="configure-graphs-header">
<h3>Configure Graphs</h3>
<div class="buttons-nav">
<div>
<button id="build-graphs-btn" disabled="disabled" class="build-graphs-btn">Build Graphs</button>
</div>
<div class="clear-all-btn">
<span class="clear-all-btn-content">Clear All</span>
</div>
<div id="configure-graphs-header" class="configure-graphs-header">
<div class="edit-settings-btn">
<div class="edit-settings-text">Edit Settings
<span class="edit-settings-icon"></span>
</div>
</div>
<div class="configure-graphs-content">
<div class="modal-content-grid-container">
<div class="modal-content-grid-item span-element-big">
<h5>Models</h5>
<div class="modal-line-divider"></div>
<div class="modal-content-grid modal-content-grid--cols-2">
<div class="models-column-one column"></div>
<div class="models-column-two column"></div>
</div>
</div>
<div class="modal-content-grid-item">
<h5>Platform Type</h5>
<div class="modal-line-divider"></div>
<div class="modal-content-grid">
<div class="ietype-column column"></div>
</div>
</div>
<div class="modal-content-grid-item">
<h5>Platforms</h5>
<div class="modal-line-divider"></div>
<div class="modal-content-grid">
<div class="client-platform-column column"></div>
</div>
</div>
<div class="modal-content-grid-item">
<h5>Parameters</h5>
<div class="modal-line-divider"></div>
<div class="modal-content-grid">
<div class="kpi-column column"></div>
</div>
</div>
<div class="modal-content-grid-item">
<h5>Precision</h5>
<div class="modal-line-divider"></div>
<div class="modal-content-grid">
<div class="precisions-column column"></div>
</div>
</div>
<span class="clear-all-btn">
<span class="clear-all-btn-content">Clear All</span>
</span>
<h3>STEP 1: Configure Graphs</h3>
</div>
<div class="configure-graphs-content">
<div class="modal-header-grid-container">
<div class="modal-header-grid-item">
<h5>Models</h5>
<div class="modal-line-divider"></div>
</div>
<div class="modal-header-grid-item">
<h5>Platform Type</h5>
<div class="modal-line-divider"></div>
</div>
<div class="modal-header-grid-item">
<h5>Platforms</h5>
<div class="modal-line-divider"></div>
</div>
<div class="modal-header-grid-item">
<h5>Parameters</h5>
<div class="modal-line-divider"></div>
</div>
<div class="modal-header-grid-item">
<h5>Precision</h5>
<div class="modal-line-divider"></div>
</div>
</div>
</section>
<section id="modal-display-graphs" class="modal-display-graphs">
<div class="benchmark-graph-results-header">
<h3>Graph Results</h3>
<div class="edit-settings-btn">
<div class="edit-settings-text">Edit Settings
<span class="edit-settings-icon"></span>
</div>
<div class="modal-content-grid-container">
<div class="models-column-one column"></div>
<div class="models-column-two column"></div>
<div class="ietype-column column"></div>
<div class="client-platform-column column">
</div>
<div class="kpi-column column"></div>
<div class="precisions-column column">
</div>
</div>
<div class="chart-placeholder"></div>
</section>
<div class="modal-footer">
<div class="modal-line-divider"></div>
<div class="modal-footer-content">
<div class="modal-disclaimer-box"></div>
<div class="modal-content-footer">
<button id="modal-build-graphs-btn" disabled="disabled" class="modal-build-graphs-btn">Build Graphs</button>
</div>
</div>
<div class="modal-line-divider"></div>
<div class="benchmark-graph-results-header">
<h3 class="header-inactive">STEP 2: Benchmark Graph Results</h3>
</div>
<div class="chart-placeholder"></div>
<div class="modal-line-divider"></div>
<div class="modal-footer">
<button class="close-btn">Close</button>
</div>
</div>


@@ -1,290 +0,0 @@
<div class="newsletter-shadow animated fade-in">
<div class="newsletter-box">
<div class="newsletter">
<span class="modal-close">&times;</span>
<div class="newsletter-header">
<h3 class="newsletter-heading">Newsletter</h3>
<p class="newsletter-text">Be among the first to learn about everything new with the Intel® Distribution of OpenVINO™ toolkit.</p>
</div>
<form id="newsletterForm" class="animated fade-up animation-delay">
<input type="hidden" name="newsletter-elqSiteID" value="334284386">
<input type="hidden" name="newsletter-elqFormName" value="C-MKA-30146_T-MKA-36922">
<input type="hidden" name="newsletter-optinConsent" value="Yes">
<input type="hidden" name="newsletter-sourceid" value="iotg_WW_iotgaiie_FMOI_EN_2023_OVDocsShadow_C-MKA-30146_T-MKA-36922">
<input type="hidden" name="newsletter-tacticID" value="MKA-36922">
<input type="hidden" name="newsletter-interestArea" value="IoT">
<input type="hidden" name="newsletter-useCase" value="OpenVINO toolkit">
<input type="hidden" name="newsletter-mediaSource" value="NA">
<div class="form-group">
<select id="newsletterCountry" name="newsletter-country" class="newsletter-input">
<option value="Afghanistan">Afghanistan </option>
<option value="Aland Islands">Aland Islands </option>
<option value="Albania">Albania </option>
<option value="Algeria">Algeria </option>
<option value="American Samoa">American Samoa </option>
<option value="Andorra">Andorra </option>
<option value="Angola">Angola </option>
<option value="Anguilla">Anguilla </option>
<option value="Antarctica">Antarctica </option>
<option value="Antigua/Barbuda">Antigua/Barbuda </option>
<option value="Argentina">Argentina </option>
<option value="Armenia">Armenia </option>
<option value="Aruba">Aruba </option>
<option value="Australia">Australia </option>
<option value="Austria">Austria </option>
<option value="Azerbaijan">Azerbaijan </option>
<option value="Bahamas">Bahamas </option>
<option value="Bahrain">Bahrain </option>
<option value="Bangladesh">Bangladesh </option>
<option value="Barbados">Barbados </option>
<option value="Belarus">Belarus </option>
<option value="Belgium">Belgium </option>
<option value="Belize">Belize </option>
<option value="Benin">Benin </option>
<option value="Bermuda">Bermuda </option>
<option value="Bhutan">Bhutan </option>
<option value="Bolivia">Bolivia </option>
<option value="Bonaire">Bonaire </option>
<option value="Bosnia-Herz.">Bosnia-Herz. </option>
<option value="Botswana">Botswana </option>
<option value="Bouvet Islands">Bouvet Islands </option>
<option value="Brazil">Brazil </option>
<option value="Brit.Ind.Oc.Ter">Brit.Ind.Oc.Ter </option>
<option value="Brit.Virgin Is.">Brit.Virgin Is. </option>
<option value="Brunei">Brunei </option>
<option value="Bulgaria">Bulgaria </option>
<option value="Burkina Faso">Burkina Faso </option>
<option value="Burundi">Burundi </option>
<option value="C Africa Rpblic">C Africa Rpblic </option>
<option value="Cambodia">Cambodia </option>
<option value="Cameroon">Cameroon </option>
<option value="Canada">Canada </option>
<option value="Cape Verde">Cape Verde </option>
<option value="Cayman Islands">Cayman Islands </option>
<option value="Chad">Chad </option>
<option value="Chile">Chile </option>
<option value="China">China </option>
<option value="Christmas Islnd">Christmas Islnd </option>
<option value="Cocos Islands">Cocos Islands </option>
<option value="Colombia">Colombia </option>
<option value="Comoros">Comoros </option>
<option value="Congo">Congo </option>
<option value="Cooks Islands">Cooks Islands </option>
<option value="Costa Rica">Costa Rica </option>
<option value="Cote d'Ivoire">Cote d'Ivoire </option>
<option value="Croatia">Croatia </option>
<option value="Cuba">Cuba </option>
<option value="Curacao">Curacao </option>
<option value="Cyprus">Cyprus </option>
<option value="Czechia">Czechia </option>
<option value="Dem. Rep. Congo">Dem. Rep. Congo </option>
<option value="Denmark">Denmark </option>
<option value="Djibouti">Djibouti </option>
<option value="Dominica">Dominica </option>
<option value="Dominican Rep.">Dominican Rep. </option>
<option value="Ecuador">Ecuador </option>
<option value="Egypt">Egypt </option>
<option value="El Salvador">El Salvador </option>
<option value="Equatorial Guin">Equatorial Guin </option>
<option value="Eritrea">Eritrea </option>
<option value="Estonia">Estonia </option>
<option value="Eswatini">Eswatini </option>
<option value="Ethiopia">Ethiopia </option>
<option value="Falkland Islnds">Falkland Islnds </option>
<option value="Faroe Islands">Faroe Islands </option>
<option value="Fiji">Fiji </option>
<option value="Finland">Finland </option>
<option value="France">France </option>
<option value="French Guiana">French Guiana </option>
<option value="French Poly.">French Poly. </option>
<option value="French S. Terr.">French S. Terr. </option>
<option value="Gabon">Gabon </option>
<option value="Gambia">Gambia </option>
<option value="Georgia">Georgia </option>
<option value="Germany">Germany </option>
<option value="Ghana">Ghana </option>
<option value="Gibraltar">Gibraltar </option>
<option value="Greece">Greece </option>
<option value="Greenland">Greenland </option>
<option value="Grenada">Grenada </option>
<option value="Guadeloupe">Guadeloupe </option>
<option value="Guam">Guam </option>
<option value="Guatemala">Guatemala </option>
<option value="Guernsey">Guernsey </option>
<option value="Guinea">Guinea </option>
<option value="Guinea-Bissau">Guinea-Bissau </option>
<option value="Guyana">Guyana </option>
<option value="Haiti">Haiti </option>
<option value="Heard/McDon.Isl">Heard/McDon.Isl </option>
<option value="Honduras">Honduras </option>
<option value="Hong Kong">Hong Kong </option>
<option value="Hungary">Hungary </option>
<option value="Iceland">Iceland </option>
<option value="India">India </option>
<option value="Indonesia">Indonesia </option>
<option value="Iran">Iran </option>
<option value="Iraq">Iraq </option>
<option value="Ireland">Ireland </option>
<option value="Isle of Man">Isle of Man </option>
<option value="Israel">Israel </option>
<option value="Italy">Italy </option>
<option value="Jamaica">Jamaica </option>
<option value="Japan">Japan </option>
<option value="Jersey">Jersey </option>
<option value="Jordan">Jordan </option>
<option value="Kazakhstan">Kazakhstan </option>
<option value="Kenya">Kenya </option>
<option value="Kiribati">Kiribati </option>
<option value="Kuwait">Kuwait </option>
<option value="Kyrgyzstan">Kyrgyzstan </option>
<option value="Laos">Laos </option>
<option value="Latvia">Latvia </option>
<option value="Lebanon">Lebanon </option>
<option value="Lesotho">Lesotho </option>
<option value="Liberia">Liberia </option>
<option value="Libya">Libya </option>
<option value="Liechtenstein">Liechtenstein </option>
<option value="Lithuania">Lithuania </option>
<option value="Luxembourg">Luxembourg </option>
<option value="Macao SAR China">Macao SAR China </option>
<option value="Macedonia">Macedonia </option>
<option value="Madagascar">Madagascar </option>
<option value="Malawi">Malawi </option>
<option value="Malaysia">Malaysia </option>
<option value="Maldives">Maldives </option>
<option value="Mali">Mali </option>
<option value="Malta">Malta </option>
<option value="Marshall Islnds">Marshall Islnds </option>
<option value="Martinique">Martinique </option>
<option value="Mauritania">Mauritania </option>
<option value="Mauritius">Mauritius </option>
<option value="Mayotte">Mayotte </option>
<option value="Mexico">Mexico </option>
<option value="Micronesia">Micronesia </option>
<option value="Minor Outl.Isl.">Minor Outl.Isl. </option>
<option value="Moldova">Moldova </option>
<option value="Monaco">Monaco </option>
<option value="Mongolia">Mongolia </option>
<option value="Montenegro">Montenegro </option>
<option value="Montserrat">Montserrat </option>
<option value="Morocco">Morocco </option>
<option value="Mozambique">Mozambique </option>
<option value="Myanmar">Myanmar </option>
<option value="N.Mariana Islnd">N.Mariana Islnd </option>
<option value="Namibia">Namibia </option>
<option value="Nauru">Nauru </option>
<option value="Nepal">Nepal </option>
<option value="Netherlands">Netherlands </option>
<option value="New Caledonia">New Caledonia </option>
<option value="New Zealand">New Zealand </option>
<option value="Nicaragua">Nicaragua </option>
<option value="Niger">Niger </option>
<option value="Nigeria">Nigeria </option>
<option value="Niue">Niue </option>
<option value="Norfolk Islands">Norfolk Islands </option>
<option value="North Korea">North Korea </option>
<option value="Norway">Norway </option>
<option value="Oman">Oman </option>
<option value="Pakistan">Pakistan </option>
<option value="Palau">Palau </option>
<option value="Palestine, State">Palestine, State </option>
<option value="Panama">Panama </option>
<option value="Pap. New Guinea">Pap. New Guinea </option>
<option value="Paraguay">Paraguay </option>
<option value="Peru">Peru </option>
<option value="Philippines">Philippines </option>
<option value="Pitcairn">Pitcairn </option>
<option value="Poland">Poland </option>
<option value="Portugal">Portugal </option>
<option value="Puerto Rico">Puerto Rico </option>
<option value="Qatar">Qatar </option>
<option value="Reunion">Reunion </option>
<option value="Romania">Romania </option>
<option value="Russian Fed">Russian Fed </option>
<option value="Rwanda">Rwanda </option>
<option value="S. Sandwich Ins">S. Sandwich Ins </option>
<option value="S.Tome,Principe">S.Tome,Principe </option>
<option value="Saint Helena">Saint Helena </option>
<option value="Saint Lucia">Saint Lucia </option>
<option value="Saint Martin">Saint Martin </option>
<option value="Saint Pierre">Saint Pierre </option>
<option value="Samoa">Samoa </option>
<option value="San Marino">San Marino </option>
<option value="Saudi Arabia">Saudi Arabia </option>
<option value="Senegal">Senegal </option>
<option value="Serbia">Serbia </option>
<option value="Seychelles">Seychelles </option>
<option value="Sierra Leone">Sierra Leone </option>
<option value="Singapore">Singapore </option>
<option value="Sint Maarten">Sint Maarten </option>
<option value="Slovakia">Slovakia </option>
<option value="Slovenia">Slovenia </option>
<option value="Solomon Islands">Solomon Islands </option>
<option value="Somalia">Somalia </option>
<option value="South Africa">South Africa </option>
<option value="South Korea">South Korea </option>
<option value="South Sudan">South Sudan </option>
<option value="Spain">Spain </option>
<option value="Sri Lanka">Sri Lanka </option>
<option value="St Kitts&Nevis">St Kitts&Nevis </option>
<option value="St. Barthelemy">St. Barthelemy </option>
<option value="St. Vincent">St. Vincent </option>
<option value="Sudan">Sudan </option>
<option value="Suriname">Suriname </option>
<option value="Svalbard & JM">Svalbard & JM </option>
<option value="Sweden">Sweden </option>
<option value="Switzerland">Switzerland </option>
<option value="Syria">Syria </option>
<option value="Taiwan">Taiwan </option>
<option value="Tajikistan">Tajikistan </option>
<option value="Tanzania">Tanzania </option>
<option value="Thailand">Thailand </option>
<option value="Timor-Leste">Timor-Leste </option>
<option value="Togo">Togo </option>
<option value="Tokelau">Tokelau </option>
<option value="Tonga">Tonga </option>
<option value="Trinidad,Tobago">Trinidad,Tobago </option>
<option value="Tunisia">Tunisia </option>
<option value="Turkey">Turkey </option>
<option value="Turkmenistan">Turkmenistan </option>
<option value="Turksh Caicosin">Turksh Caicosin </option>
<option value="Tuvalu">Tuvalu </option>
<option value="Uganda">Uganda </option>
<option value="Ukraine">Ukraine </option>
<option value="United Kingdom">United Kingdom </option>
<option value="United States" selected>United States </option>
<option value="Uruguay">Uruguay </option>
<option value="Utd.Arab Emir.">Utd.Arab Emir. </option>
<option value="Uzbekistan">Uzbekistan </option>
<option value="Vanuatu">Vanuatu </option>
<option value="Vatican City">Vatican City </option>
<option value="Venezuela">Venezuela </option>
<option value="Vietnam">Vietnam </option>
<option value="Virgin Islands">Virgin Islands </option>
<option value="Wallis & Futuna">Wallis & Futuna </option>
<option value="Western Sahara">Western Sahara </option>
<option value="Yemen">Yemen </option>
<option value="Zambia">Zambia </option>
<option value="Zimbabwe">Zimbabwe </option>
</select>
</div>
<div class="form-group">
<input type="text" class="newsletter-input" name="newsletter-emailAddress" id="newsletterEmail" placeholder="Enter your email" required>
<button class="newsletter-submit-btn" type="submit">SUBMIT </button>
</div>
</form>
<div class="message-box" id="loader">
<svg version="1.1" id="loader" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 100 100" enable-background="new 0 0 0 0" xml:space="preserve" style="width: 100px; height: 100px">
<path fill="#a8a8a8" fill-opacity="0.4" d="M73,50c0-12.7-10.3-23-23-23S27,37.3,27,50 M30.9,50c0-10.5,8.5-19.1,19.1-19.1S69.1,39.5,69.1,50">
<animateTransform attributeName="transform" attributeType="XML" type="rotate" dur="1s" from="0 50 50" to="360 50 50" repeatCount="indefinite" />
</path>
</svg>
</div>
<div class="message-box animated fade-up" id="message"></div>
<div class="newsletter-footer animated fade-in animation-delay--long">
<p class="newsletter-footer-text">By submitting this form, you are confirming you are an adult 18 years or older and you agree to share your personal information with Intel to use for this business request. You also agree to subscribe to stay connected to the latest Intel technologies and industry trends by email and telephone. You may unsubscribe at any time. Intel's web sites and communications are subject to our <a href="https://intel.com/content/www/us/en/privacy/intel-privacy-notice.html" target="_blank"> Privacy Notice </a> and <a href="https://intel.com/content/www/us/en/legal/terms-of-use.html" target="_blank"> Terms of Use.</a></p>
</div>
</div>
</div>
</div>


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e91bdf5dc737b1f56e6920eb9f6cbecca9b3186dc7aa049827ced5f12bd598c
size 102296


@@ -34,10 +34,8 @@ function addLegalNotice() {
}
$(document).ready(function () {
addFooter();
createVersions();
updateTitleTag();
updateLanguageSelector();
init_col_sections();
init_switchers();
handleSwitcherParam();
@@ -55,12 +53,12 @@ $(document).ready(function () {
// Determine where we'd go if clicking on a version selector option
function getPageUrlWithVersion(version) {
const currentUrl = window.location.href;
const pattern = new RegExp('(?:http|https)\:\/\/.*?\/');
const newUrl = currentUrl.match(pattern) + version + '/index.html';
return encodeURI(newUrl);
var currentURL = window.location.href;
var newURL = currentURL.replace(getCurrentVersion(), version);
return encodeURI(newURL);
}
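The hunk above replaces the regex-based rewrite (which always redirected to `index.html`) with an in-place version swap that keeps the reader on the same page. A minimal sketch of the new behavior, with the current version passed in explicitly rather than obtained from the `getCurrentVersion()` helper, and an illustrative URL:

```javascript
// Swap the version segment of the docs URL in place, preserving the page path.
// `currentVersion` is an explicit parameter here for the sake of the sketch;
// the real code derives it from getCurrentVersion().
function pageUrlWithVersion(currentUrl, currentVersion, version) {
  return encodeURI(currentUrl.replace(currentVersion, version));
}
```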
function createSphinxTabSets() {
var sphinxTabSets = $('.sphinxtabset');
var tabSetCount = 1000;
@@ -133,15 +131,9 @@ function createVersions() {
})
var downloadBtn = $('#download-zip-btn');
downloadBtn.attr('href', '/archives/' + currentVersion + '.zip')
}
function updateLanguageSelector() {
const currentVersion = getCurrentVersion();
$('[aria-labelledby="language-selector"]').find('a').each(function(){
const newUrl = $(this).attr('href').replace('latest', currentVersion);
$(this).attr('href', newUrl);
});
}
function addTableSort() {
var tables = $('table.table');
@@ -326,68 +318,4 @@ function initBenchmarkPickers() {
$('#performance-information-frequently-asked-questions section').find('h2').removeClass('expanded');
$('#performance-information-frequently-asked-questions section p, #performance-information-frequently-asked-questions section table').hide();
}
}
function addFooter() {
const footerAnchor = $('.footer');
fetch('/footer.html').then((response) => response.text()).then((text) => {
const footerContent = $(text);
footerAnchor.append(footerContent);
});
}
// ---------- COVEO SEARCH -----------
function selectResultViewType(type, gridButton, listButton) {
type === "grid" ? gridButton.click() : listButton.click();
}
function addViewTypeListeners() {
const resultViewTypeFromLs = window.localStorage.getItem('atomicResultViewType');
let list = document.getElementById("atomic-result-list");
var viewSelectorGrid = document.getElementById("view-selector-grid");
viewSelectorGrid.addEventListener('click', function () {
list.display = "grid";
window.localStorage.setItem('atomicResultViewType', "grid");
viewSelectorGrid.classList.add('selected');
viewSelectorList.classList.remove('selected');
selectResultViewType("grid", viewSelectorGrid, viewSelectorList);
});
var viewSelectorList = document.getElementById("view-selector-list");
viewSelectorList.addEventListener('click', function () {
list.display = "list";
window.localStorage.setItem('atomicResultViewType', "list");
viewSelectorList.classList.add('selected');
viewSelectorGrid.classList.remove('selected');
selectResultViewType("list", viewSelectorGrid, viewSelectorList);
});
selectResultViewType(resultViewTypeFromLs || "grid", viewSelectorGrid, viewSelectorList);
}
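The listener code above persists the chosen result layout across page loads: every click writes the view type to `localStorage`, and on load the stored value (defaulting to "grid") decides which selector button to trigger. A standalone sketch of that persistence pattern, with browser storage stubbed by a `Map` so it runs anywhere:

```javascript
// Stub for window.localStorage so the sketch runs outside a browser.
const storage = new Map();

// Record the user's layout choice, as the click handlers above do.
function saveViewType(type) {
  storage.set('atomicResultViewType', type);
}

// Restore the last choice on load, falling back to "grid" when unset.
function restoreViewType() {
  return storage.get('atomicResultViewType') || 'grid';
}
```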
document.addEventListener('DOMContentLoaded', function () {
(async () => {
await customElements.whenDefined("atomic-search-interface");
const searchInterfaceSa = document.querySelector("#sa-search");
const searchInterface = document.querySelector("#search");
if (searchInterfaceSa) {
let ver = getCurrentVersion();
if (ver) {
searchInterfaceSa.innerHTML = searchInterfaceSa.innerHTML.replace('search.html', '/' + ver +'/search.html#f-ovversion=' + ver);
}
await searchInterfaceSa.initialize({
accessToken: "xx1f2aebd3-4307-4632-aeea-17c13378b237",
organizationId: "intelcorporationnonproduction2ybdyblf7",
});
searchInterfaceSa.executeFirstSearch();
}
if (searchInterface) {
await searchInterface.initialize({
accessToken: "xx1f2aebd3-4307-4632-aeea-17c13378b237",
organizationId: "intelcorporationnonproduction2ybdyblf7",
});
searchInterface.executeFirstSearch();
}
addViewTypeListeners();
})();
})
// -----------------------------------
}


@@ -1,60 +1,3 @@
// =================== GENERAL OUTPUT CONFIG =========================
const chartDisclaimers = {
Value: 'Value: Performance/(No_of_sockets * Price_of_CPU_dGPU), where prices are in USD as of December 2022.',
Efficiency: 'Efficiency: Performance/(No_of_sockets * TDP_of_CPU_dGPU), where total power dissipation (TDP) is in Watt as of December 2022.'
}
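The disclaimers above define two derived benchmark metrics as ratios of raw throughput to cost and to power. A sketch of those formulas as functions (the names are illustrative, not from the source):

```javascript
// Value = Performance / (No_of_sockets * Price_of_CPU_dGPU), price in USD.
function valueMetric(performance, sockets, priceUSD) {
  return performance / (sockets * priceUSD);
}

// Efficiency = Performance / (No_of_sockets * TDP_of_CPU_dGPU), TDP in watts.
function efficiencyMetric(performance, sockets, tdpWatts) {
  return performance / (sockets * tdpWatts);
}
```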
const OVdefaultSelections = {
platforms: {name: 'platform',
data: [
'Intel® Core™ i9-12900K CPU-only',
'Intel® Core™ i9-13900K CPU-only',
'Intel® Core™ i5-10500TE CPU-only',
'Intel® Core™ i5-13600K CPU-only',
'Intel® Core™ i5-8500 CPU-only',
'Intel® Core™ i7-8700T CPU-only',
'Intel® Core™ i9-10900TE CPU-only',
'Intel® Core™ i7-1165G7 CPU-only'
]
},
platformFilters: {name: 'coretype', data: ['CPU']},
models: {name: 'networkmodel',
data: [
'bert-large-uncased-whole-word-masking-squad-0001 ',
'mobilenet-ssd ',
'resnet-50',
'yolo_v3_tiny'
]
},
parameters: {name: 'kpi', data: ['Throughput']},
    precisions: {name: 'precision', data: ['INT8', 'FP32']}
}
const OVMSdefaultSelections = {
platforms: {name: 'platform',
data: [
'Intel® Core™ i3-10100 CPU-only',
'Intel® Core™ i5-8500 CPU-only',
'Intel® Core™ i7-8700T CPU-only',
'Intel® Core™ i9-10920X CPU-only',
]
},
models: {name: 'networkmodel',
data: [
'bert-small-uncased-whole-word-masking-squad-0002',
'mobilenet-ssd ',
'resnet-50',
'yolo_v3_tiny'
]
},
parameters: {name: 'kpi', data: ['Throughput']},
    precisions: {name: 'precision', data: ['OV-INT8 (reference)', 'INT8']}
}
// ====================================================
class Filter {
// param: GraphData[], networkModels[]
@@ -94,11 +37,9 @@ class Filter {
});
}
}
class ExcelDataTransformer {
static transform(csvdata, version) {
static transform(csvdata) {
const entries = csvdata.filter((entry) => {
return !entry.includes('begin_rec') && !entry.includes('end_rec');
});
@@ -106,20 +47,17 @@ class ExcelDataTransformer {
// else generate
return entries.map((entry) => {
if (version == 'ovms')
return new GraphData(new OVMSExcelData(entry));
return new GraphData(new ExcelData(entry));
});
}
}
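The transform above first strips the `begin_rec`/`end_rec` marker rows before mapping entries to `GraphData`. That filter can be exercised standalone; the sample rows below are hypothetical:

```javascript
// Strip the begin_rec/end_rec marker rows, mirroring the filter in
// ExcelDataTransformer.transform. Sample rows are hypothetical.
function stripMarkerRows(csvdata) {
  return csvdata.filter((entry) =>
    !entry.includes('begin_rec') && !entry.includes('end_rec'));
}

const rows = [
  ['begin_rec'],
  ['resnet-50', '2022.3', 'core', 'Intel® Core™ i9-12900K CPU-only'],
  ['end_rec'],
];
console.log(stripMarkerRows(rows).length); // 1
```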
class ExcelData {
constructor(csvdataline) {
if (!csvdataline) {
return;
}
this.networkModel = csvdataline[0].toLowerCase();
this.networkModel = csvdataline[0];
this.release = csvdataline[1];
this.ieType = csvdataline[2];
this.platformName = csvdataline[3];
@@ -135,17 +73,21 @@ class ExcelData {
this.tdpPerSocket = csvdataline[13];
this.latency = csvdataline[14];
}
}
class OVMSExcelData extends ExcelData {
constructor(csvdataline) {
super(csvdataline);
this.throughputOVMSInt8 = csvdataline[5];
this.throughputInt8 = csvdataline[4];
this.throughputOVMSFP32 = csvdataline[7];
this.throughputFP32 = csvdataline[6];
}
networkModel = '';
release = '';
ieType = '';
platformName = '';
throughputInt8 = '';
throughputFP16 = '';
throughputFP32 = '';
value = '';
efficiency = '';
price = '';
tdp = '';
sockets = '';
pricePerSocket = '';
tdpPerSocket = '';
latency = '';
}
@@ -159,13 +101,7 @@ class GraphData {
this.ieType = excelData.ieType;
this.platformName = excelData.platformName;
this.kpi = new KPI(
{
'ovmsint8': excelData.throughputOVMSInt8,
'ovmsfp32': excelData.throughputOVMSFP32,
'int8': excelData.throughputInt8,
'fp16': excelData.throughputFP16,
'fp32': excelData.throughputFP32
},
new Precision(excelData.throughputInt8, excelData.throughputFP16, excelData.throughputFP32),
excelData.value,
excelData.efficiency,
excelData.latency);
@@ -176,9 +112,18 @@ class GraphData {
this.tdpPerSocket = excelData.tdpPerSocket;
this.latency = excelData.latency;
}
networkModel = '';
platformName = '';
release = '';
ieType = '';
kpi = new KPI();
price = '';
tdp = '';
sockets = '';
pricePerSocket = '';
tdpPerSocket = '';
}
class KPI {
constructor(precisions, value, efficiency, latency) {
this.throughput = precisions;
@@ -186,8 +131,22 @@ class KPI {
this.efficiency = efficiency;
this.latency = latency;
}
throughput = new Precision();
value = '';
efficiency = '';
latency = '';
}
class Precision {
constructor(int8, fp16, fp32) {
this.int8 = int8;
this.fp16 = fp16;
this.fp32 = fp32;
}
int8 = '';
fp16 = '';
fp32 = '';
}
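`KPI` and `Precision` above are plain data holders: a `KPI` wraps one `Precision` (per-precision throughput) plus value, efficiency, and latency. A minimal sketch of how they compose; the throughput and latency numbers are hypothetical:

```javascript
// Data holders mirroring the Precision and KPI classes above.
// All numbers below are hypothetical.
class Precision {
  constructor(int8, fp16, fp32) {
    this.int8 = int8; this.fp16 = fp16; this.fp32 = fp32;
  }
}
class KPI {
  constructor(precisions, value, efficiency, latency) {
    this.throughput = precisions;
    this.value = value; this.efficiency = efficiency; this.latency = latency;
  }
}

const kpi = new KPI(new Precision(1290.4, 870.1, 512.9), '', '', 3.2);
console.log(kpi.throughput.int8); // 1290.4
```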
class Modal {
static getIeTypeLabel(ietype) {
@@ -207,14 +166,10 @@ class Modal {
static getCoreTypesLabels() {
return ['CPU', 'iGPU', 'CPU+iGPU'];
}
static getKpisLabels(version) {
if (version == 'ovms')
return ['Throughput'];
static getKpisLabels() {
return ['Throughput', 'Value', 'Efficiency', 'Latency'];
}
static getPrecisionsLabels(version) {
if (version == 'ovms')
return ['OV-INT8 (reference)', 'INT8', 'OV-FP32 (reference)', 'FP32'];
static getPrecisionsLabels() {
return ['INT8', 'FP16', 'FP32'];
}
static getCoreTypes(labels) {
@@ -234,10 +189,6 @@ class Modal {
static getPrecisions(labels) {
return labels.map((label) => {
switch (label) {
case 'OV-INT8 (reference)':
return 'ovmsint8';
case 'OV-FP32 (reference)':
return 'ovmsfp32';
case 'INT8':
return 'int8';
case 'FP16':
@@ -251,7 +202,6 @@ class Modal {
}
}
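`Modal.getPrecisions` above maps the displayed precision labels to the dataset keys used when building chart configs. A standalone sketch of the mapping after the OVMS cases are dropped (the FP32 case is assumed to follow the FP16 case shown):

```javascript
// Label → dataset-key mapping, mirroring Modal.getPrecisions above
// after the 'OV-INT8/OV-FP32 (reference)' cases are removed.
function getPrecisions(labels) {
  return labels.map((label) => {
    switch (label) {
      case 'INT8': return 'int8';
      case 'FP16': return 'fp16';
      case 'FP32': return 'fp32';
      default: return '';
    }
  });
}

console.log(getPrecisions(['INT8', 'FP32'])); // [ 'int8', 'fp32' ]
```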
class Graph {
constructor(data) {
this.data = data;
@@ -331,16 +281,12 @@ class Graph {
static getPrecisionConfig(precision) {
switch (precision) {
case 'ovmsint8':
return { data: null, color: '#FF8F51', label: 'FPS (OV Ref. INT8)' };
case 'ovmsfp32':
return { data: null, color: '#B24501', label: 'FPS (OV Ref. FP32)' };
case 'int8':
return { data: null, color: '#00C7FD', label: 'FPS (INT8)' };
case 'fp16':
return { data: null, color: '#009fca', label: 'FPS (FP16)' };
return { data: null, color: '#0068B5', label: 'FPS (FP16)' };
case 'fp32':
return { data: null, color: '#007797', label: 'FPS (FP32)' };
return { data: null, color: '#00C7FD', label: 'FPS (FP32)' };
default:
return {};
}
@@ -362,42 +308,39 @@ class Graph {
}
}
class ChartDisplay {
constructor(mode, numberOfCharts) {
this.mode = mode;
this.numberOfChartsInRow = numberOfCharts;
}
}
$(document).ready(function () {
$('.ov-toolkit-benchmark-results').on('click', () => showModal('ov'));
$('.ovms-toolkit-benchmark-results').on('click', () => showModal('ovms'));
$('.ov-toolkit-benchmark-results').on('click', showModal);
function clickBuildGraphs(graph, networkModels, ietype, platforms, kpis, precisions) {
renderData(graph, networkModels, ietype, platforms, kpis, precisions);
$('.edit-settings-btn').show();
$('.clear-all-btn').hide();
$('.modal-footer').show();
$('#modal-display-graphs').show();
$('.configure-graphs-header h3').addClass('header-inactive');
$('.benchmark-graph-results-header h3').removeClass('header-inactive');
$('.edit-settings-btn').on('click', (event) => {
$('#modal-configure-graphs').show();
$('#modal-display-graphs').hide();
$('.configure-graphs-content').show();
$('.edit-settings-btn').hide();
$('.clear-all-btn').show();
$('.modal-footer').hide();
$('.configure-graphs-header h3').removeClass('header-inactive');
$('.benchmark-graph-results-header h3').addClass('header-inactive');
$('.chart-placeholder').empty();
});
$('.graph-chart-title-header').on('click', (event) => {
var parent = event.target.parentElement;
if ($(parent).children('.chart-wrap,.empty-chart-container').is(":visible")) {
$(parent).children('.chart-wrap,.empty-chart-container').hide();
if ($(parent).children('.chart-wrap.container,.empty-chart-container').is(":visible")) {
$(parent).children('.chart-wrap.container,.empty-chart-container').hide();
$(parent).children('.chevron-right-btn').show();
$(parent).children('.chevron-down-btn').hide();
} else {
$(parent).children('.chart-wrap,.empty-chart-container').show();
$(parent).children('.chart-wrap.container,.empty-chart-container').show();
$(parent).children('.chevron-down-btn').show();
$(parent).children('.chevron-right-btn').hide();
}
@@ -405,24 +348,26 @@ $(document).ready(function () {
}
function hideModal() {
$('#graphModal').remove();
$('#graphModal').hide();
$('body').css('overflow', 'auto');
}
function showModal(version) {
function showModal() {
$('body').css('overflow', 'hidden');
if ($('#graphModal').length) {
$('#graphModal').show();
return;
}
let dataPath = '_static/benchmarks_files/OV-benchmark-data.csv';
if (version == 'ovms')
dataPath = '_static/benchmarks_files/OVMS-benchmark-data.csv';
const dataPath = '_static/benchmarks_files/benchmark-data.csv';
Papa.parse(dataPath, {
download: true,
complete: (result) => renderModal(result, version)
complete: renderModal
});
}
function getSelectedNetworkModels() {
return $('.models-column-one input:checked, .models-column-two input:checked').not('[data-networkmodel="Select All"]').map(function () {
return $('.models-column-one input:checked, .models-column-two input:checked').map(function () {
return $(this).data('networkmodel');
}).get();
}
@@ -447,7 +392,7 @@ $(document).ready(function () {
}).get();
}
function getSelectedPrecisions() {
return $('.precisions-column input:checked').map(function () {
return $('.precisions-column .selected').map(function () {
return $(this).data('precision');
}).get();
}
@@ -459,22 +404,22 @@ $(document).ready(function () {
&& getSelectedKpis().length > 0) {
if (getSelectedKpis().includes('Throughput')) {
if (getSelectedPrecisions().length > 0) {
$('#build-graphs-btn').prop('disabled', false);
$('#modal-build-graphs-btn').prop('disabled', false);
return;
}
$('#build-graphs-btn').prop('disabled', true);
$('#modal-build-graphs-btn').prop('disabled', true);
return;
}
$('#build-graphs-btn').prop('disabled', false);
$('#modal-build-graphs-btn').prop('disabled', false);
return;
}
$('#build-graphs-btn').prop('disabled', true);
$('#modal-build-graphs-btn').prop('disabled', true);
}
function renderModal(result, version) {
function renderModal(result) {
// remove header from csv line
result.data.shift();
var graph = new Graph(ExcelDataTransformer.transform(result.data, version));
var graph = new Graph(ExcelDataTransformer.transform(result.data));
var networkModels = Graph.getNetworkModels(graph.data);
var ieTypes = Graph.getIeTypes(graph.data);
@@ -491,15 +436,12 @@ $(document).ready(function () {
modalContent.addClass('modal-content');
modal.append(modalContent);
const models = networkModels.map((networkModel) => createCheckMark(networkModel, 'networkmodel'));
const selectAllModelsButton = createCheckMark('Select All', 'networkmodel')
modal.find('.models-column-one').append(selectAllModelsButton).append(models.slice(0, models.length / 2));
modal.find('.models-column-two').append(models.slice(models.length / 2));
// hide edit settings button
$('.edit-settings-btn').hide();
const precisions = Modal.getPrecisionsLabels(version).map((precision) => createCheckMark(precision, 'precision'));
modal.find('.precisions-column').append(precisions);
selectAllCheckboxes(precisions);
disableAllCheckboxes(precisions);
const models = networkModels.map((networkModel) => createCheckMark(networkModel, 'networkmodel'));
modal.find('.models-column-one').append(models.slice(0, models.length / 2));
modal.find('.models-column-two').append(models.slice(models.length / 2));
const types = ieTypes.map((ieType) => {
var labelText = Modal.getIeTypeLabel(ieType);
@@ -514,75 +456,52 @@ $(document).ready(function () {
return item;
}
});
modal.find('#modal-display-graphs').hide();
modal.find('.ietype-column').append(types);
modal.find('.ietype-column input').first().prop('checked', true);
const kpiLabels = Modal.getKpisLabels(version).map((kpi) => createCheckMark(kpi, 'kpi'));
const kpiLabels = Modal.getKpisLabels().map((kpi) => createCheckMark(kpi, 'kpi'));
modal.find('.kpi-column').append(kpiLabels);
$('body').prepend(modal);
renderClientPlatforms(graph.data, modal, version, true);
preselectDefaultSettings(graph.data, modal, version);
var fPlatforms = filterClientPlatforms(graph.data, getSelectedNetworkModels(), getSelectedIeType(), Modal.getCoreTypes(getSelectedCoreTypes()));
renderClientPlatforms(modal, Graph.getPlatformNames(fPlatforms));
$('.clear-all-btn').on('click', clearAll);
$('#build-graphs-btn').on('click', () => {
$('#modal-configure-graphs').hide();
$('.clear-all-btn').on('click', () => {
$('.modal-content-grid-container input:checkbox').each((index, object) => $(object).prop('checked', false));
$('.precisions-column').empty();
modal.find('.ietype-column input').first().prop('checked', true);
validateSelections();
});
$('#modal-build-graphs-btn').on('click', () => {
$('.configure-graphs-content').hide();
clickBuildGraphs(graph, getSelectedNetworkModels(), getSelectedIeType(), getSelectedClientPlatforms(), getSelectedKpis(), Modal.getPrecisions(getSelectedPrecisions()));
});
$('.modal-close').on('click', hideModal);
$('.close-btn').on('click', hideModal);
modal.find('.models-column-one input[data-networkmodel="Select All"]').on('click', function() {
if ($(this).prop('checked'))
selectAllCheckboxes(models);
else deSelectAllCheckboxes(models);
modal.find('.ietype-column input').on('click', function (event) {
if (getSelectedIeType() === 'core') {
showCoreSelectorTypes(Modal.getCoreTypesLabels(), graph.data, modal);
}
else {
hideCoreSelectorTypes();
}
var fPlatforms = filterClientPlatforms(graph.data, getSelectedNetworkModels(), getSelectedIeType(), Modal.getCoreTypes(getSelectedCoreTypes()));
renderClientPlatforms(modal, Graph.getPlatformNames(fPlatforms));
});
modal.find('.kpi-column input').on('click', function (event) {
if (getSelectedKpis().includes('Throughput')) {
showPrecisionSelectorTypes(Modal.getPrecisionsLabels());
}
else {
hidePrecisionSelectorTypes();
}
});
modal.find('.ietype-column input').on('click', () => renderClientPlatforms(graph.data, modal, version, true));
modal.find('.kpi-column input').on('click', validateThroughputSelection);
modal.find('input').on('click', validateSelections);
});
}
function validateThroughputSelection() {
const precisions = $('.precisions-column').find('input')
if (getSelectedKpis().includes('Throughput')) {
precisions.prop('disabled', false);
}
else {
precisions.prop('disabled', true);
}
}
function clearAll() {
$('.modal-content-grid-container input:checkbox').each((index, object) => $(object).prop('checked', false));
// Uncomment if you want the Clear All button to reset the Platform Type column as well
// modal.find('.ietype-column input').first().prop('checked', true);
validateThroughputSelection();
validateSelections();
}
function preselectDefaultSettings(data, modal, version) {
const defaultSelections = (version == 'ov') ? OVdefaultSelections : OVMSdefaultSelections;
if (defaultSelections.platformFilters) {
const filters = modal.find('.selectable-box-container').children('.selectable-box');
filters.removeClass('selected');
defaultSelections.platformFilters.data.forEach(selection => {
filters.filter(`[data-${defaultSelections.platformFilters.name}="${selection}"]`).addClass('selected');
});
renderClientPlatforms(data, modal, version);
}
clearAll();
        for (const setting in defaultSelections) {
let name = defaultSelections[setting].name;
defaultSelections[setting].data.forEach(selection => {
$(`input[data-${name}="${selection}"]`).prop('checked', true);
});
}
validateThroughputSelection();
validateSelections();
}
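`preselectDefaultSettings` above checks inputs by building `data-` attribute selectors from each default-selection entry. A sketch of just that selector construction, with a sample entry taken from the defaults:

```javascript
// Build the attribute selectors that preselectDefaultSettings applies
// for each default entry (selector strings only; no jQuery needed here).
function defaultSelectors(selections) {
  const out = [];
  for (const setting in selections) {
    const name = selections[setting].name;
    selections[setting].data.forEach((sel) => {
      out.push(`input[data-${name}="${sel}"]`);
    });
  }
  return out;
}

const sample = { parameters: { name: 'kpi', data: ['Throughput'] } };
console.log(defaultSelectors(sample)[0]); // input[data-kpi="Throughput"]
```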
function showCoreSelectorTypes(coreTypes, graphDataArr, modal) {
if ($('.client-platform-column').find('.selectable-box-container').length) {
@@ -605,7 +524,7 @@ $(document).ready(function () {
$(this).addClass('selected');
}
var fPlatforms = filterClientPlatforms(graphDataArr, getSelectedNetworkModels(), getSelectedIeType(), Modal.getCoreTypes(getSelectedCoreTypes()));
renderClientPlatformsItems(modal, Graph.getPlatformNames(fPlatforms), true);
renderClientPlatforms(modal, Graph.getPlatformNames(fPlatforms));
validateSelections();
});
}
@@ -614,6 +533,36 @@ $(document).ready(function () {
$('.client-platform-column').find('.selectable-box-container').hide();
}
function showPrecisionSelectorTypes(precisions) {
if ($('.precisions-column').find('.selectable-box-container').length) {
$('.precisions-column').find('.selectable-box-container').show();
return;
}
var container = $('<div>');
container.addClass('selectable-box-container');
precisions.forEach((prec) => {
var box = $('<div>' + prec + '</div>');
box.attr('data-precision', prec);
box.addClass('selectable-box');
container.append(box);
});
$('.precisions-column').prepend(container);
$('.precisions-column .selectable-box').on('click', function () {
if ($(this).hasClass('selected')) {
$(this).removeClass('selected');
} else {
$(this).addClass('selected');
}
validateSelections();
});
}
function hidePrecisionSelectorTypes() {
$('.precisions-column').find('.selectable-box-container').hide();
}
function filterClientPlatforms(data, networkModels, ietype, coreTypes) {
// No longer filtering on the network type, if at some point we want the network type as a filter, uncomment this
// var first = Filter.FilterByNetworkModel(data, networkModels);
@@ -626,24 +575,10 @@ $(document).ready(function () {
return Array.from(optionMap.values());
}
function renderClientPlatforms(data, modal, version, preselectEveryItem) {
if (getSelectedIeType() === 'core') {
showCoreSelectorTypes(Modal.getCoreTypesLabels(), data, modal);
if (version === 'ovms')
hideCoreSelectorTypes();
}
else {
hideCoreSelectorTypes();
}
var fPlatforms = filterClientPlatforms(data, getSelectedNetworkModels(), getSelectedIeType(), Modal.getCoreTypes(getSelectedCoreTypes()));
renderClientPlatformsItems(modal, Graph.getPlatformNames(fPlatforms), preselectEveryItem);
}
function renderClientPlatformsItems(modal, platformNames, preselectEveryItem) {
function renderClientPlatforms(modal, platformNames) {
$('.client-platform-column .checkmark-container').remove();
const clientPlatforms = platformNames.map((platform) => createCheckMark(platform, 'platform'));
if (preselectEveryItem)
selectAllCheckboxes(clientPlatforms);
selectAllCheckboxes(clientPlatforms);
modal.find('.client-platform-column').append(clientPlatforms);
modal.find('.client-platform-column input').on('click', validateSelections);
}
@@ -662,115 +597,15 @@ $(document).ready(function () {
// receives a jquery list of items and selects all input checkboxes
function selectAllCheckboxes(items) {
items.forEach((item) => {
item.find(':input').prop('checked', true);
item.find(':input').attr('checked', true);
});
}
function enableAllCheckboxes(items) {
items.forEach((item) => {
item.find(':input').prop('disabled', false);
})
}
function disableAllCheckboxes(items) {
items.forEach((item) => {
item.find(':input').prop('disabled', true);
})
}
function deSelectAllCheckboxes(items) {
items.forEach((item) => {
item.find(':input').prop('checked', false);
});
}
// =================== HTMLLEGEND =========================
const getOrCreateLegendList = (chart, id) => {
const legendContainer = document.getElementById(id);
let listContainer = legendContainer.querySelector('ul');
if (!listContainer) {
listContainer = document.createElement('ul');
listContainer.style.display = 'flex';
listContainer.style.flexDirection = 'column';
listContainer.style.margin = 0;
listContainer.style.padding = 0;
listContainer.style.paddingLeft = '10px';
legendContainer.appendChild(listContainer);
}
return listContainer;
};
const htmlLegendPlugin = {
id: 'htmlLegend',
afterUpdate(chart, args, options) {
const ul = getOrCreateLegendList(chart, chart.options.plugins.htmlLegend.containerID);
// Remove old legend items
while (ul.firstChild) {
ul.firstChild.remove();
}
// Reuse the built-in legendItems generator
const items = chart.legend.legendItems;
items.forEach(item => {
const li = document.createElement('li');
li.style.alignItems = 'center';
li.style.display = 'flex';
li.style.flexDirection = 'row';
li.style.marginLeft = '10px';
li.onclick = () => {
const {type} = chart.config;
if (type === 'pie' || type === 'doughnut') {
// Pie and doughnut charts only have a single dataset and visibility is per item
chart.toggleDataVisibility(item.index);
} else {
chart.setDatasetVisibility(item.datasetIndex, !chart.isDatasetVisible(item.datasetIndex));
}
chart.update();
};
// Color box
const boxSpan = document.createElement('span');
boxSpan.style.background = item.fillStyle;
boxSpan.style.borderColor = item.strokeStyle;
boxSpan.style.borderWidth = item.lineWidth + 'px';
boxSpan.style.display = 'inline-block';
boxSpan.style.height = '12px';
boxSpan.style.marginRight = '10px';
boxSpan.style.width = '30px';
// Text
const textContainer = document.createElement('p');
textContainer.style.color = item.fontColor;
textContainer.style.margin = 0;
textContainer.style.padding = 0;
// textContainer.style.fontFamily = 'Roboto';
textContainer.style.fontSize = '0.8rem';
textContainer.style.textDecoration = item.hidden ? 'line-through' : '';
const text = document.createTextNode(item.text);
textContainer.appendChild(text);
li.appendChild(boxSpan);
li.appendChild(textContainer);
ul.appendChild(li);
});
}
};
// ====================================================
function getChartOptions(title, containerId) {
function getChartOptions(title) {
return {
responsive: true,
maintainAspectRatio: false,
legend: {display: false},
legend: { display: true, position: 'bottom' },
title: {
display: false,
text: title
@@ -789,9 +624,17 @@ $(document).ready(function () {
}]
},
plugins: {
htmlLegend: {
// ID of the container to put the legend in
containerID: containerId,
datalabels: {
color: "#4A4A4A",
anchor: "end",
align: "end",
clamp: false,
offset: 0,
display: true,
font: {
size: 8,
family: 'Roboto'
}
}
}
}
@@ -816,10 +659,8 @@ $(document).ready(function () {
function renderData(graph, networkModels, ietype, platforms, kpis, precisions) {
$('.chart-placeholder').empty();
$('.modal-disclaimer-box').empty();
const display = new ChartDisplay(getChartsDisplayMode(kpis.length), kpis.length);
networkModels.forEach((networkModel) => {
// graph title
var chartName = networkModel;
var chartSlug = chartName.replace(')', '').replace(' (', '-');
var chartContainer = $('<div>');
@@ -827,13 +668,13 @@ $(document).ready(function () {
var chevronDown = '<span class="chevron-down-btn"></span>';
var chevronRight = '<span style="display:none" class="chevron-right-btn"></span>';
$(chevronRight).hide();
var chartContainerHeader = $(chevronDown + chevronRight + '<span class="graph-chart-title">' + networkModel + '</span>');
var chartContainerHeader = $('<span class="graph-chart-title">' + networkModel + '</span>' + chevronDown + chevronRight);
chartContainerHeader.addClass('graph-chart-title-header');
chartContainer.prepend(chartContainerHeader);
chartContainer.attr('id', 'ov-chart-container-' + chartSlug);
chartContainer.addClass('chart-container');
chartContainer.addClass('container');
var filteredNetworkModels = Filter.FilterByNetworkModel(graph.data, [networkModel]);
var filteredIeTypes = Filter.FilterByIeType(filteredNetworkModels, ietype);
@@ -841,30 +682,25 @@ $(document).ready(function () {
$('.chart-placeholder').append(chartContainer);
if (filteredGraphData.length > 0) {
createChartWithNewData(filteredGraphData, chartContainer, kpis, ietype, precisions, display);
createChartWithNewData(filteredGraphData, chartContainer, kpis, ietype, precisions);
} else {
createEmptyChartContainer(chartContainer);
}
})
for (let kpi of kpis) {
if (chartDisclaimers[kpi])
$('.modal-disclaimer-box').append($('<p>').text(chartDisclaimers[kpi]))
}
$(window).off('resize');
$(window).resize(() => resetChartsDisplay(display));
})
};
function createEmptyChartContainer(chartContainer) {
chartContainer.append($('<div>').addClass('empty-chart-container').text('No data for this configuration.'));
}
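Each chart container id in `renderData` above is derived from the model name by the two `replace` calls. A standalone sketch of that slug derivation; the model name is hypothetical:

```javascript
// Slug derivation as in renderData: drop ')' and turn ' (' into '-'.
// Note String.replace with a string pattern only replaces the first match.
function chartSlug(chartName) {
  return chartName.replace(')', '').replace(' (', '-');
}

console.log('ov-chart-container-' + chartSlug('bert-large (INT8)'));
// ov-chart-container-bert-large-INT8
```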
    // this function should take the final data set and turn it into graphs
    // params: model (GraphData[]), chartContainer, kpis, ietype, precisions
function createChartWithNewData(model, chartContainer, kpis, ietype, precisions, display) {
function createChartWithNewData(model, chartContainer, kpis, ietype, precisions) {
var chartWrap = $('<div>');
chartWrap.addClass('chart-wrap');
chartWrap.addClass('container');
chartContainer.append(chartWrap);
var labels = Graph.getPlatformNames(model);
@@ -883,20 +719,12 @@ $(document).ready(function () {
return config;
});
// get the client platform labels and create labels for all the graphs
var labelsContainer = $('<div>');
labelsContainer.addClass('chart-labels-container');
chartWrap.append(labelsContainer);
// get the kpi title's and create headers for the graphs
var chartGraphsContainer = $('<div>');
chartGraphsContainer.addClass('chart-graphs-container');
chartWrap.append(chartGraphsContainer);
graphConfigs.forEach((graphConfig, index) => {
const id = getRandomNumber();
var graphItem = $(`<div id=${id}>`);
graphItem.addClass('graph-item');
var chartColumnHeaderContainer = $('<div>');
chartColumnHeaderContainer.addClass('chart-column-header-container');
chartColumnHeaderContainer.append($('<div class="chart-column-title"></div>'));
graphConfigs.forEach((graphConfig) => {
var columnHeaderContainer = $('<div>');
columnHeaderContainer.addClass('chart-column-title');
var columnIcon = $('<div class="icon">');
@@ -906,134 +734,53 @@ $(document).ready(function () {
columnHeader.append($('<div class="title">' + graphConfig.chartTitle + '</div>'));
columnHeader.append($('<div class="title">' + Graph.getGraphPlatformText(ietype) + '</div>'));
columnHeader.append($('<div class="subtitle">' + graphConfig.chartSubtitle + '</div>'));
columnHeaderContainer.append(columnHeader);
chartGraphsContainer.append(graphItem);
var graphClass = $('<div>');
graphClass.addClass('graph-row');
graphItem.append(columnHeaderContainer);
graphItem.append(graphClass);
processMetricNew(labels, graphConfig.datasets, graphConfig.chartTitle, graphClass, 'graph-row-column', id);
window.setTimeout(() => {
const topPadding = getLabelsTopPadding(display.mode);
const labelsHeight = (labels.length * 55);
const chartHeight = $(graphItem).outerHeight();
const bottomPadding = (chartHeight - (topPadding + labelsHeight));
var labelsItem = $('<div>');
labelsItem.addClass('chart-labels-item');
labels.forEach((label) => {
labelsItem.append($('<div class="title">' + label + '</div>'));
});
labelsItem.css('padding-top', topPadding + 'px');
labelsItem.css('padding-bottom', bottomPadding + 'px');
setInitialItemsVisibility(labelsItem, index, display.mode);
labelsContainer.append(labelsItem);
});
chartColumnHeaderContainer.append(columnHeaderContainer);
});
setChartsDisplayDirection(display.mode);
adjustHeaderIcons(display.mode);
// get the client platform labels and create labels for all the graphs
var labelsContainer = $('<div>');
labelsContainer.addClass('chart-labels-container');
labels.forEach((label) => {
labelsContainer.append($('<div class="title">' + label + '</div>'));
});
// get the legend and create legends for each graph
var graphClass = $('<div>');
graphClass.addClass('graph-row');
chartWrap.append(chartColumnHeaderContainer);
graphClass.append(labelsContainer);
chartWrap.append(graphClass);
graphConfigs.forEach((graphConfig) => {
processMetricNew(labels, graphConfig.datasets, graphConfig.chartTitle, graphClass, 'graph-row-column');
});
// might need this line for multiple graphs on a page
// var displayWidth = $(window).width();
}
function processMetricNew(labels, datasets, chartTitle, container, widthClass, id) {
function processMetricNew(labels, datasets, chartTitle, container, widthClass, displayLabels) {
// ratio for consistent chart label height
var heightRatio = (30 + (labels.length * 55));
var heightRatio = ((labels.length * 55 + 20) / labels.length) + (labels.length * 55);
var chart = $('<div>');
const containerId = `legend-container-${id}`;
const legend = $(`<div id="${containerId}">`);
legend.addClass('graph-legend-container');
chart.addClass('chart');
chart.addClass(widthClass);
chart.height(heightRatio);
var canvas = $('<canvas>');
chart.append(canvas);
container.append(chart);
container.append(legend);
var context = canvas.get(0).getContext('2d');
context.canvas.height = heightRatio;
window.setTimeout(() => {
new Chart(context, {
new Chart(context, {
type: 'horizontalBar',
data: getChartDataNew(labels, datasets),
options: getChartOptions(chartTitle, containerId),
plugins: [htmlLegendPlugin]
});
options: getChartOptions(chartTitle, displayLabels)
});
}
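The canvas height in `processMetricNew` above is derived from the number of platform labels. A sketch of the simpler of the two `heightRatio` formulas shown (a fixed 30px allowance plus 55px per horizontal bar):

```javascript
// Chart height heuristic: 30px fixed allowance plus 55px per bar,
// one bar per platform label, mirroring one heightRatio variant above.
function chartHeight(labelCount) {
  return 30 + labelCount * 55;
}

console.log(chartHeight(8)); // 470
```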
function getRandomNumber() {
return Math.floor(Math.random() * 100000);
}
function resetChartsDisplay(currentDisplay) {
const newDisplayMode = getChartsDisplayMode(currentDisplay.numberOfChartsInRow);
if (currentDisplay.mode != newDisplayMode) {
currentDisplay.mode = newDisplayMode;
setChartsDisplayDirection(currentDisplay.mode);
adjustLabels(currentDisplay.mode);
adjustHeaderIcons(currentDisplay.mode);
}
}
function adjustLabels(displayMode) {
const firstLabels = $('.chart-labels-container').find('.chart-labels-item:first-child');
const labels = $('.chart-labels-container').find('.chart-labels-item');
labels.css('padding-top', getLabelsTopPadding(displayMode));
if (displayMode == 'column') {
labels.show();
}
else {
labels.hide()
firstLabels.show();
}
}
function adjustHeaderIcons(displayMode) {
const icons = $('.graph-item').find('.chart-column-title');
if (displayMode == 'rowCompact')
icons.css('flex-direction', 'column')
else
icons.css('flex-direction', 'row')
}
function getLabelsTopPadding(displayMode) {
return (displayMode == 'rowCompact') ? 105.91 : 83.912;
}
function setChartsDisplayDirection(displayMode) {
const container = $('.chart-placeholder').find('.chart-graphs-container');
if (displayMode == 'column') {
container.css('flex-direction', 'column');
}
else {
container.css('flex-direction', 'row');
}
}
function setInitialItemsVisibility(item, count, displayMode) {
if (count == 0 || displayMode == 'column') item.show();
else item.hide();
}
function getChartsDisplayMode(numberOfCharts) {
switch (numberOfCharts) {
case 4:
return window.matchMedia('(max-width: 721px)').matches ? 'column'
: window.matchMedia('(max-width: 830px)').matches ? 'rowCompact'
: 'row';
case 3:
return window.matchMedia('(max-width: 569px)').matches ? 'column'
: window.matchMedia('(max-width: 649px)').matches ? 'rowCompact'
: 'row';
case 2:
return window.matchMedia('(max-width: 500px)').matches ? 'column'
: 'row';
default:
return 'row';
}
}
});
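`getChartsDisplayMode` above picks a layout from `window.matchMedia` breakpoints, which only exist in a browser. The same decision can be expressed as a pure function of viewport width (breakpoints copied from the code above), which also makes it testable:

```javascript
// Width-based analog of getChartsDisplayMode, with the matchMedia
// queries replaced by plain comparisons so it runs outside a browser.
function displayMode(numberOfCharts, width) {
  switch (numberOfCharts) {
    case 4:
      return width <= 721 ? 'column' : width <= 830 ? 'rowCompact' : 'row';
    case 3:
      return width <= 569 ? 'column' : width <= 649 ? 'rowCompact' : 'row';
    case 2:
      return width <= 500 ? 'column' : 'row';
    default:
      return 'row';
  }
}

console.log(displayMode(4, 800)); // rowCompact
```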


@@ -154,8 +154,11 @@ function addVersionTabs(selectedVersion, query) {
var tab_versions = [{'version': 'ALL'}];
var latestVersion;
if (versions.length) {
tab_versions = [{'version': 'ALL'}].concat(versions.slice(0, -1));
latestVersion = tab_versions[1].version;
tab_versions = [{'version': 'ALL'}].concat(versions.slice(1));
latestVersion = tab_versions[2].version;
if (selectedVersion === 'latest') {
selectedVersion = versions[2].version;
}
}
for (var i = 0; i < tab_versions.length; i++) {
var href;
@@ -348,6 +351,9 @@ $(document).ready(function() {
var page = trim(getURLParameter('page')) || 1;
var selectedVersion = trim(getURLParameter('version'));
if (versionExists(selectedVersion)) {
if (versions[1] && selectedVersion === versions[1].version) {
selectedVersion = 'latest';
}
if (window.location.pathname.startsWith('/cn')) {
selectedVersion = 'cn/' + selectedVersion;
}
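The tab list in `addVersionTabs` above is built by prepending an 'ALL' pseudo-version and slicing the versions array. A standalone sketch of the `slice(1)` variant shown; the version strings are hypothetical:

```javascript
// Tab construction as in one variant of addVersionTabs above:
// prepend an 'ALL' tab and skip the first entry of versions.
// Version strings are hypothetical.
const versions = [{version: 'nightly'}, {version: '2023.0'}, {version: '2022.3'}];
const tab_versions = [{version: 'ALL'}].concat(versions.slice(1));

console.log(tab_versions.map(t => t.version).join(',')); // ALL,2023.0,2022.3
console.log(tab_versions[2].version); // 2022.3
```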


@@ -1,142 +0,0 @@
const eloquaUrl = 'https://s334284386.t.eloqua.com/e/f2';
const newsletterFieldPrefix = 'newsletter-';
// debug url
// const eloquaUrl = 'https://httpbingo.org/post'
const currentPath = window.location.pathname.slice(1).split('/');
const newsletterModalPathVersion = (['cn', 'jp'].includes(currentPath[0])) ?
`/${currentPath[0]}/${currentPath[1]}` :
`/${currentPath[0]}`;
const newsletterModalPath = newsletterModalPathVersion + '/_static/html/newsletter.html';
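The modal path derivation above keeps a language prefix when the first path segment is `cn` or `jp`. A sketch of the same derivation as a function of a fixed pathname (the pathnames are hypothetical):

```javascript
// Derive the newsletter modal path from a pathname, keeping a 'cn' or
// 'jp' prefix when present, mirroring the derivation above.
function newsletterModalPath(pathname) {
  const parts = pathname.slice(1).split('/');
  const versionPath = (['cn', 'jp'].includes(parts[0]))
    ? `/${parts[0]}/${parts[1]}`
    : `/${parts[0]}`;
  return versionPath + '/_static/html/newsletter.html';
}

console.log(newsletterModalPath('/cn/2022.3/home.html'));
// /cn/2022.3/_static/html/newsletter.html
```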
$(document).ready(function () {
const waitForElement = async selector => {
while (document.querySelector(selector) === null) {
await new Promise(resolve => requestAnimationFrame(resolve))
}
return document.querySelector(selector);
};
waitForElement('#newsletterTrigger').then((trigger) => {
$(trigger).on('click', showForm);
})
// trigger with iframe
// $('iframe').on('load', function() {
// $('iframe').contents().find('#newsletterTrigger').on('click', showForm);
// });
function showForm() {
fetch(newsletterModalPath).then((response) => response.text()).then((text) => {
const newsletter = $('<div>');
newsletter.attr('id', 'newsletterModal');
newsletter.addClass('newsletterContainer');
const newsletterContent = $(text);
newsletter.append(newsletterContent);
$('body').prepend(newsletter);
$('#newsletterEmail').focus();
$('.modal-close').on('click', closeForm);
$('#newsletterEmail').on('keyup', validate);
$("#newsletterForm").submit(function(event) {
event.preventDefault();
const formHeight = $(this).outerHeight()
$(this).removeClass('animated fade-up')
$(this).animate({opacity: 0}, 200, 'linear', () => {
$(this).hide()
const loader = $('#loader');
loader.css({'height': formHeight + 16, 'display': 'flex'});
const currentUrl = window.location.protocol + '//' + window.location.hostname + window.location.pathname
$(this).append(`<input type="hidden" name="newsletter-pageSource" value="${currentUrl}">`)
const rawFormData = $(this).serializeArray()
const filteredFormData = [];
for (var entry of rawFormData) {
if (entry['name'].startsWith(newsletterFieldPrefix)) {
entry['name'] = entry['name'].replace(newsletterFieldPrefix, '');
filteredFormData.push(entry)
}
}
$.post(eloquaUrl, $.param(filteredFormData))
.done(function(data) {
// ---------- debug request data
// console.log('#############');
// console.log('Origin: ' + data.headers['Origin'][0]);
// console.log('Url: ' + data.url);
// console.log('Form data:');
// for (key in data.form) {
// console.log(`-- ${key}: ${data.form[key]}`);
// }
// ----------
displayMessage(formHeight, 'pass');
})
.fail(function(error) {
displayMessage(formHeight, 'error', error.status);
});
});
})
})
}
function closeForm() {
$('#newsletterModal').animate({opacity: 0}, 200, 'linear', function() {
this.remove();
});
}
function validate() {
let value = $('#newsletterEmail').val();
const emailPattern = /^[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/;
if (emailPattern.test(value)) {
$('#newsletterEmail').removeClass('failed');
$('.newsletter-submit-btn').prop('disabled', false);
}
else {
$('#newsletterEmail').addClass('failed');
$('.newsletter-submit-btn').prop('disabled', true);
}
}
function displayMessage(boxHeight, status, errorCode) {
$('#loader').hide();
let message = '';
const messageBox = $('#message');
const icon = $('<div class="fa-stack fa-2x newsletter-icon">');
const iconBackground = $('<i class="fas fa-square fa-stack-2x newsletter-icon-background">');
const iconMain = $('<i class="fas fa-stack-1x">');
icon.append(iconBackground);
icon.append(iconMain);
messageBox.css({'height': boxHeight + 16, 'display': 'flex'});
switch(status) {
case 'pass':
iconMain.addClass('fa-check-square');
messageBox.addClass('newsletter-submit--success')
message = 'REGISTRATION SUCCESSFUL'
break;
case 'error':
iconMain.addClass('fa-window-close');
iconMain.addClass('newsletter-submit--failure')
switch(errorCode) {
case 400:
message = 'ALREADY REGISTERED';
break;
default:
message = 'REGISTRATION FAILED';
break;
}
}
window.setTimeout(() => {
messageBox.append(icon);
messageBox.append(message);
});
window.setTimeout(closeForm, 1500);
}
});


@@ -2,12 +2,8 @@
{% block css %}
{{ super() }}
<script type="module" src="https://static.cloud.coveo.com/atomic/v2/atomic.esm.js"></script>
<link rel="stylesheet" href="https://static.cloud.coveo.com/atomic/v2/themes/coveo.css">
<link rel="stylesheet" href="{{ pathto('_static/css/viewer.min.css', 1) }}" type="text/css" />
<link rel="stylesheet" href="{{ pathto('_static/css/custom.css', 1) }}" type="text/css" />
<link rel="stylesheet" href="{{ pathto('_static/css/coveo_custom.css', 1) }}" type="text/css" />
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-datalabels"></script>
@@ -22,7 +18,7 @@
{% block docs_navbar %}
{{ super() }}
<div id="info-banner" class="transition-banner">
<p>OpenVINO 2022.1 has introduced OpenVINO API 2.0. For more information on transition steps from the previous API, see the <a href="https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html">transition guide</a></p>
<p>OpenVINO 2022.1 introduces a new version of OpenVINO API (API 2.0). For more information on the changes and transition steps, see the <a href="https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html">transition guide</a></p>
<button type="button" class="close-banner" onclick="closeTransitionBanner()">
<span aria-hidden="true">&times;</span>
</button>


@@ -1,6 +1,4 @@
<div>
<atomic-search-interface id="sa-search">
<atomic-search-box redirection-url="search.html">
</atomic-search-box>
</atomic-search-interface>
</div>
<form class="searchForm bd-search d-flex align-items-center" action="{{ pathto('search') }}" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="query" id="search-input" placeholder="{{ _(theme_search_bar_text) }}" aria-label="{{ theme_search_bar_text }}" autocomplete="off" >
</form>


@@ -1,143 +1,31 @@
{%- extends "layout.html" %}
{% set title = _('Search') %}
{%- block content %}
{# Added to support a banner with an alert #}
<div class="container-fluid" id="banner"></div>
{% block docs_navbar %}
{%- block scripts %}
{{ super() }}
<div id="info-banner" class="transition-banner">
<p>OpenVINO 2022.1 has introduced OpenVINO API 2.0. For more information on transition steps from the previous API, see the <a href="https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html">transition guide</a></p>
<button type="button" class="close-banner" onclick="closeTransitionBanner()">
<span aria-hidden="true">&times;</span>
</button>
</div>
<script src="{{ pathto('_static/js/hide_banner.js', 1) }}"></script>
{% endblock %}
{% block body %}
<atomic-search-interface id="search"
fields-to-include='["ovversion", "ovdoctype", "filetype", "date", "source", "author", "sourcetype", "language", "description"]'>
<atomic-search-layout>
<atomic-layout-section section="search">
<atomic-search-box>
<atomic-search-box-query-suggestions></atomic-search-box-query-suggestions>
</atomic-search-box>
</atomic-layout-section>
<!-- ADDITIONAL FILTERS SECTION-->
<atomic-layout-section section="facets">
<div class="view-selector-container">
<button id="view-selector-grid" class="view-selector">
<i class="fas fa-th"></i> Grid
</button>
<button id="view-selector-list" class="view-selector">
<i class="fas fa-list"></i> List
</button>
</div>
<atomic-facet-manager>
<atomic-facet field="ovversion" label="Version" sort-criteria="alphanumericDescending"></atomic-facet>
<!-- <atomic-facet field="ovdoctype" label="Document type"></atomic-facet> -->
<!-- <atomic-facet field="language" label="Language"></atomic-facet> -->
</atomic-facet-manager>
</atomic-layout-section>
<atomic-layout-section section="main">
<atomic-layout-section section="status">
<!-- RESULTS SUMMARY SECTION -->
<atomic-breadbox></atomic-breadbox>
<atomic-query-summary></atomic-query-summary>
<atomic-refine-toggle></atomic-refine-toggle>
<!-- SORT SECTION -->
<atomic-sort-dropdown>
<atomic-sort-expression label="relevance" expression="relevancy"></atomic-sort-expression>
<atomic-sort-expression label="most-recent" expression="date descending"></atomic-sort-expression>
</atomic-sort-dropdown>
<atomic-did-you-mean></atomic-did-you-mean>
<atomic-notifications></atomic-notifications>
</atomic-layout-section>
<atomic-layout-section section="results">
<atomic-result-list id="atomic-result-list" display="grid">
<atomic-result-template>
<template>
<!-- RESULT TOP BADGES SECTION -->
<atomic-result-section-badges>
<atomic-result-badge>
<atomic-result-multi-value-text field="ovversion"></atomic-result-multi-value-text>
</atomic-result-badge>
<atomic-result-badge
icon="https://raw.githubusercontent.com/Rush/Font-Awesome-SVG-PNG/master/black/svg/language.svg">
<atomic-result-multi-value-text field="language"></atomic-result-multi-value-text>
</atomic-result-badge>
<atomic-field-condition must-match-is-recommendation="true">
<atomic-result-badge label="Recommended"></atomic-result-badge>
</atomic-field-condition>
<atomic-field-condition must-match-is-top-result="true">
<atomic-result-badge label="Top Result"></atomic-result-badge>
</atomic-field-condition>
</atomic-result-section-badges>
<!-- RESULT ICON SECTION -->
<atomic-result-section-visual>
<atomic-icon class="icon" icon="assets://gform.svg"></atomic-icon>
<!-- EXAMPLE OF CHANGING ICON -->
<!-- <atomic-field-condition must-match-ovversion="2021.4">
<atomic-icon class="icon" icon="assets://gsheet.svg"></atomic-icon>
</atomic-field-condition>
<atomic-field-condition must-match-ovversion="2022.2">
<atomic-icon class="icon" icon="assets://html.svg"></atomic-icon>
</atomic-field-condition>
<atomic-field-condition must-not-match-ovversion="2022.2, 2021.4">
<atomic-icon class="icon" icon="assets://gform.svg"></atomic-icon>
</atomic-field-condition> -->
</atomic-result-section-visual>
<atomic-result-section-title>
<atomic-result-link target="_blank"></atomic-result-link>
</atomic-result-section-title>
<atomic-result-section-excerpt>
<atomic-result-text field="excerpt"></atomic-result-text>
</atomic-result-section-excerpt>
<atomic-result-section-bottom-metadata>
<atomic-field-condition class="field" if-defined="description">
<atomic-result-text field="description"></atomic-result-text>
</atomic-field-condition>
</atomic-result-section-bottom-metadata>
</template>
</atomic-result-template>
</atomic-result-list>
<atomic-query-error></atomic-query-error>
<atomic-no-results></atomic-no-results>
</atomic-layout-section>
<atomic-layout-section section="pagination">
<atomic-load-more-results></atomic-load-more-results>
</atomic-layout-section>
</atomic-layout-section>
</atomic-search-layout>
</atomic-search-interface>
{% endblock %}
{%- block scripts_end %}
{{ _webpack.body_post() }}
{%- endblock %}
<link rel="stylesheet" href="{{ pathto('_static/css/gsearch.css', 1) }}" type="text/css" />
<script src="https://apis.google.com/js/api.js"></script>
<script src="{{ pathto('_static/js/gsearch.js', 1) }}"></script>
{%- endblock %}
{% block body %}
<h1 id="search-documentation">{{ _('Search') }}</h1>
<p id="searchinfo">
Search Results...
</p>
<div id="gs-tabs-area"></div>
{% block scriptwarning %}
<div id="fallback" class="admonition warning">
<script>$('#fallback').hide();</script>
</div>
{% endblock %}
{% block searchbox %}
<form class="searchForm" action="" method="get">
<input id="searchfield" type="text" name="query" aria-labelledby="search-documentation" value="" />
<input type="submit" value="{{ _('search') }}" />
<span id="search-progress" style="padding-left: 10px"></span>
</form>
{% endblock %}
{% block searchresults %}
<div id="gs-tabs-area"></div>
<div id="searchresults"></div>
{% endblock %}
{% endblock %}


@@ -6,173 +6,21 @@
:maxdepth: 1
:hidden:
openvino_docs_performance_benchmarks_faq
openvino_docs_performance_int8_vs_fp32
Performance Data Spreadsheet (download xlsx) <https://docs.openvino.ai/2022.3/_static/benchmarks_files/OV-2022.3-Performance-Data.xlsx>
openvino_docs_performance_benchmarks_openvino
openvino_docs_MO_DG_Getting_Performance_Numbers
This page presents benchmark results for `Intel® Distribution of OpenVINO™ toolkit <https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html>`__
and :doc:`OpenVINO Model Server <ovms_what_is_openvino_model_server>`, for a representative selection of public neural networks and Intel® devices.
The results may help you decide which hardware to use in your applications or plan AI workload for the hardware you have already implemented in your solutions.
Click the buttons below to see the chosen benchmark data.
.. grid:: 1 1 2 2
:gutter: 4
.. grid-item::
.. button-link:: #
:class: ov-toolkit-benchmark-results
:color: primary
:outline:
:expand:
:material-regular:`bar_chart;1.4em` OpenVINO Benchmark Graphs
.. grid-item::
.. button-link:: #
:class: ovms-toolkit-benchmark-results
:color: primary
:outline:
:expand:
:material-regular:`bar_chart;1.4em` OVMS Benchmark Graphs
For a successful deep learning inference application, the following four key metrics need to be considered:
.. tab:: :material-regular:`keyboard_double_arrow_right;1.4em` Throughput
Measures the number of inferences delivered within a latency threshold
(for example, number of Frames Per Second - FPS). When deploying a system with
deep learning inference, select the throughput that delivers the best trade-off
between latency and power for the price and performance that meets your requirements.
.. tab:: :material-regular:`attach_money;1.4em` Value
While throughput is important, what is more critical in edge AI deployments is
the performance efficiency or performance-per-cost. Application performance in
throughput per dollar of system cost is the best measure of value. The value KPI is
calculated as “Throughput measured as inferences per second / price of inference engine”.
This means that for a 2-socket system, 2x the price of a CPU is used. Prices are as of the
date of benchmarking; sources can be found as links in the Hardware Platforms (PDF) description below.
.. tab:: :material-regular:`flash_on;1.4em` Efficiency
System power is a key consideration from the edge to the data center. When selecting
deep learning solutions, power efficiency (throughput/watt) is a critical factor to consider.
Intel designs provide excellent power efficiency for running deep learning workloads.
The efficiency KPI is calculated as “Throughput measured as inferences per second / TDP of
inference engine”. This means that for a 2-socket system, 2x the power dissipation (TDP) of a CPU is used.
TDP values are as of the date of benchmarking; sources can be found as links in the Hardware Platforms (PDF) description below.
.. tab:: :material-regular:`hourglass_empty;1.4em` Latency
This measures the synchronous execution of inference requests and is reported in milliseconds.
Each inference request (for example: preprocess, infer, postprocess) is allowed to complete before
the next is started. This performance metric is relevant in usage scenarios where a single image
input needs to be acted upon as soon as possible. Examples include the healthcare sector, where
medical personnel request analysis of a single ultrasound scan, and real-time or near-real-time
applications such as an industrial robot's response to actions in its environment or obstacle
avoidance for autonomous vehicles.
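The value and efficiency KPIs above are simple ratios. As a minimal sketch (with entirely hypothetical numbers; `kpis` is an illustrative helper, not part of OpenVINO), the stated 2-socket doubling rule can be written as:

```javascript
// Sketch of the value and efficiency KPIs described above.
// For a 2-socket system, the CPU price and TDP are doubled, as the
// text specifies. All numbers here are hypothetical.
function kpis(fps, cpuPriceUsd, cpuTdpWatts, sockets = 1) {
  const price = cpuPriceUsd * sockets; // total price of the inference engine
  const tdp = cpuTdpWatts * sockets;   // total power dissipation (TDP)
  return {
    value: fps / price,                // inferences per second per dollar
    efficiency: fps / tdp,             // inferences per second per watt
  };
}

// Example: 1200 FPS on a hypothetical 2-socket CPU ($2000, 162 W per socket)
const { value, efficiency } = kpis(1200, 2000, 162, 2);
console.log(value.toFixed(3), efficiency.toFixed(3)); // 0.300 3.704
```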
Platforms, Configurations, Methodology
###########################################################
For a listing of all platforms and configurations used for testing, refer to the following:
.. grid:: 1 1 2 2
:gutter: 4
.. grid-item::
.. button-link:: _static/benchmarks_files/platform_list_22.3.pdf
:color: primary
:outline:
:expand:
:material-regular:`download;1.5em` Click for Hardware Platforms [PDF]
.. button-link:: _static/benchmarks_files/OV-2022.3-system-info-detailed.xlsx
:color: primary
:outline:
:expand:
:material-regular:`download;1.5em` Click for Configuration Details [XLSX]
.. the files above need to be updated with OVMS !!!
The OpenVINO benchmark setup includes a single system with both OpenVINO™ and the benchmark application installed.
It measures the time spent on actual inference (excluding any pre- or post-processing) and then reports the inferences
per second (or Frames Per Second).
OpenVINO™ Model Server (OVMS) employs the Intel® Distribution of OpenVINO™ toolkit runtime libraries and exposes a set of
models via a convenient inference API over gRPC or HTTP/REST. Its benchmark results are measured with the configuration of
multiple-clients-single-server, using two hardware platforms connected by Ethernet. Network bandwidth depends on both the platforms
and the models under investigation; it is set so that it is not a bottleneck for workload intensity. The connection is dedicated
only to measuring performance.
.. dropdown:: See more details about OVMS benchmark setup
The benchmark setup for OVMS consists of four main parts:
.. image:: _static/images/performance_benchmarks_ovms_02.png
:alt: OVMS Benchmark Setup Diagram
* **OpenVINO™ Model Server** is launched as a docker container on the server platform and listens for (and answers)
requests from clients. OpenVINO™ Model Server runs on the same machine as the OpenVINO™ toolkit benchmark application
in the corresponding benchmark runs. Models served by OpenVINO™ Model Server are located in a local file system mounted into
the docker container. The OpenVINO™ Model Server instance communicates with other components via ports over a dedicated docker network.
* **Clients** run on a separate physical machine, referred to as the client platform. Clients are implemented in Python 3
based on the TensorFlow* API and work as parallel processes. Each client waits for a response from OpenVINO™
Model Server before sending the next request. The clients also verify the responses.
* **Load balancer** runs on the client platform in a docker container. HAProxy is used for this purpose. Its main role is
counting the requests forwarded from clients to OpenVINO™ Model Server, estimating their latency, and exposing this information through a
Prometheus service. The load balancer is located on the client side to simulate a real-life scenario that includes the
impact of the physical network on the reported metrics.
* **Execution Controller** is launched on the client platform. It is responsible for synchronization of the whole measurement process,
downloading metrics from the load balancer, and presenting the final report of the execution.
Test performance yourself
####################################
You can also test performance for your system yourself, following the guide on :doc:`getting performance numbers <openvino_docs_MO_DG_Getting_Performance_Numbers>`.
Performance of a particular application can also be evaluated virtually using `Intel® DevCloud for the Edge <https://devcloud.intel.com/edge/>`__.
It is a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit.
To learn more about it, visit `the website <https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/overview.html>`__
or `create an account <https://www.intel.com/content/www/us/en/secure/forms/devcloud-enrollment/account-provisioning.html>`__.
Disclaimers
####################################
* Intel® Distribution of OpenVINO™ toolkit performance results are based on release 2022.3, as of December 13, 2022.
* OpenVINO Model Server performance results are based on release 2022.3, as of December 13, 2022.
The results may not reflect all publicly available updates. Intel technologies' features and benefits depend on system configuration
and may require enabled hardware, software, or service activation. Learn more at intel.com, or from the OEM or retailer.
See configuration disclosure for details. No product can be absolutely secure.
Performance varies by use, configuration and other factors. Learn more at `www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.
Your costs and results may vary.
Intel optimizations, for Intel compilers or other products, may not optimize to the same degree for non-Intel products.
@endsphinxdirective
The [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) helps accelerate deep learning inference across a variety of Intel® processors and accelerators.
The benchmark results below demonstrate high performance gains on several public neural networks on multiple Intel® CPUs, GPUs and VPUs covering a broad performance range. The results may be helpful when deciding which hardware is best for your applications or when planning AI workloads on the Intel computing already included in your solutions.
Benchmarks are available for:
* [Intel® Distribution of OpenVINO™ toolkit](performance_benchmarks_openvino.md).
You can also test performance for your system yourself, following the guide on [getting performance numbers](../MO_DG/prepare_model/Getting_performance_numbers.md).
Performance of a particular application can also be evaluated virtually using [Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/). It is a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. To learn more about it, visit [the website](https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/overview.html) or [create an account](https://www.intel.com/content/www/us/en/forms/idz/devcloud-registration.html?tgt=https://www.intel.com/content/www/us/en/secure/forms/devcloud-enrollment/account-provisioning.html).


@@ -0,0 +1,89 @@
# Intel® Distribution of OpenVINO™ toolkit Benchmark Results {#openvino_docs_performance_benchmarks_openvino}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_performance_benchmarks_faq
openvino_docs_performance_int8_vs_fp32
Performance Data Spreadsheet (download xlsx) <https://docs.openvino.ai/2022.3/_static/benchmarks_files/OV-2022.3-Performance-Data.xlsx>
Click the "Benchmark Graphs" button to see the OpenVINO™ benchmark graphs. Select the models, the hardware platforms (CPU SKUs),
precision and performance index from the lists and click the “Build Graphs” button.
.. button-link:: #
:class: ov-toolkit-benchmark-results
:color: primary
:outline:
:material-regular:`bar_chart;1.4em` Benchmark Graphs
Measuring inference performance involves many variables and is extremely use-case and application dependent.
Below are four parameters for measurements, which are key elements to consider for a successful deep learning inference application:
.. tab:: :material-regular:`keyboard_double_arrow_right;1.4em` Throughput
Measures the number of inferences delivered within a latency threshold (for example, number of Frames Per Second - FPS). When deploying a system with deep learning inference, select the throughput that delivers the best trade-off between latency and power for the price and performance that meets your requirements.
.. tab:: :material-regular:`attach_money;1.4em` Value
While throughput is important, what is more critical in edge AI deployments is the performance efficiency or performance-per-cost. Application performance in throughput per dollar of system cost is the best measure of value. The value KPI is calculated as “Throughput measured as inferences per second / price of inference engine”. This means that for a 2-socket system, 2x the price of a CPU is used. Prices are as of the date of benchmarking; sources can be found as links in the Hardware Platforms (PDF) description below.
.. tab:: :material-regular:`flash_on;1.4em` Efficiency
System power is a key consideration from the edge to the data center. When selecting deep learning solutions, power efficiency (throughput/watt) is a critical factor to consider. Intel designs provide excellent power efficiency for running deep learning workloads. The efficiency KPI is calculated as “Throughput measured as inferences per second / TDP of inference engine”. This means that for a 2-socket system, 2x the power dissipation (TDP) of a CPU is used. TDP values are as of the date of benchmarking; sources can be found as links in the Hardware Platforms (PDF) description below.
.. tab:: :material-regular:`hourglass_empty;1.4em` Latency
This measures the synchronous execution of inference requests and is reported in milliseconds. Each inference request (for example: preprocess, infer, postprocess) is allowed to complete before the next is started. This performance metric is relevant in usage scenarios where a single image input needs to be acted upon as soon as possible. Examples include the healthcare sector, where medical personnel request analysis of a single ultrasound scan, and real-time or near-real-time applications such as an industrial robot's response to actions in its environment or obstacle avoidance for autonomous vehicles.
Platform & Configurations
####################################
For a listing of all platforms and configurations used for testing, refer to the following:
.. button-link:: _static/benchmarks_files/platform_list_22.3.pdf
:color: primary
:outline:
:material-regular:`download;1.5em` Click for Hardware Platforms [PDF]
.. button-link:: _static/benchmarks_files/OV-2022.3-system-info-detailed.xlsx
:color: primary
:outline:
:material-regular:`download;1.5em` Click for Configuration Details [XLSX]
This benchmark setup includes a single machine on which both the benchmark application and the OpenVINO™ installation reside. The presented performance benchmark numbers are based on the release 2022.3 of the Intel® Distribution of OpenVINO™ toolkit.
The benchmark application loads the OpenVINO™ Runtime and executes inferences on the specified hardware (CPU, GPU or GNA).
It measures the time spent on actual inference (excluding any pre- or post-processing) and then reports the inferences per second (or Frames Per Second).
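As a rough sketch of that measurement loop (assuming a Node.js environment; `measureFps` and `runInference` are hypothetical stand-ins, not the real benchmark application), timing only the inference calls and deriving FPS might look like:

```javascript
// Time only the inference calls, excluding any pre- or post-processing,
// then report inferences per second (FPS). `runInference` stands in for
// the actual benchmark application's inference call.
function measureFps(runInference, iterations) {
  const start = process.hrtime.bigint();            // high-resolution timer
  for (let i = 0; i < iterations; i++) runInference();
  const elapsedSec = Number(process.hrtime.bigint() - start) / 1e9;
  return iterations / elapsedSec;                   // inferences per second
}

// Usage with a dummy no-op "inference":
const fps = measureFps(() => { /* model.infer(input) */ }, 1000);
```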
Disclaimers
####################################
Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2022.3.
Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of December 13, 2022 and may not reflect all publicly available updates. See configuration disclosure for details. No product can be absolutely secure.
Performance varies by use, configuration and other factors. Learn more at `www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.
Your costs and results may vary.
Intel optimizations, for Intel compilers or other products, may not optimize to the same degree for non-Intel products.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
@endsphinxdirective


@@ -0,0 +1,236 @@
@sphinxdirective
:orphan:
@endsphinxdirective
# OpenVINO™ Model Server Benchmark Results {#openvino_docs_performance_benchmarks_ovms}
OpenVINO™ Model Server is an open-source, production-grade inference platform that exposes a set of models via a convenient inference API over gRPC or HTTP/REST. It employs the OpenVINO™ Runtime libraries from the Intel® Distribution of OpenVINO™ toolkit to extend workloads across Intel® hardware including CPU, GPU and others.
![OpenVINO™ Model Server](../img/performance_benchmarks_ovms_01.png)
## Measurement Methodology
OpenVINO™ Model Server is measured in a multiple-clients-single-server configuration using two hardware platforms connected by an Ethernet network. The network bandwidth depends on the platforms as well as the models under investigation, and it is set so as not to be a bottleneck for workload intensity. This connection is dedicated only to the performance measurements. The benchmark setup consists of four main parts:
![OVMS Benchmark Setup Diagram](../img/performance_benchmarks_ovms_02.png)
* **OpenVINO™ Model Server** is launched as a docker container on the server platform and listens for (and answers) requests from clients. OpenVINO™ Model Server runs on the same machine as the OpenVINO™ toolkit benchmark application in the corresponding benchmark runs. Models served by OpenVINO™ Model Server are located in a local file system mounted into the docker container. The OpenVINO™ Model Server instance communicates with other components via ports over a dedicated docker network.
* **Clients** run on a separate physical machine, referred to as the client platform. Clients are implemented in Python 3 based on the TensorFlow* API and work as parallel processes. Each client waits for a response from OpenVINO™ Model Server before sending the next request. The clients also verify the responses.
* **Load balancer** runs on the client platform in a docker container. HAProxy is used for this purpose. Its main role is counting the requests forwarded from clients to OpenVINO™ Model Server, estimating their latency, and exposing this information through a Prometheus service. The load balancer is located on the client side to simulate a real-life scenario that includes the impact of the physical network on the reported metrics.
* **Execution Controller** is launched on the client platform. It is responsible for synchronization of the whole measurement process, downloading metrics from the load balancer, and presenting the final report of the execution.
## bert-small-uncased-whole-word-masking-squad-002 (INT8)
![](../_static/benchmarks_files/ovms/bert-small-uncased-whole-word-masking-squad-002-int8.png)
## bert-small-uncased-whole-word-masking-squad-002 (FP32)
![](../_static/benchmarks_files/ovms/bert-small-uncased-whole-word-masking-squad-002-fp32.png)
## densenet-121 (INT8)
![](../_static/benchmarks_files/ovms/densenet-121-int8.png)
## densenet-121 (FP32)
![](../_static/benchmarks_files/ovms/densenet-121-fp32.png)
## efficientdet-d0 (INT8)
![](../_static/benchmarks_files/ovms/efficientdet-d0-int8.png)
## efficientdet-d0 (FP32)
![](../_static/benchmarks_files/ovms/efficientdet-d0-fp32.png)
## inception-v4 (INT8)
![](../_static/benchmarks_files/ovms/inception-v4-int8.png)
## inception-v4 (FP32)
![](../_static/benchmarks_files/ovms/inception-v4-fp32.png)
## mobilenet-ssd (INT8)
![](../_static/benchmarks_files/ovms/mobilenet-ssd-int8.png)
## mobilenet-ssd (FP32)
![](../_static/benchmarks_files/ovms/mobilenet-ssd-fp32.png)
## mobilenet-v2 (INT8)
![](../_static/benchmarks_files/ovms/mobilenet-v2-int8.png)
## mobilenet-v2 (FP32)
![](../_static/benchmarks_files/ovms/mobilenet-v2-fp32.png)
## resnet-18 (INT8)
![](../_static/benchmarks_files/ovms/resnet-18-int8.png)
## resnet-18 (FP32)
![](../_static/benchmarks_files/ovms/resnet-18-fp32.png)
## resnet-50 (INT8)
![](../_static/benchmarks_files/ovms/resnet-50-int8.png)
## resnet-50 (FP32)
![](../_static/benchmarks_files/ovms/resnet-50-fp32.png)
## ssd-resnt34-1200 (INT8)
![](../_static/benchmarks_files/ovms/ssd-resnt34-1200-int8.png)
## ssd-resnt34-1200 (FP32)
![](../_static/benchmarks_files/ovms/ssd-resnt34-1200-fp32.png)
## unet-camvid-onnx-001 (INT8)
![](../_static/benchmarks_files/ovms/unet-camvid-onnx-001-int8.png)
## unet-camvid-onnx-001 (FP32)
![](../_static/benchmarks_files/ovms/unet-camvid-onnx-001-fp32.png)
## yolo-v3-tiny (INT8)
![](../_static/benchmarks_files/ovms/yolo-v3-tiny-int8.png)
## yolo-v3-tiny (FP32)
![](../_static/benchmarks_files/ovms/yolo-v3-tiny-fp32.png)
## yolo-v4 (INT8)
![](../_static/benchmarks_files/ovms/yolo-v4-int8.png)
## yolo-v4 (FP32)
![](../_static/benchmarks_files/ovms/yolo-v4-fp32.png)
## Platform Configurations
OpenVINO™ Model Server performance benchmark numbers are based on release 2022.2. Performance results are based on testing as of November 16, 2022 and may not reflect all publicly available updates.
@sphinxdirective
.. dropdown:: Platform with Intel® Xeon® Platinum 8260M
.. table::
:widths: 25 25 50
+--------------------------+-------------------------------------------+----------------------------------------+
| | Server Platform | Client Platform |
+==========================+===========================================+========================================+
| Motherboard | Inspur YZMB-00882-104 NF5280M5 | Inspur YZMB-00882-104 NF5280M5 |
+--------------------------+-------------------------------------------+----------------------------------------+
| Memory | Samsung 16 x 16GB @ 2666 MT/s DDR4 | Kingston 16 x 16GB @ 2666 MT/s DDR4 |
+--------------------------+-------------------------------------------+----------------------------------------+
| CPU | Intel® Xeon® Platinum 8260M CPU @ 2.40GHz | Intel® Xeon® Gold 6238M CPU @ 2.10GHz |
+--------------------------+-------------------------------------------+----------------------------------------+
| Selected CPU Flags | Hyper Threading, Turbo Boost, DL Boost | Hyper Threading, Turbo Boost, DL Boost |
+--------------------------+-------------------------------------------+----------------------------------------+
| CPU Thermal Design Power | 162W | 150W |
+--------------------------+-------------------------------------------+----------------------------------------+
| Operating System | Ubuntu 20.04.4 LTS | Ubuntu 20.04.4 LTS |
+--------------------------+-------------------------------------------+----------------------------------------+
| Kernel Version | 5.4.0-107-generic | 5.4.0-107-generic |
+--------------------------+-------------------------------------------+----------------------------------------+
| BIOS Vendor | American Megatrends Inc. | AMI |
+--------------------------+-------------------------------------------+----------------------------------------+
| BIOS Version & Release | 4.1.16; date: 06/23/2020 | 4.1.16; date: 06/23/2020 |
+--------------------------+-------------------------------------------+----------------------------------------+
| Docker Version | 20.10.3 | 20.10.3 |
+--------------------------+-------------------------------------------+----------------------------------------+
| Network Speed | 40 Gb/s | 40 Gb/s |
+--------------------------+-------------------------------------------+----------------------------------------+
.. dropdown:: Platform with Intel® Xeon® Gold 6238M
.. table::
:widths: 25 25 50
+--------------------------+-------------------------------------------+--------------------------------------------+
| | Server Platform | Client Platform |
+==========================+===========================================+============================================+
| Motherboard | Inspur YZMB-00882-104 NF5280M5 | Inspur YZMB-00882-104 NF5280M5 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Memory | Kingston 16 x 16GB @ 2666 MT/s DDR4 | Samsung 16 x 16GB @ 2666 MT/s DDR4 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| CPU | Intel® Xeon® Gold 6238M CPU @ 2.10GHz | Intel® Xeon® Platinum 8260M CPU @ 2.40GHz |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Selected CPU Flags | Hyper Threading, Turbo Boost, DL Boost | Hyper Threading, Turbo Boost, DL Boost |
+--------------------------+-------------------------------------------+--------------------------------------------+
| CPU Thermal Design Power | 150W | 162W |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Operating System | Ubuntu 20.04.4 LTS | Ubuntu 20.04.4 LTS |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Kernel Version | 5.4.0-107-generic | 5.4.0-107-generic |
+--------------------------+-------------------------------------------+--------------------------------------------+
| BIOS Vendor | AMI | American Megatrends Inc. |
+--------------------------+-------------------------------------------+--------------------------------------------+
| BIOS Version & Release | 4.1.16; date: 06/23/2020 | 4.1.16; date: 06/23/2020 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Docker Version | 20.10.3 | 20.10.3 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Network Speed | 40 Gb/s | 40 Gb/s |
+--------------------------+-------------------------------------------+--------------------------------------------+
.. dropdown:: Platform with Intel® Core™ i9-10920X
.. table::
:widths: 25 25 50
+--------------------------+-------------------------------------------+--------------------------------------------+
| | Server Platform | Client Platform |
+==========================+===========================================+============================================+
| Motherboard | ASUSTeK COMPUTER INC. PRIME X299-A II | ASUSTeK COMPUTER INC. PRIME Z370-P |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Memory | Corsair 4 x 16GB @ 2666 MT/s DDR4 | Corsair 4 x 16GB @ 2133 MT/s DDR4 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| CPU | Intel® Core™ i9-10920X CPU @ 3.50GHz | Intel® Core™ i7-8700T CPU @ 2.40GHz |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Selected CPU Flags | Hyper Threading, Turbo Boost, DL Boost | Hyper Threading, Turbo Boost, DL Boost |
+--------------------------+-------------------------------------------+--------------------------------------------+
| CPU Thermal Design Power | 165W | 35 W |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Operating System | Ubuntu 20.04.4 LTS | Ubuntu 20.04.4 LTS |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Kernel Version | 5.4.0-107-generic | 5.4.0-107-generic |
+--------------------------+-------------------------------------------+--------------------------------------------+
| BIOS Vendor | American Megatrends Inc. | American Megatrends Inc. |
+--------------------------+-------------------------------------------+--------------------------------------------+
| BIOS Version & Release | 0702; date: 06/10/2020 | 2401; date: 07/15/2019 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Docker Version | 19.03.13 | 19.03.14 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Network Speed | 10 Gb/s | 10 Gb/s |
+--------------------------+-------------------------------------------+--------------------------------------------+
.. dropdown:: Platform with Intel® Core™ i7-8700T
.. table::
:widths: 25 25 50
+--------------------------+-------------------------------------------+--------------------------------------------+
| | Server Platform | Client Platform |
+==========================+===========================================+============================================+
| Motherboard | ASUSTeK COMPUTER INC. PRIME Z370-P | ASUSTeK COMPUTER INC. PRIME X299-A II |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Memory | Corsair 4 x 16GB @ 2133 MT/s DDR4 | Corsair 4 x 16GB @ 2666 MT/s DDR4 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| CPU | Intel® Core™ i7-8700T CPU @ 2.40GHz | Intel® Core™ i9-10920X CPU @ 3.50GHz |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Selected CPU Flags | Hyper Threading, Turbo Boost | Hyper Threading, Turbo Boost |
+--------------------------+-------------------------------------------+--------------------------------------------+
| CPU Thermal Design Power | 35W | 165 W |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Operating System | Ubuntu 20.04.4 LTS | Ubuntu 20.04.4 LTS |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Kernel Version | 5.4.0-107-generic | 5.4.0-107-generic |
+--------------------------+-------------------------------------------+--------------------------------------------+
| BIOS Vendor | American Megatrends Inc. | American Megatrends Inc. |
+--------------------------+-------------------------------------------+--------------------------------------------+
| BIOS Version & Release | 2401; date: 07/15/2019 | 0702; date: 06/10/2020 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Docker Version | 19.03.14 | 19.03.13 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Network Speed | 10 Gb/s | 10 Gb/s |
+--------------------------+-------------------------------------------+--------------------------------------------+
.. dropdown:: Platform with Intel® Core™ i5-8500
.. table::
:widths: 25 25 50
+--------------------------+-------------------------------------------+--------------------------------------------+
| | Server Platform | Client Platform |
+==========================+===========================================+============================================+
| Motherboard | ASUSTeK COMPUTER INC. PRIME Z370-A | Gigabyte Technology Co., Ltd. Z390 UD |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Memory | Corsair 2 x 16GB @ 2133 MT/s DDR4 | 029E 4 x 8GB @ 2400 MT/s DDR4 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| CPU | Intel® Core™ i5-8500 CPU @ 3.00GHz | Intel® Core™ i3-8100 CPU @ 3.60GHz |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Selected CPU Flags | Turbo Boost | |
+--------------------------+-------------------------------------------+--------------------------------------------+
| CPU Thermal Design Power | 65W | 65 W |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Operating System | Ubuntu 20.04.4 LTS | Ubuntu 20.04.1 LTS |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Kernel Version | 5.4.0-113-generic | 5.4.0-52-generic |
+--------------------------+-------------------------------------------+--------------------------------------------+
| BIOS Vendor | American Megatrends Inc. | American Megatrends Inc. |
+--------------------------+-------------------------------------------+--------------------------------------------+
| BIOS Version & Release | 3004; date: 07/12/2021 | F10j; date: 09/16/2020 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Docker Version | 19.03.13 | 20.10.0 |
+--------------------------+-------------------------------------------+--------------------------------------------+
| Network Speed | 40 Gb/s | 40 Gb/s |
+--------------------------+-------------------------------------------+--------------------------------------------+
@endsphinxdirective


@@ -28,7 +28,6 @@ copyright = '2022, Intel®'
author = 'Intel®'
language = 'en'
version_name = '2022.3'
# -- General configuration ---------------------------------------------------
@@ -44,31 +43,12 @@ extensions = [
'cpplexer',
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'openvino_custom_sphinx_sitemap'
'sphinx_sitemap'
]
html_baseurl = 'https://docs.openvino.ai/canonical/'
# -- Sitemap configuration ---------------------------
html_baseurl = 'https://docs.openvino.ai/latest/'
sitemap_url_scheme = "{link}"
site_url = f'https://docs.openvino.ai/{version_name}/'
ov_sitemap_urlset = [
("xmlns", "http://www.sitemaps.org/schemas/sitemap/0.9"),
("xmlns:xsi", "http://www.w3.org/2001/XMLSchema-instance"),
("xmlns:coveo", "https://www.coveo.com/en/company/about-us"),
("xsi:schemaLocation", "http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd")
]
ov_sitemap_meta = [
('coveo:metadata', {
'ovversion': version_name,
})
]
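The `ov_sitemap_urlset` pairs above are attribute/value tuples that the custom sitemap extension is assumed to copy onto the root `<urlset>` element of the generated `sitemap.xml`. A rough, stdlib-only sketch of that mapping (not the actual `openvino_custom_sphinx_sitemap` code, whose internals are an assumption here):

```python
import xml.etree.ElementTree as ET

# Attribute pairs as configured above; assumed to be applied verbatim
# to the root <urlset> element of the generated sitemap.
ov_sitemap_urlset = [
    ("xmlns", "http://www.sitemaps.org/schemas/sitemap/0.9"),
    ("xmlns:xsi", "http://www.w3.org/2001/XMLSchema-instance"),
    ("xmlns:coveo", "https://www.coveo.com/en/company/about-us"),
    ("xsi:schemaLocation", "http://www.sitemaps.org/schemas/sitemap/0.9 "
                           "http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd"),
]

root = ET.Element("urlset", dict(ov_sitemap_urlset))
url = ET.SubElement(root, "url")
# Example entry; the real URLs come from the built documentation pages.
ET.SubElement(url, "loc").text = "https://docs.openvino.ai/2022.3/index.html"

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

With `sitemap_url_scheme = "{link}"`, each `<loc>` is emitted as the page link without extra prefixes.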
# ----------------------------------------------------
html_favicon = '_static/favicon.ico'
autodoc_default_flags = ['members']
@@ -201,6 +181,6 @@ def setup(app):
app.connect('build-finished',replace_index_with_redirect)
app.add_js_file('js/custom.js')
app.add_js_file('js/graphs.js')
app.add_js_file('js/newsletter.js')
app.add_js_file('js/graphs_ov_tf.js')
app.add_js_file('js/open_sidebar.js')


@@ -68,7 +68,7 @@ OpenVINO provides features to improve your models performance, optimize your
#### Model Compression and Quantization
Use OpenVINO's model compression tools to reduce your model's latency and memory footprint while maintaining good accuracy.
* Tutorial - <a href="notebooks/111-yolov5-quantization-migration-with-output.html">OpenVINO Post-Training Model Quantization</a>
* Tutorial - <a href="notebooks/111-detection-quantization-with-output.html">OpenVINO Post-Training Model Quantization</a>
* Tutorial - <a href="notebooks/305-tensorflow-quantization-aware-training-with-output.html">Quantization-Aware Training in TensorFlow with OpenVINO NNCF</a>
* Tutorial - <a href="notebooks/302-pytorch-quantization-aware-training-with-output.html">Quantization-Aware Training in PyTorch with NNCF</a>
* <a href="notebooks/openvino_docs_model_optimization_guide.html">Model Optimization Guide</a>


@@ -378,7 +378,6 @@ The following two examples show how to run the same sample using GPU or MYRIAD a
#### Running Inference on MYRIAD
> **NOTE**: Running inference on VPU devices (Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires [additional hardware configuration steps](../install_guides/configurations-for-ncs2.md), as described earlier on this page.
@sphinxdirective


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d86125db1e295334c04e92d0645c773f679d21bf52e25dce7c887fdf972b7a28
size 19154


@@ -7,5 +7,5 @@ OpenVINO™ Documentation
Install <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html>
Blog <https://blog.openvino.ai/>
Forum <https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit>
Support <https://www.intel.com/content/www/us/en/support/products/96066/software/development-software/openvino-toolkit.html>
Training <https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/learn/certification.html>
GitHub <https://github.com/openvinotoolkit>


@@ -1,7 +1,5 @@
# Configurations for IEI Mustang-V100-MX8-R10 Card {#openvino_docs_install_guides_movidius_setup_guide}
@sphinxdirective
.. note:: These steps are only required for **IEI Mustang-V100-MX8-R10** card. **IEI Mustang-V100-MX8-R11** card doesn't require any additional steps and it's completely configured using the :doc:`general guidance <openvino_docs_install_guides_installing_openvino_ivad_vpu>`


@@ -5,114 +5,58 @@
.. _gpu guide:
To use the OpenVINO™ GPU plug-in and transfer the inference to the graphics of the Intel® processor (GPU), the Intel® graphics driver must be properly configured on the system.
@endsphinxdirective
Linux
##########
If you intend to use the OpenVINO GPU plugin and offload network inference to an Intel® graphics processor, the Intel® Graphics Driver must be properly configured on your system.
To use a GPU device for OpenVINO inference, you must install OpenCL runtime packages.
If it is already installed, and you want to keep it, you can skip the installation steps.
If you are using a discrete GPU (for example Arc 770), you must also be using a supported Linux kernel as per `documentation. <https://dgpu-docs.intel.com/driver/kernel-driver-types.html>`__
## Linux
- For Arc GPU, kernel 6.2 or higher is recommended.
- For Max and Flex GPU, or Arc with kernel version lower than 6.2, you must also install the ``intel-i915-dkms`` and ``xpu-smi`` kernel modules as described in the installation documentation for `Max/Flex <https://dgpu-docs.intel.com/driver/installation.html>`__ or `Arc. <https://dgpu-docs.intel.com/driver/client/overview.html>`__
To install the latest available **Intel® Graphics Compute Runtime for OpenCL™** for your OS, see the [Install Guides](https://github.com/intel/compute-runtime/releases/latest).
Below are the instructions on how to install the OpenCL packages on supported Linux distributions. These instructions install the `Intel(R) Graphics Compute Runtime for oneAPI Level Zero and OpenCL(TM) Driver <https://github.com/intel/compute-runtime/releases/tag/23.22.26516.18>`__ and its dependencies:
> **NOTE**: If you use Red Hat 8, install the OpenCL library as a prerequisite with the following command:
> ```sh
> rpm -ivh http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/ocl-icd-2.2.12-1.el8.x86_64.rpm
> ```
- `Intel Graphics Memory Management Library <https://github.com/intel/gmmlib>`__
- `Intel® Graphics Compiler for OpenCL™ <https://github.com/intel/intel-graphics-compiler>`__
- `OpenCL ICD loader package <https://github.com/KhronosGroup/OpenCL-ICD-Loader>`__
> **NOTE**: For instructions specific to discrete graphics platforms, refer to [the dgpu guide](https://dgpu-docs.intel.com/installation-guides/index.html) (Intel® Arc™ A-Series Graphics, Intel® Data Center GPU Flex Series, Intel® Data Center GPU MAX Series, Intel® processor graphics Gen12, and Intel® Iris Xe MAX codename DG1).
.. tab-set::
You may consider installing one of the earlier versions of the driver, based on your particular setup needs.
.. tab-item:: Ubuntu 22.04 LTS
:sync: ubuntu-22
It is recommended that you refer to the [Intel® Graphics Compute Runtime GitHub page](https://github.com/intel/compute-runtime/) for instructions and recommendations on GPU driver installation specific to particular releases, including the list of supported hardware platforms.
Download and install the `deb` packages published `here <https://github.com/intel/compute-runtime/releases/latest>`__ and install the apt package `ocl-icd-libopencl1` with the OpenCL ICD loader.
Alternatively, you can add the apt repository by following the `installation guide <https://dgpu-docs.intel.com/driver/installation.html#ubuntu-install-steps>`__. Then install the `ocl-icd-libopencl1`, `intel-opencl-icd`, `intel-level-zero-gpu` and `level-zero` apt packages:
.. code-block:: sh
apt-get install -y ocl-icd-libopencl1 intel-opencl-icd intel-level-zero-gpu level-zero
.. tab-item:: Ubuntu 20.04 LTS
:sync: ubuntu-20
Ubuntu 20.04 LTS is not updated with the latest driver versions. You can install driver versions up to 22.43 from apt:
.. code-block:: sh
apt-get update && apt-get install -y --no-install-recommends curl gpg gpg-agent && \
curl https://repositories.intel.com/graphics/intel-graphics.key | gpg --dearmor --output /usr/share/keyrings/intel-graphics.gpg && \
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/graphics/ubuntu focal-legacy main' | tee /etc/apt/sources.list.d/intel.gpu.focal.list && \
apt-get update
apt-get update && apt-get install -y --no-install-recommends intel-opencl-icd intel-level-zero-gpu level-zero
Alternatively, download an older `deb` version from `here <https://github.com/intel/compute-runtime/releases>`__. Note that older driver versions might not include some bug fixes and might not be supported on the latest platforms. Check the supported hardware for the version you are installing.
.. tab-item:: RedHat UBI 8
:sync: redhat-8
Follow the `guide <https://dgpu-docs.intel.com/driver/installation.html#rhel-install-steps>`__ to add Yum repository.
Install the following packages:
.. code-block:: sh
yum install intel-opencl level-zero intel-level-zero-gpu intel-igc-core intel-igc-cm intel-gmmlib intel-ocloc
Install the OpenCL ICD Loader via:
.. code-block:: sh
rpm -ivh http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/Packages/ocl-icd-2.2.12-1.el8.x86_64.rpm
@sphinxdirective
.. _gpu guide windows:
Windows
##########
@endsphinxdirective
To install the Intel Graphics Driver for Windows on your system, follow the `driver installation guide <https://www.intel.com/content/www/us/en/support/articles/000005629/graphics.html>`_.
## Windows
To install the Intel Graphics Driver for Windows on your hardware, follow the [instructions](https://www.intel.com/content/www/us/en/support/articles/000005629/graphics.html).
To check if you have this driver installed:
1. Type **device manager** in your **Search Windows** box and press Enter. The **Device Manager** opens.
2. Click the drop-down arrow to view the **Display adapters**. You can see the adapter that is installed in your computer:
.. image:: _static/images/DeviceManager.PNG
:width: 400
![](../img/DeviceManager.PNG)
3. Right-click the adapter name and select **Properties**.
4. Click the **Driver** tab to see the driver version.
.. image:: _static/images/DeviceDriverVersion.PNG
:width: 400
![](../img/DeviceDriverVersion.PNG)
You are done updating your device driver and ready to use your GPU.
You are done updating your device driver and are ready to use your GPU.
Additional info
####################
## Additional info
For your reference, the following versions of Intel® Graphics Driver were used in the OpenVINO internal validation:
+------------------+-------------------------------------------------------------------------------------------+
| Operating System | Driver version                                                                            |
+==================+===========================================================================================+
| Ubuntu 22.04 | `22.43.24595.30 <https://github.com/intel/compute-runtime/releases/tag/22.43.24595.30>`__ |
+------------------+-------------------------------------------------------------------------------------------+
| Ubuntu 20.04 | `22.35.24055 <https://github.com/intel/compute-runtime/releases/tag/22.35.24055>`__ |
+------------------+-------------------------------------------------------------------------------------------+
| Ubuntu 18.04 | `21.38.21026 <https://github.com/intel/compute-runtime/releases/tag/21.38.21026>`__ |
+------------------+-------------------------------------------------------------------------------------------+
| CentOS 7 | `19.41.14441 <https://github.com/intel/compute-runtime/releases/tag/19.41.14441>`__ |
+------------------+-------------------------------------------------------------------------------------------+
| RHEL 8 | `22.28.23726 <https://github.com/intel/compute-runtime/releases/tag/22.28.23726>`__ |
+------------------+-------------------------------------------------------------------------------------------+
@endsphinxdirective
In the internal OpenVINO validation, the following versions of the Intel Graphics Driver were used:
Operating System | Driver version
--- |-------------------------
Ubuntu 20.04 | [22.35.24055](https://github.com/intel/compute-runtime/releases/tag/22.35.24055)
Ubuntu 18.04 | [21.38.21026](https://github.com/intel/compute-runtime/releases/tag/21.38.21026)
CentOS 7 | [19.41.14441](https://github.com/intel/compute-runtime/releases/tag/19.41.14441)
RHEL 8 | [22.28.23726](https://github.com/intel/compute-runtime/releases/tag/22.28.23726)
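After installing any of the driver versions above, a quick sanity check can confirm that the GPU is visible to the system. This is a hedged sketch: `clinfo` is an optional utility that is not installed by the steps above, so the script falls back to checking for DRI render nodes when it is absent.

```shell
# If clinfo is available, list OpenCL platforms; otherwise check for
# /dev/dri nodes, whose presence suggests a loaded GPU kernel driver.
if command -v clinfo >/dev/null 2>&1; then
    clinfo | grep -i "platform name" || echo "no OpenCL platform found"
else
    ls /dev/dri 2>/dev/null || echo "no /dev/dri nodes; the driver may not be loaded"
fi
```

If neither an OpenCL platform nor a render node shows up, revisit the driver installation steps for your distribution.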
## What's Next?


@@ -1,9 +1,5 @@
# Configurations for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs {#openvino_docs_install_guides_installing_openvino_ivad_vpu}
@sphinxdirective
.. _vpu guide:
@@ -25,7 +21,7 @@ For troubleshooting issues, please see the [Troubleshooting Guide](troubleshooti
For Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, the following additional installation steps are required.
> **NOTE**: If you have installed OpenVINO™ Runtime to the non-default install directory, replace `/opt/intel` with the proper path.
> **NOTE**: If you installed OpenVINO™ Runtime to the non-default install directory, replace `/opt/intel` with the directory in which you installed the software.
1. Set the environment variables:
```sh


@@ -34,13 +34,11 @@ Once you have OpenVINO™ Runtime installed, follow these steps to be able to wo
You've completed all required configuration steps to perform inference on Intel® Neural Compute Stick 2.
@sphinxdirective
.. _ncs guide macos:
@endsphinxdirective
## macOS
These steps are required only if you want to perform inference on Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X VPU.


@@ -160,7 +160,7 @@ Note that the commands are different for a Python installation and a C++ install
@endsphinxdirective
For more details on the openvino-dev PyPI package, see https://pypi.org/project/openvino-dev/2022.3.1/.
For more details on the openvino-dev PyPI package, see https://pypi.org/project/openvino-dev/.
### Step 5. Test the Installation


@@ -10,8 +10,7 @@ Installing OpenVINO Runtime from APT is recommended for C++ developers. If you a
.. warning::
By downloading and using this container and the included software, you agree to the terms and conditions of the `software license agreements <https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf>`__.
By downloading and using this container and the included software, you agree to the terms and conditions of the `software license agreements <https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf>`_.
@endsphinxdirective
@@ -231,8 +230,8 @@ Now that you've installed OpenVINO Runtime, you're ready to run your own machine
* Try the `C++ Quick Start Example <openvino_docs_get_started_get_started_demos.html>`_ for step-by-step instructions on building and running a basic image classification C++ application.
.. image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:width: 400
.. image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:width: 400
* Visit the :ref:`Samples <code samples>` page for other C++ example applications to get you started with OpenVINO, such as:
* `Basic object detection with the Hello Reshape SSD C++ sample <openvino_inference_engine_samples_hello_reshape_ssd_README.html>`_


@@ -2,55 +2,58 @@
@sphinxdirective
.. note::
With the OpenVINO™ 2022.3 release, you can install OpenVINO Runtime on macOS and Linux via `Homebrew <https://brew.sh/>`_. OpenVINO™ Development Tools can be installed via PyPI only. See :ref:`Installing Additional Components <intall additional components brew>` for more information.
Installing OpenVINO Runtime from Homebrew is recommended for C++ developers.
If you work with Python, consider :doc:`installing OpenVINO from PyPI <openvino_docs_install_guides_installing_openvino_pip>`.
See the `Release Notes <https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-2022-3-lts-relnotes.html>`_ for more information on updates in the latest release.
Importantly, Homebrew always distributes the most recent package. You cannot use it to install previous versions of OpenVINO.
The current Homebrew package provides inference support for CPU (under macOS x86_64, macOS arm64, Linux x86_64), as well as GPU (under Linux x86_64 only).
Installing OpenVINO Runtime from Homebrew is recommended for C++ developers. If you are working with Python, the PyPI package has everything needed for Python development and deployment on CPU and GPUs. Visit the :doc:`Install OpenVINO from PyPI <openvino_docs_install_guides_installing_openvino_pip>` page for instructions on how to install OpenVINO Runtime for Python using PyPI.
.. note::
You can use `Homebrew <https://brew.sh/>`__ to install OpenVINO Runtime on macOS and Linux.
OpenVINO™ Development Tools can be installed via PyPI only.
See `Installing Additional Components <#optional-installing-additional-components>`__ for more information.
Only CPU is supported for inference if you install OpenVINO via Homebrew.
.. warning::
By downloading and using this container and the included software, you agree to the terms and conditions of the
`software license agreements <https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf>`_.
By downloading and using this container and the included software, you agree to the terms and conditions of the `software license agreements <https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf>`_.
@endsphinxdirective
.. tab:: System Requirements
## Prerequisites
| Full requirement listing is available in:
| `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`__
### System Requirements
.. tab:: Software Requirements
@sphinxdirective
.. tab:: macOS
Full requirement listing is available on the `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`_
* `Homebrew <https://brew.sh/>`_
* `CMake 3.13 or higher <https://cmake.org/download/>`_ (choose "macOS 10.13 or later"). Add `/Applications/CMake.app/Contents/bin` to path (for default installation).
* `Python 3.7 - 3.10 <https://www.python.org/downloads/mac-osx/>`_ (choose 3.7 - 3.10). Install and add it to path.
* Apple Xcode Command Line Tools. In the terminal, run `xcode-select --install` from any directory to install it.
* (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development)
@endsphinxdirective
.. tab:: Linux
### Software Requirements
* `Homebrew <https://brew.sh/>`_
* `CMake 3.13 or higher, 64-bit <https://cmake.org/download/>`__
* GCC 7.5.0 (for Ubuntu 18.04) or GCC 9.3.0 (for Ubuntu 20.04)
* `Python 3.7 - 3.10, 64-bit <https://www.python.org/downloads/>`__
@sphinxdirective
Installing OpenVINO Runtime
################################
.. tab:: macOS
* `Homebrew <https://brew.sh/>`_
* `CMake 3.13 or higher <https://cmake.org/download/>`_ (choose "macOS 10.13 or later"). Add `/Applications/CMake.app/Contents/bin` to path (for default installation).
* `Python 3.7 - 3.10 <https://www.python.org/downloads/mac-osx/>`_ (choose 3.7 - 3.10). Install and add it to path.
* Apple Xcode Command Line Tools. In the terminal, run `xcode-select --install` from any directory to install it.
* (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development)
1. Make sure that you have installed Homebrew on your system. If not, follow the instructions on `the Homebrew website <https://brew.sh/>`_ to install and configure it.
.. tab:: Linux
* `Homebrew <https://brew.sh/>`_
* `CMake 3.13 or higher, 64-bit <https://cmake.org/download/>`_
* GCC 7.5.0 (for Ubuntu 18.04) or GCC 9.3.0 (for Ubuntu 20.04)
* `Python 3.7 - 3.10, 64-bit <https://www.python.org/downloads/>`_
@endsphinxdirective
## Installing OpenVINO Runtime
@sphinxdirective
1. Make sure that you have installed HomeBrew on your system. If not, follow the instructions on `the Homebrew website <https://brew.sh/>`_ to install and configure it.
2. Open a command prompt terminal window, and run the following command to install OpenVINO Runtime:
@@ -58,39 +61,27 @@ Installing OpenVINO Runtime
brew install openvino
3. Check if the installation was successful by listing all Homebrew packages:
.. code-block:: sh
brew list
Congratulations, you've finished the installation!
.. _intall additional components brew:
@endsphinxdirective
## (Optional) Installing Additional Components
(Optional) Installing Additional Components
#############################################
@sphinxdirective
OpenVINO Development Tools is a set of utilities for working with OpenVINO and OpenVINO models. It provides tools like Model Optimizer, Benchmark Tool, Post-Training Optimization Tool, and Open Model Zoo Downloader. If you installed OpenVINO Runtime using Homebrew, OpenVINO Development Tools must be installed separately.
OpenVINO Development Tools is a set of utilities for working with OpenVINO and OpenVINO models.
It provides tools like Model Optimizer, Benchmark Tool, Post-Training Optimization Tool, and Open Model Zoo Downloader.
If you installed OpenVINO Runtime using Homebrew, OpenVINO Development Tools must be installed separately.
See the **For C++ Developers** section on the :doc:`Install OpenVINO Development Tools <openvino_docs_install_guides_install_dev_tools>` page for instructions.
OpenCV is necessary to run demos from Open Model Zoo (OMZ). Some OpenVINO samples can also extend their capabilities when compiled with OpenCV as a dependency. To install OpenCV for OpenVINO, see the `instructions on GitHub <https://github.com/opencv/opencv/wiki/BuildOpenCV4OpenVINO>`__.
See **For C++ Developers** section on the :doc:`Install OpenVINO Development Tools <openvino_docs_install_guides_install_dev_tools>` page for instructions.
OpenCV is necessary to run demos from Open Model Zoo (OMZ). Some OpenVINO samples can also extend their capabilities when compiled with OpenCV as a dependency. To install OpenCV for OpenVINO, see the `instructions on GitHub <https://github.com/opencv/opencv/wiki/BuildOpenCV4OpenVINO>`_.
@endsphinxdirective
## Uninstalling OpenVINO
To uninstall OpenVINO via Homebrew, use the following command:
To uninstall OpenVINO via Homebrew, use the following command:
```sh
brew uninstall openvino
```
@@ -106,7 +97,7 @@ Now that you've installed OpenVINO Runtime, you can try the following things:
* See pre-trained deep learning models in our :doc:`Open Model Zoo <model_zoo>`.
* Learn more about :doc:`Inference with OpenVINO Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
* See sample applications in :doc:`OpenVINO toolkit Samples Overview <openvino_docs_OV_UG_Samples_Overview>`.
* Check out the OpenVINO product home page: https://software.intel.com/en-us/openvino-toolkit.
* Take a glance at the OpenVINO product home page: https://software.intel.com/en-us/openvino-toolkit.
@endsphinxdirective


@@ -1,88 +1,10 @@
# Install OpenVINO™ Runtime from Conda Forge {#openvino_docs_install_guides_installing_openvino_conda}
# Install OpenVINO™ Runtime from Anaconda Cloud
@sphinxdirective
* [Install OpenVINO Runtime from an Archive File](installing-openvino-from-archive-linux.md)
* [Install OpenVINO from PyPI](installing-openvino-pip.md)
* [Install OpenVINO with Docker](installing-openvino-docker-linux.md)
* [Build From Source](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode)
.. note::
Installing OpenVINO Runtime from Conda Forge is recommended for C++ developers, as it provides only the C++ Runtime API.
If you work with Python, consider :doc:`installing OpenVINO from PyPI <openvino_docs_install_guides_installing_openvino_pip>`
.. tab:: System Requirements
| Full requirement listing is available in:
| `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`__
.. tab:: Software
There are many ways to work with Conda. Before you proceed, learn more about it on the
`Anaconda distribution page <https://www.anaconda.com/products/individual/>`__
Installing OpenVINO Runtime with Anaconda Package Manager
############################################################
1. Set up the Anaconda environment (Python 3.7 used as an example):
.. code-block:: sh
conda create --name py37 python=3.7
.. code-block:: sh
conda activate py37
2. Update it to the latest version:
.. code-block:: sh
conda update --all
3. Install the OpenVINO Runtime package:
.. code-block:: sh
conda install -c conda-forge openvino=2022.3.1
Congratulations! You have finished installing OpenVINO Runtime.
Uninstalling OpenVINO™ Runtime
###########################################################
Once OpenVINO Runtime is installed via Conda, you can remove it using the following command,
with the proper OpenVINO version number:
.. code-block:: sh
conda remove openvino=2022.3.1
What's Next?
############################################################
Now that you've installed OpenVINO Runtime, you are ready to run your own machine learning applications!
To learn more about how to integrate a model in OpenVINO applications, try out some tutorials and sample applications.
Try the :doc:`C++ Quick Start Example <openvino_docs_get_started_get_started_demos>` for step-by-step instructions
on building and running a basic image classification C++ application.
.. image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:width: 400
Visit the :doc:`Samples <openvino_docs_OV_UG_Samples_Overview>` page for other C++ example applications to get you started with OpenVINO, such as:
* `Basic object detection with the Hello Reshape SSD C++ sample <openvino_inference_engine_samples_hello_reshape_ssd_README.html>`__
* `Automatic speech recognition C++ sample <openvino_inference_engine_samples_speech_sample_README.html>`__
Additional Resources
###########################################################
* `OpenVINO Runtime Conda Forge <https://anaconda.org/conda-forge/openvino>`__
* :doc:`OpenVINO™ Toolkit Samples Overview <openvino_docs_OV_UG_Samples_Overview>`
* `OpenVINO Installation Selector Tool <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html>`__
@endsphinxdirective
The other installation methods are temporarily unavailable.
For a full selection of distribution channels, see the [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html).
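Condensed, the Conda workflow above reads as a short transcript (the environment name ``py37`` and the version pin come from the steps; the commands assume an existing Conda installation and, for ``conda activate``, an initialized shell hook):

```sh
# Create and activate an environment, then install the pinned runtime
conda create -y --name py37 python=3.7
conda activate py37          # run interactively; scripts need the conda shell hook
conda update -y --all
conda install -y -c conda-forge openvino=2022.3.1

# Later, to remove the package again:
conda remove -y openvino=2022.3.1
```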


@@ -26,11 +26,11 @@ This guide provides steps on creating a Docker image with Intel® Distribution o
To launch a Linux image on WSL2 when trying to run inferences on a GPU, make sure that the following requirements are met:
- Only Windows 10 with 21H2 update or above installed and Windows 11 are supported.
- Intel GPU driver for Windows, version 30.0.100.9684 or newer needs to be installed. For more details, refer to
`this article at intel.com <https://www.intel.com/content/www/us/en/artificial-intelligence/harness-the-power-of-intel-igpu-on-your-machine.html#articleparagraph_983312434>`__.
- Currently, the Docker images contain a preinstalled recommended version of OpenCL Runtime with WSL2 support.
- Intel GPU driver on the Windows host, version 30.0.100.9684 or above, needs to be installed. See `this article`_ for more details.
- From 2022.1 release, the Docker images contain preinstalled recommended version of OpenCL Runtime with WSL2 support.
.. _this article: https://www.intel.com/content/www/us/en/artificial-intelligence/harness-the-power-of-intel-igpu-on-your-machine.html#articleparagraph_983312434
@endsphinxdirective
## Installation Flow
@@ -63,20 +63,60 @@ You can also try our [Tutorials](https://github.com/openvinotoolkit/docker_ci/tr
## <a name="configure-image-docker-linux"></a>Configuring the Image for Different Devices
If you want to run inference on a CPU or Intel® Neural Compute Stick 2, no extra configuration is needed. Go to <a href="#run-image-docker-linux">Running the image on different devices</a> for the next step.
If you want to run inferences on a CPU or Intel® Neural Compute Stick 2, no extra configuration is needed. Go to <a href="#run-image-docker-linux">Running the image on different devices</a> for the next step.
### Configuring Docker Image for GPU
@sphinxdirective
If you want to run inference on a GPU, follow the instructions provided in the guide on
:doc:`Configuration for Intel GPU <openvino_docs_install_guides_configurations_for_intel_gpu>`
@endsphinxdirective
By default, the distributed Docker image for OpenVINO has the recommended version of Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL Driver for the operating system installed inside. If you want to build an image with a custom version of OpenCL Runtime included, you need to modify the Dockerfile using the lines below (the 19.41.14441 version is used as an example) and build the image manually:
**Ubuntu 18.04/20.04**:
```sh
WORKDIR /tmp/opencl
RUN useradd -ms /bin/bash -G video,users openvino && \
chown openvino -R /home/openvino
RUN apt-get update && \
apt-get install -y --no-install-recommends ocl-icd-libopencl1 && \
rm -rf /var/lib/apt/lists/* && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-gmmlib_19.3.2_amd64.deb" --output "intel-gmmlib_19.3.2_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-igc-core_1.0.2597_amd64.deb" --output "intel-igc-core_1.0.2597_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-igc-opencl_1.0.2597_amd64.deb" --output "intel-igc-opencl_1.0.2597_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-opencl_19.41.14441_amd64.deb" --output "intel-opencl_19.41.14441_amd64.deb" && \
curl -L "https://github.com/intel/compute-runtime/releases/download/19.41.14441/intel-ocloc_19.41.14441_amd64.deb" --output "intel-ocloc_19.41.14441_amd64.deb" && \
dpkg -i /tmp/opencl/*.deb && \
ldconfig && \
rm -rf /tmp/opencl
```
**RHEL 8**:
```sh
WORKDIR /tmp/opencl
RUN useradd -ms /bin/bash -G video,users openvino && \
chown openvino -R /home/openvino
RUN groupmod -g 44 video
RUN yum update -y && yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm && \
yum update -y && yum install -y ocl-icd ocl-icd-devel && \
yum clean all && rm -rf /var/cache/yum && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-gmmlib-19.3.2-1.el7.x86_64.rpm/download -o intel-gmmlib-19.3.2-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-gmmlib-devel-19.3.2-1.el7.x86_64.rpm/download -o intel-gmmlib-devel-19.3.2-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-igc-core-1.0.2597-1.el7.x86_64.rpm/download -o intel-igc-core-1.0.2597-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-igc-opencl-1.0.2597-1.el7.x86_64.rpm/download -o intel-igc-opencl-1.0.2597-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-igc-opencl-devel-1.0.2597-1.el7.x86_64.rpm/download -o intel-igc-opencl-devel-1.0.2597-1.el7.x86_64.rpm && \
curl -L https://sourceforge.net/projects/intel-compute-runtime/files/19.41.14441/centos-7/intel-opencl-19.41.14441-1.el7.x86_64.rpm/download -o intel-opencl-19.41.14441-1.el7.x86_64.rpm && \
rpm -ivh ${TEMP_DIR}/*.rpm && \
ldconfig && \
rm -rf ${TEMP_DIR} && \
yum remove -y epel-release
```
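With either snippet merged into a local Dockerfile, the customized image can be rebuilt manually, for example (the tag name is an illustration, not from the original docs):

```sh
# Build from the directory containing the modified Dockerfile
docker build -t openvino-custom-opencl:19.41.14441 .
```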
### <a name="set-up-hddldaemon"></a>Configuring Docker Image for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
> **NOTE**: When building the Docker image, create a user in the Dockerfile that has the same UID (User Identifier) and GID (Group Identifier) as the user that runs hddldaemon on the host, and then run the application in the Docker image as this user. This step is necessary to run the container as a non-root user.
To use the Docker container for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, do the following:
To use the Docker container for inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, do the following steps:
1. Set up the environment on the host machine to be used for running Docker. It is required to execute `hddldaemon`, which is responsible for communication between the HDDL plugin and the board. To learn how to set up the environment (the OpenVINO package or HDDL package must be pre-installed), see [Configuration guide for HDDL device](https://github.com/openvinotoolkit/docker_ci/blob/master/install_guide_vpu_hddl.md) or [Configurations for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs on Linux](configurations-for-ivad-vpu.md).
2. Run `hddldaemon` on the host in a separate terminal session using the following command:


@@ -4,7 +4,7 @@ With the OpenVINO™ 2022.3 release, you can download and use archive files to i
Installing OpenVINO Runtime from archive files is recommended for C++ developers. If you are working with Python, the PyPI package has everything needed for Python development and deployment on CPU and GPUs. See the [Install OpenVINO from PyPI](installing-openvino-pip.md) page for instructions on how to install OpenVINO Runtime for Python using PyPI.
> **NOTE**: Since the OpenVINO™ 2022.1 release, the following development tools: Model Optimizer, Post-Training Optimization Tool, Model Downloader and other Open Model Zoo tools, Accuracy Checker, and Annotation Converter can be installed via [pypi.org](https://pypi.org/project/openvino-dev/2022.3.1/) only.
> **NOTE**: Since the OpenVINO™ 2022.1 release, the following development tools: Model Optimizer, Post-Training Optimization Tool, Model Downloader and other Open Model Zoo tools, Accuracy Checker, and Annotation Converter can be installed via [pypi.org](https://pypi.org/project/openvino-dev/) only.
See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-2022-3-lts-relnotes.html) for more information on updates in the latest release.
@@ -14,7 +14,7 @@ See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articl
.. tab:: System Requirements
| Full requirement listing is available in:
| `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`__
| `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`_
.. tab:: Processor Notes
@@ -25,8 +25,8 @@ See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articl
.. tab:: Software
* `CMake 3.13 or higher, 64-bit <https://cmake.org/download/>`__
* `Python 3.7 - 3.10, 64-bit <https://www.python.org/downloads/>`__
* `CMake 3.13 or higher, 64-bit <https://cmake.org/download/>`_
* `Python 3.7 - 3.10, 64-bit <https://www.python.org/downloads/>`_
* GCC:
.. tab:: Ubuntu 18.04
@@ -84,66 +84,52 @@ See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articl
cd <user_home>/Downloads
4. Download the `OpenVINO Runtime archive file for your system <https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/linux/>`__, extract the files, rename the extracted folder and move it to the desired path:
4. Download the `OpenVINO Runtime archive file for your system <https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/>`_, extract the files, rename the extracted folder and move it to the desired path:
.. tab:: Ubuntu 20.04
.. code-block:: sh
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/linux/l_openvino_toolkit_ubuntu20_2022.3.1.9227.cf2c7da5689_x86_64.tgz --output openvino_2022.3.1.tgz
tar -xf openvino_2022.3.1.tgz
sudo mv l_openvino_toolkit_ubuntu20_2022.3.1.9227.cf2c7da5689_x86_64 /opt/intel/openvino_2022.3.1
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_ubuntu20_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
tar -xf openvino_2022.3.0.tgz
sudo mv l_openvino_toolkit_ubuntu20_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
.. tab:: Ubuntu 18.04
.. code-block:: sh
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/linux/l_openvino_toolkit_ubuntu18_2022.3.1.9227.cf2c7da5689_x86_64.tgz --output openvino_2022.3.1.tgz
tar -xf openvino_2022.3.1.tgz
sudo mv l_openvino_toolkit_ubuntu18_2022.3.1.9227.cf2c7da5689_x86_64 /opt/intel/openvino_2022.3.1
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
tar -xf openvino_2022.3.0.tgz
sudo mv l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
.. tab:: RHEL 8
.. code-block:: sh
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/linux/l_openvino_toolkit_rhel8_2022.3.1.9227.cf2c7da5689_x86_64.tgz --output openvino_2022.3.1.tgz
tar -xf openvino_2022.3.1.tgz
sudo mv l_openvino_toolkit_rhel8_2022.3.1.9227.cf2c7da5689_x86_64 /opt/intel/openvino_2022.3.1
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_rhel8_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
tar -xf openvino_2022.3.0.tgz
sudo mv l_openvino_toolkit_rhel8_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
.. tab:: CentOS 7
.. code-block:: sh
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/linux/l_openvino_toolkit_centos7_2022.3.1.9227.cf2c7da5689_x86_64.tgz --output openvino_2022.3.1.tgz
tar -xf openvino_2022.3.1.tgz
sudo mv l_openvino_toolkit_centos7_2022.3.1.9227.cf2c7da5689_x86_64 /opt/intel/openvino_2022.3.1
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_centos7_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
tar -xf openvino_2022.3.0.tgz
sudo mv l_openvino_toolkit_centos7_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
5. Install required system dependencies on Linux. To do this, OpenVINO provides a script in the extracted installation directory. Run the following command:
.. code-block:: sh
cd /opt/intel/openvino_2022.3.1
cd /opt/intel/openvino_2022.3.0/
sudo -E ./install_dependencies/install_openvino_dependencies.sh
6. (Optional) Install *numpy* Python Library:
.. note::
This step is required only when you decide to use the Python API.
You can use the ``requirements.txt`` file from the ``/opt/intel/openvino_2022.3.1/python/python.<x>`` folder:
6. For simplicity, it is useful to create a symbolic link as below:
.. code-block:: sh
cd /opt/intel/openvino_2022.3.1
python3 -m pip install -r ./python/python3.<x>/requirements.txt
7. For simplicity, it is useful to create a symbolic link as below:
.. code-block:: sh
cd /opt/intel
sudo ln -s openvino_2022.3.1 openvino_2022
sudo ln -s openvino_2022.3.0 openvino_2022
.. note::
If you have already installed a previous release of OpenVINO 2022, a symbolic link to the ``openvino_2022`` folder may already exist. Unlink the previous link with ``sudo unlink openvino_2022``, and then re-run the command above.
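The relink procedure in the note can be rehearsed end-to-end in a scratch directory (``/tmp/demo_intel`` stands in for ``/opt/intel``, so no sudo is needed; version folders are placeholders):

```sh
# Simulate two installed releases plus a stale link from the older one
mkdir -p /tmp/demo_intel/openvino_2022.3.0 /tmp/demo_intel/openvino_2022.3.1
cd /tmp/demo_intel
ln -s openvino_2022.3.0 openvino_2022   # link left over from a previous release

# The documented fix: unlink, then re-create the link for the new version
unlink openvino_2022                    # sudo unlink openvino_2022 on a real install
ln -s openvino_2022.3.1 openvino_2022   # sudo ln -s ... on a real install
readlink openvino_2022                  # prints: openvino_2022.3.1
```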


@@ -6,19 +6,19 @@ Installing OpenVINO Runtime from archive files is recommended for C++ developers
See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-2022-3-lts-relnotes.html) for more information on updates in the latest release.
> **NOTE**: Since the OpenVINO™ 2022.1 release, the following development tools: Model Optimizer, Post-Training Optimization Tool, Model Downloader and other Open Model Zoo tools, Accuracy Checker, and Annotation Converter can be installed via [pypi.org](https://pypi.org/project/openvino-dev/2022.3.1/) only.
> **NOTE**: Since the OpenVINO™ 2022.1 release, the following development tools: Model Optimizer, Post-Training Optimization Tool, Model Downloader and other Open Model Zoo tools, Accuracy Checker, and Annotation Converter can be installed via [pypi.org](https://pypi.org/project/openvino-dev/) only.
@sphinxdirective
.. tab:: System Requirements
| Full requirement listing is available in:
| `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`__
| `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`_
.. tab:: Software Requirements
* `CMake 3.13 or higher <https://cmake.org/download/>`__ (choose "macOS 10.13 or later"). Add `/Applications/CMake.app/Contents/bin` to path (for default install).
* `Python 3.7 - 3.10 <https://www.python.org/downloads/mac-osx/>`__ (choose 3.7 - 3.10). Install and add to path.
* `CMake 3.13 or higher <https://cmake.org/download/>`_ (choose "macOS 10.13 or later"). Add `/Applications/CMake.app/Contents/bin` to path (for default install).
* `Python 3.7 - 3.10 <https://www.python.org/downloads/mac-osx/>`_ (choose 3.7 - 3.10). Install and add to path.
* Apple Xcode Command Line Tools. In the terminal, run `xcode-select --install` from any directory
* (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development)
@@ -47,42 +47,29 @@ See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articl
cd <user_home>/Downloads
4. Download the `OpenVINO Runtime archive file for macOS <https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/macos/>`__, extract the files, rename the extracted folder and move it to the desired path:
4. Download the `OpenVINO Runtime archive file for macOS <https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/macos/>`_, extract the files, rename the extracted folder and move it to the desired path:
.. tab:: x86, 64-bit
.. code-block:: sh
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/macos/m_openvino_toolkit_macos_10_15_2022.3.1.9227.cf2c7da5689_x86_64.tgz --output openvino_2022.3.1.tgz
tar -xf openvino_2022.3.1.tgz
sudo mv m_openvino_toolkit_macos_10_15_2022.3.1.9227.cf2c7da5689_x86_64 /opt/intel/openvino_2022.3.1
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/macos/m_openvino_toolkit_macos_10_15_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
tar -xf openvino_2022.3.0.tgz
sudo mv m_openvino_toolkit_macos_10_15_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
.. tab:: ARM, 64-bit
.. code-block:: sh
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/macos/m_openvino_toolkit_macos_11_0_2022.3.1.9227.cf2c7da5689_arm64.tgz --output openvino_2022.3.1.tgz
tar -xf openvino_2022.3.1.tgz
sudo mv m_openvino_toolkit_macos_11_0_2022.3.1.9227.cf2c7da5689_arm64 /opt/intel/openvino_2022.3.1
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/macos/m_openvino_toolkit_macos_11_0_2022.3.0.9052.9752fafe8eb_arm64.tgz --output openvino_2022.3.0.tgz
tar -xf openvino_2022.3.0.tgz
sudo mv m_openvino_toolkit_macos_11_0_2022.3.0.9052.9752fafe8eb_arm64 /opt/intel/openvino_2022.3.0
5. (Optional) Install *numpy* Python Library:
.. note::
This step is required only when you decide to use the Python API.
You can use the ``requirements.txt`` file from the ``/opt/intel/openvino_2022.3.1/python/python.<x>`` folder:
5. For simplicity, it is useful to create a symbolic link as below:
.. code-block:: sh
cd /opt/intel/openvino_2022.3.1
python3 -m pip install -r ./python/python3.<x>/requirements.txt
6. For simplicity, it is useful to create a symbolic link as below:
.. code-block:: sh
sudo ln -s openvino_2022.3.1 openvino_2022
sudo ln -s openvino_2022.3.0 openvino_2022
.. note::
@@ -135,21 +122,21 @@ Now that you've installed OpenVINO Runtime, you're ready to run your own machine
Visit the :ref:`Tutorials <notebook tutorials>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:
* `OpenVINO Python API Tutorial <https://docs.openvino.ai/2022.3/notebooks/002-openvino-api-with-output.html>`__
* `Basic image classification program with Hello Image Classification <https://docs.openvino.ai/2022.3/notebooks/001-hello-world-with-output.html>`__
* `Convert a PyTorch model and use it for image background removal <https://docs.openvino.ai/2022.3/notebooks/205-vision-background-removal-with-output.html>`__
* `OpenVINO Python API Tutorial <https://docs.openvino.ai/2022.3/notebooks/002-openvino-api-with-output.html>`_
* `Basic image classification program with Hello Image Classification <https://docs.openvino.ai/2022.3/notebooks/001-hello-world-with-output.html>`_
* `Convert a PyTorch model and use it for image background removal <https://docs.openvino.ai/2022.3/notebooks/205-vision-background-removal-with-output.html>`_
.. tab:: Get started with C++
Try the `C++ Quick Start Example <openvino_docs_get_started_get_started_demos.html>`__ for step-by-step instructions on building and running a basic image classification C++ application.
Try the `C++ Quick Start Example <openvino_docs_get_started_get_started_demos.html>`_ for step-by-step instructions on building and running a basic image classification C++ application.
.. image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:width: 400
Visit the :ref:`Samples <code samples>` page for other C++ example applications to get you started with OpenVINO, such as:
* `Basic object detection with the Hello Reshape SSD C++ sample <openvino_inference_engine_samples_hello_reshape_ssd_README.html>`__
* `Automatic speech recognition C++ sample <openvino_inference_engine_samples_speech_sample_README.html>`__
* `Basic object detection with the Hello Reshape SSD C++ sample <openvino_inference_engine_samples_hello_reshape_ssd_README.html>`_
* `Automatic speech recognition C++ sample <openvino_inference_engine_samples_speech_sample_README.html>`_
@endsphinxdirective
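The download-extract-rename flow in step 4 can be rehearsed with a locally created placeholder archive (no network access; every name below is a stand-in for the real package names):

```sh
# Build a placeholder archive that mimics the downloaded .tgz
mkdir -p /tmp/ov_src/pkg_dir
echo demo > /tmp/ov_src/pkg_dir/README
tar -czf /tmp/ov_demo.tgz -C /tmp/ov_src pkg_dir

# The documented steps: extract, then rename the extracted folder
mkdir -p /tmp/ov_demo
cd /tmp/ov_demo
tar -xf /tmp/ov_demo.tgz        # curl download step omitted
mv pkg_dir openvino_2022.3      # rename before moving to the target path
ls openvino_2022.3              # prints: README
```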


@@ -4,7 +4,7 @@ With the OpenVINO™ 2022.3 release, you can download and use archive files to i
Installing OpenVINO Runtime from archive files is recommended for C++ developers. If you are working with Python, the PyPI package has everything needed for Python development and deployment on CPU and GPUs. See the [Install OpenVINO from PyPI](installing-openvino-pip.md) page for instructions on how to install OpenVINO Runtime for Python using PyPI.
> **NOTE**: Since the OpenVINO™ 2022.1 release, the following development tools: Model Optimizer, Post-Training Optimization Tool, Model Downloader and other Open Model Zoo tools, Accuracy Checker, and Annotation Converter can be installed via [pypi.org](https://pypi.org/project/openvino-dev/2022.3.1/) only.
> **NOTE**: Since the OpenVINO™ 2022.1 release, the following development tools: Model Optimizer, Post-Training Optimization Tool, Model Downloader and other Open Model Zoo tools, Accuracy Checker, and Annotation Converter can be installed via [pypi.org](https://pypi.org/project/openvino-dev/) only.
See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-2022-3-lts-relnotes.html) for more information on updates in the latest release.
@@ -14,7 +14,7 @@ See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articl
.. tab:: System Requirements
| Full requirement listing is available in:
| `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`__
| `System Requirements Page <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`_
.. tab:: Processor Notes
@@ -25,18 +25,18 @@ See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articl
.. tab:: Software
* `Microsoft Visual Studio 2019 with MSBuild <https://visualstudio.microsoft.com/vs/older-downloads/>`__ or `Microsoft Visual Studio 2022 <http://visualstudio.microsoft.com/downloads/>`__
* `CMake 3.14 or higher, 64-bit <https://cmake.org/download/>`__ (optional, only required for building sample applications)
* `Python 3.7 - 3.10, 64-bit <https://www.python.org/downloads/windows/>`__
* `Microsoft Visual Studio 2019 with MSBuild <https://visualstudio.microsoft.com/vs/older-downloads/>`_ or `Microsoft Visual Studio 2022 <http://visualstudio.microsoft.com/downloads/>`_
* `CMake 3.14 or higher, 64-bit <https://cmake.org/download/>`_ (optional, only required for building sample applications)
* `Python 3.7 - 3.10, 64-bit <https://www.python.org/downloads/windows/>`_
.. note::
To install Microsoft Visual Studio 2019, follow the `Microsoft Visual Studio installation guide <https://docs.microsoft.com/en-us/visualstudio/install/install-visual-studio?view=vs-2019>`__. You can choose to download the Community version. During installation in the **Workloads** tab, choose **Desktop development with C++**.
To install Microsoft Visual Studio 2019, follow the `Microsoft Visual Studio installation guide <https://docs.microsoft.com/en-us/visualstudio/install/install-visual-studio?view=vs-2019>`_. You can choose to download the Community version. During installation in the **Workloads** tab, choose **Desktop development with C++**.
.. note::
You can either use ``cmake<version>.msi`` which is the installation wizard or ``cmake<version>.zip`` where you have to go into the ``bin`` folder and then manually add the path to environmental variables.
You can either use `cmake<version>.msi` which is the installation wizard or `cmake<version>.zip` where you have to go into the `bin` folder and then manually add the path to environmental variables.
.. important::
When installing Python, make sure you click the option **Add Python 3.x to PATH** to `add Python <https://docs.python.org/3/using/windows.html#installation-steps>`__ to your ``PATH`` environment variable.
When installing Python, make sure you click the option **Add Python 3.x to PATH** to `add Python <https://docs.python.org/3/using/windows.html#installation-steps>`_ to your `PATH` environment variable.
@endsphinxdirective
@@ -44,76 +44,42 @@ See the [Release Notes](https://www.intel.com/content/www/us/en/developer/articl
### <a name="install-openvino-archive-windows"></a>Step 1: Download and Install OpenVINO Core Components
@sphinxdirective
1. Create an `Intel` folder in the `C:\Program Files (x86)\` directory. Skip this step if the folder already exists.
1. Create an ``Intel`` folder in the ``C:\Program Files (x86)\`` directory. Skip this step if the folder already exists.
You can also do this via command-lines. Open a new command prompt window as administrator by right-clicking **Command Prompt** in the Start menu and selecting **Run as administrator**, and then run the following command:
```sh
mkdir "C:\Program Files (x86)\Intel"
```
> **NOTE**: `C:\Program Files (x86)\Intel` is the recommended folder. You may also use a different path if desired or if you don't have administrator privileges on your computer.
You can also do this via command-lines. Open a new command prompt window as an administrator by right-clicking **Command Prompt** in the Start menu and selecting **Run as administrator**, and then run the following command:
.. code-block:: sh
mkdir "C:\Program Files (x86)\Intel"
.. note::
``C:\Program Files (x86)\Intel`` is the recommended folder. You may also use a different path if desired or if you don't have administrator privileges on your computer.
2. Download the `OpenVINO Runtime archive file for Windows <https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/windows/>`__ to your local ``Downloads`` folder.
2. Download the [OpenVINO Runtime archive file for Windows](https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/windows/) to your local `Downloads` folder.
If you prefer using command-lines, run the following commands in the command prompt window you opened:
.. code-block:: sh
cd <user_home>/Downloads
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/windows/w_openvino_toolkit_windows_2022.3.1.9227.cf2c7da5689_x86_64.zip --output openvino_2022.3.1.zip
```sh
cd <user_home>/Downloads
curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/windows/w_openvino_toolkit_windows_2022.3.0.9052.9752fafe8eb_x86_64.zip --output openvino_2022.3.0.zip
```
> **NOTE**: A `.sha256` file is provided together with the archive file to validate your download process. To do that, download the `.sha256` file from the same repository and run `CertUtil -hashfile openvino_2022.3.0.zip SHA256`. Compare the returned value in the output with what's in the `.sha256` file: if the values are the same, you have downloaded the correct file successfully; if not, create a Support ticket [here](https://www.intel.com/content/www/us/en/support/contact-intel.html).
.. note::
A ``.sha256`` file is provided together with the archive file to validate your download process. To do that, download the ``.sha256`` file from the same repository and run ``CertUtil -hashfile openvino_2022.3.1.zip SHA256``. Compare the returned value in the output with what's in the ``.sha256`` file: if the values are the same, you have downloaded the correct
file successfully; if not, create a Support ticket `here <https://www.intel.com/content/www/us/en/support/contact-intel.html>`__.
3. Use your favorite tool to extract the archive file, rename the extracted folder, and move it to the ``C:\Program Files (x86)\Intel`` directory.
3. Use your favorite tool to extract the archive file, rename the extracted folder, and move it to the `C:\Program Files (x86)\Intel` directory.
To do this step using the command line, run the following commands in the command prompt window you opened:
```sh
tar -xf openvino_2022.3.0.zip
ren w_openvino_toolkit_windows_2022.3.0.9052.9752fafe8eb_x86_64 openvino_2022.3.0
move openvino_2022.3.0 "C:\Program Files (x86)\Intel"
```
.. code-block:: sh
tar -xf openvino_2022.3.1.zip
ren w_openvino_toolkit_windows_2022.3.1.9227.cf2c7da5689_x86_64 openvino_2022.3.1
move openvino_2022.3.1 "C:\Program Files (x86)\Intel"
4. For simplicity, it is useful to create a symbolic link. Open a command prompt window as administrator (see Step 1 for how to do this) and run the following commands:
```sh
cd C:\Program Files (x86)\Intel
mklink /D openvino_2022 openvino_2022.3.0
```
> **NOTE**: If you have already installed a previous release of OpenVINO 2022, a symbolic link to the `openvino_2022` folder may already exist. If you want to override it, navigate to the `C:\Program Files (x86)\Intel` folder and delete the existing linked folder before running the `mklink` command.
6. (Optional) Install *numpy* Python Library:
.. note::
This step is required only when you decide to use Python API.
You can use the ``requirements.txt`` file from the ``C:\Program Files (x86)\Intel\openvino_2022.3.1\python\python.<x>`` folder:
.. code-block:: sh
cd "C:\Program Files (x86)\Intel\openvino_2022.3.1"
python -m pip install -r .\python\python3.<x>\requirements.txt
5. For simplicity, it is useful to create a symbolic link. Open a command prompt window as administrator (see Step 1 for how to do this) and run the following commands:
.. code-block:: sh
cd C:\Program Files (x86)\Intel
mklink /D openvino_2022 openvino_2022.3.0
.. note::
If you have already installed a previous release of OpenVINO 2022, a symbolic link to the ``openvino_2022`` folder may already exist. If you want to override it, navigate to the ``C:\Program Files (x86)\Intel`` folder and delete the existing linked folder before running the ``mklink`` command.
Congratulations, you finished the installation! The ``C:\Program Files (x86)\Intel\openvino_2022`` folder now contains the core components for OpenVINO. If you used a different path in Step 1, you will find the ``openvino_2022`` folder there. The path to the ``openvino_2022`` directory is also referred to as ``<INSTALL_DIR>`` throughout the OpenVINO documentation.
@endsphinxdirective
Congratulations, you finished the installation! The `C:\Program Files (x86)\Intel\openvino_2022` folder now contains the core components for OpenVINO. If you used a different path in Step 1, you will find the `openvino_2022` folder there. The path to the `openvino_2022` directory is also referred to as `<INSTALL_DIR>` throughout the OpenVINO documentation.
### <a name="set-the-environment-variables-windows"></a>Step 2: Configure the Environment

View File

@@ -6,28 +6,22 @@
:maxdepth: 3
:hidden:
Use Archive <openvino_docs_install_guides_installing_openvino_from_archive_linux>
Use PyPI <openvino_docs_install_guides_installing_openvino_pip>
Use APT <openvino_docs_install_guides_installing_openvino_apt>
Use YUM <openvino_docs_install_guides_installing_openvino_yum>
Use Conda Forge <openvino_docs_install_guides_installing_openvino_conda>
Use Homebrew <openvino_docs_install_guides_installing_openvino_brew>
Use Docker <openvino_docs_install_guides_installing_openvino_docker_linux>
If you want to install OpenVINO™ Runtime on your Linux machine, these are your options:
* :doc:`Install OpenVINO Runtime using an Archive File <openvino_docs_install_guides_installing_openvino_from_archive_linux>`
* :doc:`Install OpenVINO using PyPI <openvino_docs_install_guides_installing_openvino_pip>`
* :doc:`Install OpenVINO Runtime using APT <openvino_docs_install_guides_installing_openvino_apt>`
* :doc:`Install OpenVINO Runtime using YUM <openvino_docs_install_guides_installing_openvino_yum>`
* :doc:`Install OpenVINO Runtime using Conda Forge <openvino_docs_install_guides_installing_openvino_conda>`
* :doc:`Install OpenVINO Runtime using Homebrew <openvino_docs_install_guides_installing_openvino_brew>`
* :doc:`Install OpenVINO using Docker <openvino_docs_install_guides_installing_openvino_docker_linux>`
For a full selection of distribution channels, see the
`OpenVINO Installation Selector Tool <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html>`__
From Archive <openvino_docs_install_guides_installing_openvino_from_archive_linux>
From PyPI <openvino_docs_install_guides_installing_openvino_pip>
From APT <openvino_docs_install_guides_installing_openvino_apt>
From YUM <openvino_docs_install_guides_installing_openvino_yum>
Using HomeBrew <openvino_docs_install_guides_installing_openvino_brew>
Using Docker <openvino_docs_install_guides_installing_openvino_docker_linux>
@endsphinxdirective
If you want to install OpenVINO™ Runtime on your Linux machine, there are a few ways to accomplish this. We prepared the following options for you:
* [Install OpenVINO Runtime from an Archive File](installing-openvino-from-archive-linux.md)
* [Install OpenVINO from PyPI](installing-openvino-pip.md)
* [Install OpenVINO Runtime from APT](installing-openvino-apt.md)
* [Install OpenVINO Runtime from YUM](installing-openvino-yum.md)
* [Install OpenVINO Runtime via HomeBrew](installing-openvino-brew.md)
* [Install OpenVINO with Docker](installing-openvino-docker-linux.md)
For a full selection of distribution channels, see the [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)

View File

@@ -6,7 +6,7 @@ Currently only the following ways are provided to install OpenVINO™:
* [Install OpenVINO Runtime from APT](@ref openvino_docs_install_guides_installing_openvino_apt)
* [Install OpenVINO Runtime from YUM](@ref openvino_docs_install_guides_installing_openvino_yum)
* [Install OpenVINO from PyPI](installing-openvino-pip.md)
* [Install OpenVINO Runtime via Homebrew](installing-openvino-brew.md)
* [Install OpenVINO Runtime via HomeBrew](installing-openvino-brew.md)
* [Install OpenVINO with Docker](installing-openvino-docker-linux.md)
* [Build From Source](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode)

View File

@@ -8,14 +8,14 @@
From Archive <openvino_docs_install_guides_installing_openvino_from_archive_macos>
From PyPI <openvino_docs_install_guides_installing_openvino_pip>
Using Homebrew <openvino_docs_install_guides_installing_openvino_brew>
Using HomeBrew <openvino_docs_install_guides_installing_openvino_brew>
@endsphinxdirective
If you want to install OpenVINO™ Runtime on macOS, there are a few ways to accomplish this. We prepared the following options for you:
* [Install OpenVINO Runtime from an Archive File](installing-openvino-from-archive-macos.md)
* [Install OpenVINO Runtime via Homebrew](installing-openvino-brew.md)
* [Install OpenVINO Runtime via HomeBrew](installing-openvino-brew.md)
* [Install OpenVINO from PyPI](installing-openvino-pip.md)
For a full selection of distribution channels, see the [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)

View File

@@ -1,10 +1,10 @@
# Install OpenVINO™ Runtime for macOS from Installer
Currently only the following ways are provided to install OpenVINO™ on macOS:
Currently only the following ways are provided to install OpenVINO™:
* [Install OpenVINO Runtime using an Archive File](installing-openvino-from-archive-macos.md)
* [Install OpenVINO Runtime using Homebrew](installing-openvino-brew.md)
* [Install OpenVINO using PyPI](installing-openvino-pip.md)
* [Install OpenVINO Runtime from an Archive File](installing-openvino-from-archive-macos.md)
* [Install OpenVINO Runtime via HomeBrew](installing-openvino-brew.md)
* [Install OpenVINO from PyPI](installing-openvino-pip.md)
* [Build From Source](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode)
The other installation methods are temporarily unavailable.

View File

@@ -13,7 +13,7 @@
@endsphinxdirective
Intel® Distribution of OpenVINO™ Toolkit is a comprehensive toolkit for developing applications and solutions based on deep learning tasks, such as computer vision, automatic speech recognition, natural language processing, recommendation systems, and more. It provides high-performance and rich deployment options, from edge to cloud. Some of its advantages are:
Intel® Distribution of OpenVINO™ toolkit is a comprehensive toolkit for developing applications and solutions based on deep learning tasks, such as computer vision, automatic speech recognition, natural language processing, recommendation systems, and more. It provides high-performance and rich deployment options, from edge to cloud. Some of its advantages are:
* Enables CNN-based and transformer-based deep learning inference on the edge or cloud.
* Supports various execution modes across Intel® technologies: Intel® CPU, Intel® Integrated Graphics, Intel® Discrete Graphics, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
@@ -21,7 +21,6 @@ Intel® Distribution of OpenVINO™ Toolkit is a comprehensive toolkit for devel
* Compatible with models from a wide variety of frameworks, including TensorFlow, PyTorch, PaddlePaddle, ONNX, and more.
## Install OpenVINO
@sphinxdirective
@@ -34,11 +33,10 @@ Intel® Distribution of OpenVINO™ Toolkit is a comprehensive toolkit for devel
@endsphinxdirective
OpenVINO installation package is distributed as two options: OpenVINO Runtime and OpenVINO Development Tools.
OpenVINO installation package is distributed in two parts: OpenVINO Runtime and OpenVINO Development Tools.
* **OpenVINO Runtime** contains the core set of libraries for running machine learning model inference on processor devices.
* **OpenVINO Development Tools** is a set of utilities for working with OpenVINO and OpenVINO models. It includes the following tools:
- OpenVINO Runtime
- Model Optimizer
- Post-Training Optimization Tool
- Benchmark Tool

View File

@@ -6,15 +6,13 @@ You can install both OpenVINO™ Runtime and OpenVINO Development Tools through
.. note::
* If you install OpenVINO Development Tools, OpenVINO Runtime will also be installed as a dependency, so you don't need to install it separately.
* The PyPI distribution does not include support for VPU, VAD, and HDDL. For information on how to use these devices,
see :doc:`Additional Configurations For Hardware <openvino_docs_install_guides_configurations_header>`
From the 2022.1 release, the OpenVINO Development Tools can only be installed via PyPI. See :doc:`Install OpenVINO Development Tools <openvino_docs_install_guides_install_dev_tools>` for detailed steps.
Installing OpenVINO Runtime
###########################
For system requirements and troubleshooting, see https://pypi.org/project/openvino/2022.3.1/
For system requirements and troubleshooting, see https://pypi.org/project/openvino/
Step 1. Set Up Python Virtual Environment
+++++++++++++++++++++++++++++++++++++++++
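The step above can be sketched as follows (the environment name is illustrative; the wheel install is left commented because it needs network access):

```sh
# Create an isolated environment so OpenVINO's Python dependencies
# don't clash with system packages
python3 -m venv openvino_env
. openvino_env/bin/activate        # on Windows: openvino_env\Scripts\activate
# With the environment active, install the runtime wheel:
# pip install openvino==2022.3.1
```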

View File

@@ -37,21 +37,21 @@
The `/opt/intel` path is the recommended folder path for administrators or root users. If you prefer to install OpenVINO in regular userspace, the recommended path is `/home/<USER>/intel`. You may use a different path if desired.
3. Go to your `~/Downloads` directory and download the OpenVINO Runtime archive file for Debian from the `OpenVINO package repository <https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/linux/>`_.
3. Go to your `~/Downloads` directory and download the OpenVINO Runtime archive file for Debian from the `OpenVINO package repository <https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/>`_.
.. tab:: ARM 32-bit
.. code-block:: sh
cd ~/Downloads/
sudo wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/linux/l_openvino_toolkit_debian9_2022.3.1.9227.cf2c7da5689_armhf.tgz -O openvino_2022.3.1.tgz
sudo wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_debian9_2022.3.0.9052.9752fafe8eb_armhf.tgz -O openvino_2022.3.0.tgz
.. tab:: ARM 64-bit
.. code-block:: sh
cd ~/Downloads/
sudo wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/linux/l_openvino_toolkit_debian9_2022.3.1.9227.cf2c7da5689_arm64.tgz -O openvino_2022.3.1.tgz
sudo wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_debian9_2022.3.0.9052.9752fafe8eb_arm64.tgz -O openvino_2022.3.0.tgz
4. Extract the archive file and move it to the installation folder:
@@ -59,15 +59,15 @@
.. code-block:: sh
sudo tar -xf openvino_2022.3.1.tgz
sudo mv l_openvino_toolkit_debian9_2022.3.1.9227.cf2c7da5689_armhf /opt/intel/openvino_2022.3.1
sudo tar -xf openvino_2022.3.0.tgz
sudo mv l_openvino_toolkit_debian9_2022.3.0.9052.9752fafe8eb_armhf /opt/intel/openvino_2022.3.0
.. tab:: ARM 64-bit
.. code-block:: sh
sudo tar -xf openvino_2022.3.0.tgz
sudo mv l_openvino_toolkit_debian9_2022.3.1.9227.cf2c7da5689_arm64 /opt/intel/openvino_2022.3.1
sudo mv l_openvino_toolkit_debian9_2022.3.0.9052.9752fafe8eb_arm64 /opt/intel/openvino_2022.3.0
5. Install required system dependencies on Linux. To do this, OpenVINO provides a script in the extracted installation directory. Run the following command:
@@ -75,24 +75,11 @@
sudo -E ./install_dependencies/install_openvino_dependencies.sh
6. (Optional) Install *numpy* Python Library:
.. note::
This step is required only when you decide to use Python API.
You can use the ``requirements.txt`` file from the ``/opt/intel/openvino_2022.3.1/python/python.<x>`` folder:
6. For simplicity, it is useful to create a symbolic link as below:
.. code-block:: sh
cd /opt/intel/openvino_2022.3.1
pip3 install -r ./python/python3.<x>/requirements.txt
7. For simplicity, it is useful to create a symbolic link as below:
.. code-block:: sh
sudo ln -s openvino_2022.3.1 openvino_2022
sudo ln -s openvino_2022.3.0 openvino_2022
.. note::

View File

@@ -6,22 +6,16 @@
:maxdepth: 3
:hidden:
Use Archive <openvino_docs_install_guides_installing_openvino_from_archive_windows>
Use PyPI <openvino_docs_install_guides_installing_openvino_pip>
Use Conda Forge <openvino_docs_install_guides_installing_openvino_conda>
Use Docker <openvino_docs_install_guides_installing_openvino_docker_windows>
If you want to install OpenVINO™ Runtime on Windows, you have the following options:
* :doc:`Install OpenVINO Runtime using an Archive File <openvino_docs_install_guides_installing_openvino_from_archive_windows>`
* :doc:`Install OpenVINO Runtime using PyPI <openvino_docs_install_guides_installing_openvino_pip>`
* :doc:`Install OpenVINO Runtime using Conda Forge <openvino_docs_install_guides_installing_openvino_conda>`
* :doc:`Install OpenVINO using Docker <openvino_docs_install_guides_installing_openvino_docker_windows>`
For a full selection of distribution channels,
see the `OpenVINO Installation Selector Tool <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html>`__
From Archive <openvino_docs_install_guides_installing_openvino_from_archive_windows>
From PyPI <openvino_docs_install_guides_installing_openvino_pip>
Using Docker <openvino_docs_install_guides_installing_openvino_docker_windows>
@endsphinxdirective
If you want to install OpenVINO™ Runtime on Windows, you have the following options:
* [Install OpenVINO Runtime from an Archive File](installing-openvino-from-archive-windows.md)
* [Install OpenVINO from PyPI](installing-openvino-pip.md)
* [Install OpenVINO with Docker](installing-openvino-docker-windows.md)
For a full selection of distribution channels, see the [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)

View File

@@ -65,8 +65,6 @@ Follow the [Yocto Project official documentation](https://docs.yoctoproject.org/
CORE_IMAGE_EXTRA_INSTALL:append = " openvino-model-optimizer"
```
## Step 2: Build a Yocto Image with OpenVINO Packages
Run BitBake to build your image with OpenVINO packages. For example, to build the minimal image, run the following command:

View File

@@ -2,9 +2,7 @@
@sphinxdirective
With the OpenVINO™ 2022.3 release, you can install OpenVINO Runtime on Linux using the YUM repository.
OpenVINO™ Development Tools can be installed via PyPI only. See
`Installing Additional Components <#step-3-optional-install-additional-components>`__ for more information.
With the OpenVINO™ 2022.3 release, you can install OpenVINO Runtime on Linux using the YUM repository. OpenVINO™ Development Tools can be installed via PyPI only. See :ref:`Installing Additional Components <intall additional components yum>` for more information.
See the `Release Notes <https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-2022-3-lts-relnotes.html>`_ for more information on updates in the latest release.
@@ -41,18 +39,15 @@ Installing OpenVINO Runtime from YUM is recommended for C++ developers. If you a
* GCC 8.2.0
* `Python 3.7 - 3.10, 64-bit <https://www.python.org/downloads/>`_
@endsphinxdirective
## Install OpenVINO Runtime
### Step 1: Set Up the Repository
@sphinxdirective
Install OpenVINO Runtime
########################
Step 1: Set Up the Repository
+++++++++++++++++++++++++++++
1. Create a YUM repository file (``openvino-2022.repo``) in the ``/tmp`` directory as a normal user:
1. Create a YUM repository file (`openvino-2022.repo`) in the `/tmp` directory as a normal user:
.. code-block:: sh
@@ -66,7 +61,7 @@ Step 1: Set Up the Repository
gpgkey=https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
EOF
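Put together, step 1 amounts to something like the following sketch (the `baseurl` value is illustrative — use the exact value from the official instructions; only the `gpgkey` line is taken from the snippet above):

```sh
# Write the repo definition as a normal user; only the move into
# /etc/yum.repos.d (step 2) needs elevated rights
tee /tmp/openvino-2022.repo > /dev/null <<EOF
[OpenVINO]
name=Intel(R) Distribution of OpenVINO 2022
baseurl=https://yum.repos.intel.com/openvino/2022
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
EOF
```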
2. Move the new ``openvino-2022.repo`` file to the YUM configuration directory, i.e. ``/etc/yum.repos.d``:
2. Move the new `openvino-2022.repo` file to the YUM configuration directory, i.e. `/etc/yum.repos.d`:
.. code-block:: sh
@@ -87,12 +82,13 @@ To list available OpenVINO packages, use the following command:
yum list 'openvino*'
@endsphinxdirective
Step 2: Install OpenVINO Runtime Using the YUM Package Manager
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
### Step 2: Install OpenVINO Runtime Using the YUM Package Manager
Install OpenVINO Runtime
-------------------------
#### Install OpenVINO Runtime
@sphinxdirective
.. tab:: The Latest Version
@@ -116,9 +112,11 @@ Install OpenVINO Runtime
sudo yum install openvino-2022.3.0
@endsphinxdirective
Check for Installed Packages and Version
-----------------------------------------
#### Check for Installed Packages and Version
@sphinxdirective
Run the following command:
@@ -128,24 +126,25 @@ Run the following command:
.. _intall additional components yum:
@endsphinxdirective
### Step 3 (Optional): Install Additional Components
Step 3 (Optional): Install Additional Components
+++++++++++++++++++++++++++++++++++++++++++++++++
@sphinxdirective
OpenVINO Development Tools is a set of utilities for working with OpenVINO and OpenVINO models. It provides tools like Model Optimizer, Benchmark Tool, Post-Training Optimization Tool, and Open Model Zoo Downloader. If you installed OpenVINO Runtime using YUM, OpenVINO Development Tools must be installed separately.
See **For C++ Developers** section on the :doc:`Install OpenVINO Development Tools <openvino_docs_install_guides_install_dev_tools>` page for instructions.
@endsphinxdirective
Step 4 (Optional): Configure Inference on Non-CPU Devices
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
### Step 4 (Optional): Configure Inference on Non-CPU Devices
To enable the toolkit components to use processor graphics (GPU) on your system, follow the steps in [GPU Setup Guide](@ref openvino_docs_install_guides_configurations_for_intel_gpu).
Step 5: Build Samples
++++++++++++++++++++++
### Step 5: Build Samples
@sphinxdirective
To build the C++ or C sample applications for Linux, run the `build_samples.sh` script:
@@ -161,12 +160,13 @@ To build the C++ or C sample applications for Linux, run the `build_samples.sh`
/usr/share/openvino/samples/c/build_samples.sh
@endsphinxdirective
For more information, refer to :doc:`Build the Sample Applications on Linux <openvino_docs_OV_UG_Samples_Overview>`.
For more information, refer to <a href="openvino_docs_OV_UG_Samples_Overview.html#build-samples-linux">Build the Sample Applications on Linux</a>.
Uninstalling OpenVINO Runtime
##############################
### Uninstalling OpenVINO Runtime
@sphinxdirective
To uninstall OpenVINO Runtime via YUM, run the following command based on your needs:
@@ -189,10 +189,11 @@ To uninstall OpenVINO Runtime via YUM, run the following command based on your n
sudo yum autoremove openvino-2022.3.0
@endsphinxdirective
What's Next?
#############
## What's Next?
@sphinxdirective
Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications! Learn more about how to integrate a model in OpenVINO applications by trying out the following tutorials:

View File

@@ -105,7 +105,7 @@ For example, to install and configure the components for working with TensorFlow
## What's in the Package?
> **NOTE**: The openvino-dev package installs [OpenVINO™ Runtime](https://pypi.org/project/openvino/2022.3.1/) as a dependency, which is the engine that runs the deep learning model and includes a set of libraries for an easy inference integration into your applications.
> **NOTE**: The openvino-dev package installs [OpenVINO™ Runtime](https://pypi.org/project/openvino) as a dependency, which is the engine that runs the deep learning model and includes a set of libraries for an easy inference integration into your applications.
**In addition, the openvino-dev package installs the following components by default:**

View File

@@ -46,8 +46,6 @@ Try one of these solutions:
<!-- this part was taken from original configurations-for-ivad-vpu.md -->
### Unable to run inference with the MYRIAD Plugin after running with the HDDL Plugin
Running inference with the MYRIAD Plugin after running with the HDDL Plugin fails with the following error:

View File

@@ -8,9 +8,7 @@ repo_owner = "openvinotoolkit"
repo_name = "openvino_notebooks"
repo_branch = "tree/main"
artifacts_link = "http://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20230517220809/dist/rst_files/"
artifacts_link = "http://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20230309220806/dist/rst_files/"
blacklisted_extensions = ['.xml', '.bin']
@@ -36,7 +34,7 @@ To run without installing anything, click the launch binder button.
.. |github_link| raw:: html
<a href="https://github.com/{{ owner }}/{{ repo }}/{{ branch }}/{{ folder }}/{{ notebook }}" target="_blank"><img src="https://badgen.net/badge/icon/github?icon=github&label" alt="Github"></a>
<a href="https://github.com/{{ owner }}/{{ repo }}" target="_blank"><img src="https://badgen.net/badge/icon/github?icon=github&label" alt="Github"></a>
\n
"""
@@ -52,7 +50,7 @@ See the |installation_link| for instructions to run this tutorial locally on Win
.. |github_link| raw:: html
<a href="https://github.com/{{ owner }}/{{ repo }}/{{ branch }}/{{ folder }}/{{ notebook }}" target="_blank"><img src="https://badgen.net/badge/icon/github?icon=github&label" alt="Github"></a>
<a href="https://github.com/{{ owner }}/{{ repo }}" target="_blank"><img src="https://badgen.net/badge/icon/github?icon=github&label" alt="Github"></a>
\n
"""

View File

@@ -16,7 +16,6 @@ from consts import (
no_binder_template,
repo_directory,
repo_name,
repo_branch,
repo_owner,
rst_template,
section_names,
@@ -97,7 +96,6 @@ class NbProcessor:
"owner": repo_owner,
"repo": repo_name,
"folder": repo_directory,
"branch": repo_branch,
}
def fetch_binder_list(self, file_format: str = 'txt') -> list:

View File

@@ -1,99 +0,0 @@
import xml.etree.ElementTree as ET

from sphinx_sitemap import setup as base_setup, get_locales, hreflang_formatter


def setup(app):
    app.add_config_value(
        'ov_sitemap_urlset',
        default=None,
        rebuild=''
    )

    app.add_config_value(
        'ov_sitemap_meta',
        default=None,
        rebuild=''
    )

    setup = base_setup(app)
    # Replace sphinx-sitemap's own build-finished handler with the custom one
    for listener in app.events.listeners['build-finished']:
        if listener.handler.__name__ == 'create_sitemap':
            app.disconnect(listener.id)

    app.connect('build-finished', create_sitemap)
    return setup


def create_sitemap(app, exception):
    """Generates the sitemap.xml from the collected HTML page links"""
    urlset = app.builder.config.ov_sitemap_urlset
    meta = app.builder.config.ov_sitemap_meta

    site_url = app.builder.config.site_url or app.builder.config.html_baseurl
    if not site_url:
        print("sphinx-sitemap error: neither html_baseurl nor site_url "
              "are set in conf.py. Sitemap not built.")
        return
    site_url = site_url.rstrip('/') + '/'

    if not app.sitemap_links:
        print("sphinx-sitemap warning: No pages generated for %s" %
              app.config.sitemap_filename)
        return

    ET.register_namespace('xhtml', "http://www.w3.org/1999/xhtml")

    root = ET.Element("urlset")

    if not urlset:
        root.set("xmlns", "http://www.sitemaps.org/schemas/sitemap/0.9")
    else:
        for item in urlset:
            root.set(*item)

    get_locales(app, exception)

    if app.builder.config.version:
        version = app.builder.config.version + '/'
    else:
        version = ""

    for link in app.sitemap_links:
        url = ET.SubElement(root, "url")
        scheme = app.config.sitemap_url_scheme
        if app.builder.config.language:
            lang = app.builder.config.language + '/'
        else:
            lang = ""

        ET.SubElement(url, "loc").text = site_url + scheme.format(
            lang=lang, version=version, link=link
        )

        if meta:
            for entry in meta:
                namespace, values = entry
                namespace_element = ET.SubElement(url, namespace)
                for tag_name, tag_value in values.items():
                    ET.SubElement(namespace_element, tag_name).text = tag_value

        if len(app.locales) > 0:
            for lang in app.locales:
                lang = lang + '/'
                linktag = ET.SubElement(
                    url,
                    "{http://www.w3.org/1999/xhtml}link"
                )
                linktag.set("rel", "alternate")
                linktag.set("hreflang", hreflang_formatter(lang.rstrip('/')))
                linktag.set("href", site_url + scheme.format(
                    lang=lang, version=version, link=link
                ))

    filename = app.outdir + "/" + app.config.sitemap_filename
    ET.ElementTree(root).write(filename,
                               xml_declaration=True,
                               encoding='utf-8',
                               method="xml")
    print("%s was generated for URL %s in %s" % (app.config.sitemap_filename,
                                                 site_url, filename))
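For context, an extension like the one above is wired up from a Sphinx `conf.py`. A hypothetical sketch — the URL and metadata values are illustrative, not the project's actual configuration:

```python
# conf.py -- hypothetical wiring for openvino_custom_sphinx_sitemap
extensions = [
    "openvino_custom_sphinx_sitemap",  # wraps sphinx-sitemap's setup itself
]

html_baseurl = "https://docs.example.com/"  # illustrative base URL

# Extra attributes set on the <urlset> root element
ov_sitemap_urlset = [
    ("xmlns", "http://www.sitemaps.org/schemas/sitemap/0.9"),
]

# Per-<url> metadata: (namespace tag, {child tag: text})
ov_sitemap_meta = [
    ("meta", {"lastmod": "2023-03-20"}),
]
```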

View File

@@ -1,13 +0,0 @@
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[project]
name = "openvino_custom_sphinx_sitemap"
version = "0.0.1"
description = "Extends sphinx-sitemap plugin with additional sitemap metadata"
dependencies = [
"sphinx >= 4.5.0",
"sphinx-sitemap >= 2.2.0"
]
requires-python = ">=3.7"

View File

@@ -1,9 +0,0 @@
from setuptools import setup

setup(
    name="openvino_custom_sphinx_sitemap",
    version="0.0.1",
    install_requires=['sphinx>=4.5.0', 'sphinx-sitemap>=2.2.0'],
    packages=['openvino_custom_sphinx_sitemap'],
)

View File

@@ -1,2 +0,0 @@
<footer class="footer mt-5 mt-md-0">
</footer>

View File

@@ -0,0 +1,7 @@
<p>
<a href="https://www.intel.com/content/www/us/en/homepage.html" alt="Intel" style="color: #000;">©2023 Intel Corporation</a>
<a href="https://www.intel.com/content/www/us/en/legal/terms-of-use.html" alt="terms of use">Terms of Use</a>
<a href="https://www.intel.com/content/www/us/en/privacy/intel-cookie-notice.html" data-cookie-notice="true" alt="cookies policy">Cookies</a>
<a href="https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html" alt="Privacy">Privacy</a>
</p>
<p style="font-size: 0.8em">Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.</p>

View File

@@ -1,6 +1,16 @@
<div>
<atomic-search-interface id="sa-search">
<atomic-search-box redirection-url="search.html">
</atomic-search-box>
</atomic-search-interface>
</div>
<form style="padding: 0 0.5rem;" class="d-flex align-items-center" action="{{ pathto('search') }}" method="get">
<div style="width:100%;" class="textfield textfield-q textfield-size-m left-slot">
<div class="slot left-slot-container">
<span class="icon fas fa-search"></span>
</div>
<input
class="input input-quiet input-size-m"
type="search"
name="q"
id="search-input"
placeholder="{{ _(theme_search_bar_text) }}"
aria-label="{{ theme_search_bar_text }}"
autocomplete="off"
>
</div>
</form>
