Compare commits: 2023.2.0...releases/2 (47 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | d5bf02b634 |  |
|  | 67591efda5 |  |
|  | 4a7063285b |  |
|  | 899b47dccd |  |
|  | 61a072baaf |  |
|  | a97aa14ae5 |  |
|  | 1223b2190d |  |
|  | 80f94dd5d3 |  |
|  | c3190f1d43 |  |
|  | 293293bbfe |  |
|  | e1da28e4a4 |  |
|  | 2a4a5f6ab5 |  |
|  | dc4265b2d3 |  |
|  | a836462f77 |  |
|  | 613ef81471 |  |
|  | 4c31a9bef7 |  |
|  | 9d94661392 |  |
|  | 338aa3d85b |  |
|  | 4e992aef3a |  |
|  | 7e18bd074a |  |
|  | d8e7ea51b5 |  |
|  | a60c921189 |  |
|  | fb0aee2d1c |  |
|  | 1a6a4443a1 |  |
|  | 40fbe51621 |  |
|  | 560798f00c |  |
|  | b868e6e271 |  |
|  | 575f51e3cf |  |
|  | 69b5836648 |  |
|  | 2cabc6b6bd |  |
|  | 553d15622e |  |
|  | db4724aeb5 |  |
|  | cadce8fbd4 |  |
|  | 4e702d96e1 |  |
|  | c84cdeb264 |  |
|  | 5d4de9117c |  |
|  | 62460e1e2b |  |
|  | 0140796841 |  |
|  | 49c8526e20 |  |
|  | 17ad6116c8 |  |
|  | cda12b6de5 |  |
|  | fc678416a6 |  |
|  | 5ee4090e10 |  |
|  | 3a0abdfaa8 |  |
|  | 210608ca3a |  |
|  | 98072bbbae |  |
|  | 29d43d85ce |  |
.github/workflows/android_arm64.yml (2 changes)

```diff
@@ -31,7 +31,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-16-cores
+    runs-on: aks-linux-16-cores-32gb
     container:
       image: openvinogithubactions.azurecr.io/dockerhub/ubuntu:20.04
       volumes:
```
.github/workflows/fedora.yml (2 changes)

```diff
@@ -32,7 +32,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-16-cores
+    runs-on: aks-linux-16-cores-32gb
     container:
       image: fedora:33
       volumes:
```
.github/workflows/linux.yml (6 changes)

```diff
@@ -39,7 +39,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-16-cores
+    runs-on: aks-linux-16-cores-32gb
     container:
       image: openvinogithubactions.azurecr.io/dockerhub/ubuntu:20.04
       volumes:
@@ -514,7 +514,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-16-cores
+    runs-on: aks-linux-16-cores-32gb
     container:
       image: openvinogithubactions.azurecr.io/dockerhub/ubuntu:20.04
       volumes:
@@ -1377,7 +1377,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-16-cores
+    runs-on: aks-linux-16-cores-32gb
     container:
       image: openvinogithubactions.azurecr.io/dockerhub/nvidia/cuda:11.8.0-runtime-ubuntu20.04
       volumes:
```
(file name not shown)

```diff
@@ -35,7 +35,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-16-cores
+    runs-on: aks-linux-16-cores-32gb
     container:
       image: openvinogithubactions.azurecr.io/dockerhub/ubuntu:22.04
       volumes:
@@ -210,7 +210,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-16-cores
+    runs-on: aks-linux-16-cores-32gb
     container:
       image: openvinogithubactions.azurecr.io/dockerhub/ubuntu:22.04
       volumes:
@@ -306,7 +306,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-8-cores
+    runs-on: aks-linux-8-cores-16gb
     container:
       image: openvinogithubactions.azurecr.io/dockerhub/ubuntu:22.04
       env:
```
.github/workflows/linux_riscv.yml (2 changes)

```diff
@@ -35,7 +35,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-16-cores
+    runs-on: aks-linux-16-cores-32gb
    container:
       image: openvinogithubactions.azurecr.io/dockerhub/ubuntu:22.04
       volumes:
```
.github/workflows/webassembly.yml (2 changes)

```diff
@@ -31,7 +31,7 @@ jobs:
     defaults:
       run:
         shell: bash
-    runs-on: aks-linux-16-cores
+    runs-on: aks-linux-16-cores-32gb
     container:
       image: emscripten/emsdk
       volumes:
```
README.md (26 changes)

```diff
@@ -67,24 +67,24 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
 <tbody>
     <tr>
         <td rowspan=2>CPU</td>
-        <td> <a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
+        <td> <a href="https://docs.openvino.ai/2023.2/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
         <td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
         <td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE), Intel® Advanced Matrix Extensions (Intel® AMX)</td>
     </tr>
     <tr>
-        <td> <a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">ARM CPU</a></tb>
+        <td> <a href="https://docs.openvino.ai/2023.2/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">ARM CPU</a></tb>
         <td><b><i><a href="./src/plugins/intel_cpu">openvino_arm_cpu_plugin</a></i></b></td>
         <td>Raspberry Pi™ 4 Model B, Apple® Mac mini with Apple silicon
     </tr>
     <tr>
         <td>GPU</td>
-        <td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
+        <td><a href="https://docs.openvino.ai/2023.2/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
         <td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
         <td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
     </tr>
     <tr>
         <td>GNA</td>
-        <td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
+        <td><a href="https://docs.openvino.ai/2023.2/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
         <td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
         <td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
     </tr>
@@ -102,22 +102,22 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models
 </thead>
 <tbody>
     <tr>
-        <td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_AUTO.html">Auto</a></td>
+        <td><a href="https://docs.openvino.ai/2023.2/openvino_docs_OV_UG_supported_plugins_AUTO.html">Auto</a></td>
         <td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
         <td>Auto plugin enables selecting Intel device for inference automatically</td>
     </tr>
     <tr>
-        <td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
+        <td><a href="https://docs.openvino.ai/2023.2/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
         <td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
         <td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
     </tr>
     <tr>
-        <td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
+        <td><a href="https://docs.openvino.ai/2023.2/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
         <td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
         <td>Heterogeneous execution enables automatic inference splitting between several devices</td>
     </tr>
     <tr>
-        <td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
+        <td><a href="https://docs.openvino.ai/2023.2/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
         <td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
         <td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
     </tr>
@@ -164,9 +164,9 @@ The list of OpenVINO tutorials:
 ## System requirements
 
 The system requirements vary depending on platform and are available on dedicated pages:
-- [Linux](https://docs.openvino.ai/2023.1/openvino_docs_install_guides_installing_openvino_linux_header.html)
-- [Windows](https://docs.openvino.ai/2023.1/openvino_docs_install_guides_installing_openvino_windows_header.html)
-- [macOS](https://docs.openvino.ai/2023.1/openvino_docs_install_guides_installing_openvino_macos_header.html)
+- [Linux](https://docs.openvino.ai/2023.2/openvino_docs_install_guides_installing_openvino_linux_header.html)
+- [Windows](https://docs.openvino.ai/2023.2/openvino_docs_install_guides_installing_openvino_windows_header.html)
+- [macOS](https://docs.openvino.ai/2023.2/openvino_docs_install_guides_installing_openvino_macos_header.html)
 
 ## How to build
 
@@ -206,6 +206,6 @@ Report questions, issues and suggestions, using:
 \* Other names and brands may be claimed as the property of others.
 
 [Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
-[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
-[OpenVINO Model Converter (OVC)]:https://docs.openvino.ai/2023.1/openvino_docs_model_processing_introduction.html#convert-a-model-in-cli-ovc
+[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.2/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
+[OpenVINO Model Converter (OVC)]:https://docs.openvino.ai/2023.2/openvino_docs_model_processing_introduction.html#convert-a-model-in-cli-ovc
 [Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples
```
(file name not shown)

```diff
@@ -85,12 +85,13 @@ unset(protobuf_installed CACHE)
 #   FILEDESCRIPTION <description>  # used on Windows to describe DLL file
 #   [LINKABLE_FRONTEND]            # whether we can use FE API directly or via FEM only
 #   [SKIP_INSTALL]                 # private frontend, not for end users
+#   [PROTOBUF_REQUIRED]            # options to denote that protobuf is used
 #   [PROTOBUF_LITE]                # requires only libprotobuf-lite
 #   [SKIP_NCC_STYLE]               # use custom NCC rules
 #   [LINK_LIBRARIES <lib1 lib2 ...>])
 #
 macro(ov_add_frontend)
-    set(options LINKABLE_FRONTEND PROTOBUF_LITE SKIP_NCC_STYLE SKIP_INSTALL)
+    set(options LINKABLE_FRONTEND PROTOBUF_REQUIRED PROTOBUF_LITE SKIP_NCC_STYLE SKIP_INSTALL)
     set(oneValueArgs NAME FILEDESCRIPTION)
     set(multiValueArgs LINK_LIBRARIES PROTO_FILES)
     cmake_parse_arguments(OV_FRONTEND "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
@@ -171,7 +172,7 @@ macro(ov_add_frontend)
 
     # Create library
     add_library(${TARGET_NAME} ${LIBRARY_SRC} ${LIBRARY_HEADERS} ${LIBRARY_PUBLIC_HEADERS}
-        ${PROTO_SRCS} ${PROTO_HDRS} ${flatbuffers_schema_files} ${proto_files})
+                ${PROTO_SRCS} ${PROTO_HDRS} ${flatbuffers_schema_files} ${proto_files})
 
     if(OV_FRONTEND_LINKABLE_FRONTEND)
         # create beautiful alias
@@ -179,7 +180,7 @@ macro(ov_add_frontend)
     endif()
 
     # Shutdown protobuf when unloading the frontend dynamic library
-    if(proto_files AND BUILD_SHARED_LIBS)
+    if(OV_FRONTEND_PROTOBUF_REQUIRED AND BUILD_SHARED_LIBS)
         target_link_libraries(${TARGET_NAME} PRIVATE openvino::protobuf_shutdown)
     endif()
 
@@ -208,17 +209,17 @@ macro(ov_add_frontend)
     target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES} PUBLIC openvino::runtime)
     ov_add_library_version(${TARGET_NAME})
 
-    # WA for TF frontends which always require protobuf (not protobuf-lite)
-    # if TF FE is built in static mode, use protobuf for all other FEs
-    if(FORCE_FRONTENDS_USE_PROTOBUF)
-        set(OV_FRONTEND_PROTOBUF_LITE OFF)
-    endif()
-    # if protobuf::libprotobuf-lite is not available, use protobuf::libprotobuf
-    if(NOT TARGET protobuf::libprotobuf-lite)
-        set(OV_FRONTEND_PROTOBUF_LITE OFF)
-    endif()
+    if(OV_FRONTEND_PROTOBUF_REQUIRED)
+        # WA for TF frontends which always require protobuf (not protobuf-lite)
+        # if TF FE is built in static mode, use protobuf for all other FEs
+        if(FORCE_FRONTENDS_USE_PROTOBUF)
+            set(OV_FRONTEND_PROTOBUF_LITE OFF)
+        endif()
+        # if protobuf::libprotobuf-lite is not available, use protobuf::libprotobuf
+        if(NOT TARGET protobuf::libprotobuf-lite)
+            set(OV_FRONTEND_PROTOBUF_LITE OFF)
+        endif()
 
-    if(proto_files)
         if(OV_FRONTEND_PROTOBUF_LITE)
             set(protobuf_target_name libprotobuf-lite)
             set(protobuf_install_name "protobuf_lite_installed")
```
(file name not shown)

```diff
@@ -2,40 +2,37 @@
 
 @sphinxdirective
 
-.. toctree::
-   :maxdepth: 1
-   :hidden:
-
-   Debugging Auto-Device Plugin <openvino_docs_OV_UG_supported_plugins_AUTO_debugging>
-
 .. meta::
    :description: The Automatic Device Selection mode in OpenVINO™ Runtime
                  detects available devices and selects the optimal processing
                  unit for inference automatically.
 
 
+This article introduces how Automatic Device Selection works and how to use it for inference.
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   Debugging Auto-Device Plugin <openvino_docs_OV_UG_supported_plugins_AUTO_debugging>
+
+
 .. _how-auto-works:
 
-The Automatic Device Selection mode, or AUTO for short, uses a "virtual" or a "proxy" device,
-which does not bind to a specific type of hardware, but rather selects the processing unit
-for inference automatically. It detects available devices, picks the one best-suited for the
-task, and configures its optimization settings. This way, you can write the application once
-and deploy it anywhere.
-
+How AUTO Works
+##############
+
+The Automatic Device Selection mode, or AUTO for short, uses a "virtual" or a "proxy" device,
+which does not bind to a specific type of hardware, but rather selects the processing unit for inference automatically.
+It detects available devices, picks the one best-suited for the task, and configures its optimization settings.
+This way, you can write the application once and deploy it anywhere.
 
-The selection also depends on your performance requirements, defined by the “hints” configuration API, as well as device priority list limitations, if you choose to exclude some hardware from the process.
+The selection also depends on your performance requirements, defined by the “hints”
+configuration API, as well as device priority list limitations, if you choose to exclude
+some hardware from the process.
 
 The logic behind the choice is as follows:
 
 1. Check what supported devices are available.
 2. Check precisions of the input model (for detailed information on precisions read more on the ``ov::device::capabilities``).
 3. Select the highest-priority device capable of supporting the given model, as listed in the table below.
-4. If model’s precision is FP32 but there is no device capable of supporting it, offload the model to a device supporting FP16.
+4. If model's precision is FP32 but there is no device capable of supporting it, offload the model to a device supporting FP16.
 
 
 +----------+-----------------------------------------------------+------------------------------------+
```
```diff
@@ -51,7 +48,18 @@ The logic behind the choice is as follows:
 | 3        | Intel® CPU                                          | FP32, FP16, INT8, BIN              |
 |          | (e.g. Intel® Core™ i7-1165G7)                       |                                    |
 +----------+-----------------------------------------------------+------------------------------------+
+| 4        | Intel® NPU                                          |                                    |
+|          | (e.g. Intel® Core™ Ultra)                           |                                    |
++----------+-----------------------------------------------------+------------------------------------+
+
+.. note::
+
+   Note that NPU is currently excluded from the default priority list. To use it for inference, you
+   need to specify it explicitly
+
 
-How AUTO Works
-##############
-
 To put it simply, when loading the model to the first device on the list fails, AUTO will try to load it to the next device in line, until one of them succeeds.
 What is important, **AUTO starts inference with the CPU of the system by default**, as it provides very low latency and can start inference with no additional delays.
```
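As an illustration, the four-step selection logic and priority list described in this hunk can be sketched in plain Python. This is a simplified model for readers of the diff, not the actual AUTO implementation; the device names and capability sets below are assumptions chosen to mirror the table above.

```python
# Simplified sketch of AUTO's device selection (steps 1-4 above).
# Device names, priority order, and capability sets are illustrative;
# a real system queries them from the OpenVINO runtime.

# Highest priority first. NPU is excluded from the default list and
# must be requested explicitly, per the note in the docs.
DEFAULT_PRIORITY = ["dGPU", "iGPU", "CPU"]

CAPABILITIES = {
    "dGPU": {"FP32", "FP16", "INT8", "BIN"},
    "iGPU": {"FP32", "FP16", "INT8", "BIN"},
    "CPU":  {"FP32", "FP16", "INT8", "BIN"},
    "NPU":  {"FP16"},
}

def select_device(model_precision, available, priority=DEFAULT_PRIORITY):
    # Step 1: keep only devices actually present on the system.
    candidates = [d for d in priority if d in available]
    # Steps 2-3: highest-priority device supporting the model precision.
    for d in candidates:
        if model_precision in CAPABILITIES[d]:
            return d
    # Step 4: an FP32 model falls back to a device supporting FP16.
    if model_precision == "FP32":
        for d in candidates:
            if "FP16" in CAPABILITIES[d]:
                return d
    raise RuntimeError("no suitable device")

print(select_device("FP32", available={"CPU", "iGPU"}))  # iGPU
```

With an explicit priority list of `["NPU"]`, an FP32 model is offloaded to the NPU via the FP16 fallback in step 4.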
```diff
@@ -59,12 +67,19 @@ While the CPU is performing inference, AUTO continues to load the model to the d
 This way, the devices which are much slower in compiling models, GPU being the best example, do not impact inference at its initial stages.
 For example, if you use a CPU and a GPU, the first-inference latency of AUTO will be better than that of using GPU alone.
 
-Note that if you choose to exclude CPU from the priority list or disable the initial CPU acceleration feature via ``ov::intel_auto::enable_startup_fallback``, it will be unable to support the initial model compilation stage. The models with dynamic input/output or stateful :doc:`stateful<openvino_docs_OV_UG_model_state_intro>` operations will be loaded to the CPU if it is in the candidate list. Otherwise, these models will follow the normal flow and be loaded to the device based on priority.
+Note that if you choose to exclude CPU from the priority list or disable the initial
+CPU acceleration feature via ``ov::intel_auto::enable_startup_fallback``, it will be
+unable to support the initial model compilation stage. The models with dynamic
+input/output or stateful :doc:`stateful<openvino_docs_OV_UG_model_state_intro>`
+operations will be loaded to the CPU if it is in the candidate list. Otherwise,
+these models will follow the normal flow and be loaded to the device based on priority.
 
 .. image:: _static/images/autoplugin_accelerate.svg
 
 
-This mechanism can be easily observed in the :ref:`Using AUTO with Benchmark app sample <using-auto-with-openvino-samples-and-benchmark-app>` section, showing how the first-inference latency (the time it takes to compile the model and perform the first inference) is reduced when using AUTO. For example:
+This mechanism can be easily observed in the :ref:`Using AUTO with Benchmark app sample <using-auto-with-openvino-samples-and-benchmark-app>`
+section, showing how the first-inference latency (the time it takes to compile the
+model and perform the first inference) is reduced when using AUTO. For example:
 
 
 .. code-block:: sh
```
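The fallback behaviour documented in these hunks, trying each device in priority order until one successfully compiles the model, can be sketched in a few lines of plain Python. This is only a sketch of the idea; `compile_with_fallback` and `fake_compile` are hypothetical names, not part of the OpenVINO API.

```python
# Sketch of AUTO's loading fallback: try each device in order and use
# the first one that compiles the model successfully. The device names
# and the simulated GPU failure are illustrative assumptions.

def compile_with_fallback(compile_fn, devices):
    """Return (device, compiled_model) for the first device that succeeds."""
    errors = {}
    for device in devices:
        try:
            return device, compile_fn(device)
        except RuntimeError as exc:
            errors[device] = exc  # remember why this device was skipped
    raise RuntimeError(f"all devices failed: {errors}")

# Example: GPU compilation fails, so the model falls through to CPU.
def fake_compile(device):
    if device == "GPU":
        raise RuntimeError("driver not available")
    return f"model-on-{device}"

device, model = compile_with_fallback(fake_compile, ["GPU", "CPU"])
print(device, model)  # CPU model-on-CPU
```

The docs add one refinement on top of this: AUTO additionally starts inference on the CPU while the slower device is still compiling, which is what reduces first-inference latency.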
```diff
@@ -86,8 +101,9 @@ This mechanism can be easily observed in the :ref:`Using AUTO with Benchmark app
 Using AUTO
 ##########
 
-Following the OpenVINO™ naming convention, the Automatic Device Selection mode is assigned the label of "AUTO". It may be defined with no additional parameters, resulting in defaults being used, or configured further with the following setup options:
-
+Following the OpenVINO™ naming convention, the Automatic Device Selection mode is assigned the label of "AUTO".
+It may be defined with no additional parameters, resulting in defaults being used, or configured further with
+the following setup options:
 
 +----------------------------------------------+--------------------------------------------------------------------+
 | Property(C++ version)                        | Values and Description                                             |
```
```diff
@@ -165,6 +181,17 @@ Following the OpenVINO™ naming convention, the Automatic Device Selection mode
 |                                              |                                                                    |
 |                                              | The default value is ``true``.                                     |
 +----------------------------------------------+--------------------------------------------------------------------+
+| ``ov::intel_auto::schedule_policy``          | **Values**:                                                        |
+|                                              |                                                                    |
+|                                              | ``ROUND_ROBIN``                                                    |
+|                                              |                                                                    |
+|                                              | ``DEVICE_PRIORITY``                                                |
+|                                              |                                                                    |
+|                                              | Specify the schedule policy of infer request assigned to hardware  |
+|                                              | plugin for AUTO cumulative mode (MULTI).                           |
+|                                              |                                                                    |
+|                                              | The default value is ``DEVICE_PRIORITY``.                          |
++----------------------------------------------+--------------------------------------------------------------------+
 
 Inference with AUTO is configured similarly to when device plugins are used:
 you compile the model on the plugin with configuration and execute inference.
```
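The new ``ov::intel_auto::schedule_policy`` property added in this hunk distinguishes two ways of assigning infer requests to devices in cumulative (MULTI) mode. A minimal sketch of the difference, with illustrative device and request names and a deliberately simplified DEVICE_PRIORITY rule (the real scheduler also accounts for device load):

```python
from itertools import cycle

# Sketch of the two schedule policies for AUTO cumulative mode (MULTI):
# ROUND_ROBIN spreads infer requests across devices in turn, while
# DEVICE_PRIORITY (the default) favours the highest-priority device.

def assign_requests(requests, devices, policy="DEVICE_PRIORITY"):
    if policy == "ROUND_ROBIN":
        rr = cycle(devices)                       # GPU, CPU, GPU, CPU, ...
        return [(req, next(rr)) for req in requests]
    if policy == "DEVICE_PRIORITY":
        # Simplification: always pick the first (highest-priority) device.
        return [(req, devices[0]) for req in requests]
    raise ValueError(f"unknown policy: {policy}")

reqs = ["r0", "r1", "r2"]
print(assign_requests(reqs, ["GPU", "CPU"], policy="ROUND_ROBIN"))
# [('r0', 'GPU'), ('r1', 'CPU'), ('r2', 'GPU')]
```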
```diff
@@ -192,7 +219,6 @@ The code samples on this page assume following import(Python)/using (C++) are in
 Device Candidates and Priority
 ++++++++++++++++++++++++++++++
 
-
 The device candidate list enables you to customize the priority and limit the choice of devices available to AUTO.
 
 * If <device candidate list> is not specified, AUTO assumes all the devices present in the system can be used.
```
3 binary files changed (not shown)

BIN docs/_static/benchmarks_files/OV-2023.2-Performance-Data.xlsx (new file, binary file not shown)
BIN docs/_static/benchmarks_files/OV-2023.2-platform_list.pdf (new file, binary file not shown)
BIN docs/_static/benchmarks_files/OV-2023.2-system-info-detailed.xlsx (new file, binary file not shown)
912
docs/_static/benchmarks_files/OV-benchmark-data.csv
vendored
912
docs/_static/benchmarks_files/OV-benchmark-data.csv
vendored
@@ -1,481 +1,431 @@
|
||||
Network model,Release,IE-Type,Platform name,Throughput-INT8,ThroughputFP16,ThroughputFP32,Value,Efficiency,Price,TDP,Sockets,Price/socket,TDP/socket,Latency,UOM_T,UOM_V,UOM_E,UOM_L
|
||||
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,11.35,,4.27,0.106093669,0.756801503,107,15,1,107,15,88.11,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,21.38,,15.11,0.182727309,0.328909156,117,65,1,117,65,48.47,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,32.23,,20.26,0.15061144,0.495859202,214,65,1,214,65,36.18,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,113.18,,45.08,0.344024364,0.905472125,329,125,1,329,125,17.43,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,34.34,,24.04,0.178867029,0.528345686,192,65,1,192,65,30.88,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,51.27,,18.44,0.120345681,1.830973571,426,28,1,426,28,23.31,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,37.98,,13.59,0.077518083,1.356566444,490,28,1,490,28,29.22,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,27.60,,17.56,0.091086845,0.788551831,303,35,1,303,35,42.83,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,34.12,,20.30,0.069915909,0.974827538,488,35,1,488,35,37.09,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,53.08,,19.48,0.097575059,1.516595204,544,35,1,544,35,23.07,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,163.20,,66.23,0.272459575,1.305626285,599,125,1,599,125,13.68,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,atom,Intel® Processor N-200,1.65,,0.83,0.008529061,0.274351466,193,6,1,193,6,641.46,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,51.01,,29.43,0.08587761,0.408090401,594,125,1,594,125,28.93,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,20.86,,14.80,0.068171185,0.293808206,306,71,1,306,71,49.41,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,217.83,,80.66,0.069284052,1.037281242,3144,210,2,1572,105,13.64,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,572.10,,224.73,0.033744047,1.395357512,16954,410,2,8477,205,7.81,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,872.62,,338.47,0.04661922,1.61596031,18718,540,2,9359,270,43.32,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,3255.60,,505.88,0.095752851,4.650852741,34000,700,2,17000,350,4.08,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,204.84,,76.40,0.101307066,1.024214437,2022,200,2,1011,100,14.24,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,426.78,,167.54,0.187678462,1.422602743,2274,300,2,1137,150,8.09,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,accel,Intel® Flex-170,842.00,683.21,,,,,,1,,,18.63,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,accel,Intel® Flex-140,174.28,123.71,,,,,,1,,,91.68,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,45.83,31.96,,0.428279604,3.055061174,107,15,1,107,15,87.12,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,73.09,55.44,,0.149162436,2.610342624,490,28,1,490,28,54.56,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,3.37,2.36,,0.017448724,0.561267278,193,6,1,193,6,1185.78,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,84.42,60.51,,0.198161663,3.014888156,426,28,1,426,28,46.90,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,46.67,,23.26,0.436174206,3.111376,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,73.20,,31.70,0.149385536,2.614246884,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,4.43,,2.03,0.022934748,0.7377344,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-base-cased,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,83.69,,37.23,0.196445915,2.988784286,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
|
||||
end_rec,,,,,,,,,,,,,,,,,,
|
||||
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,1.18,,0.38,0.011017319,0.078590206,107,15,1,107,15,863.34,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,2.09,,1.33,0.017880344,0.032184618,117,65,1,117,65,492.10,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,2.97,,1.87,0.013882493,0.045705439,214,65,1,214,65,347.78,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,9.93,,3.74,0.030176091,0.079423471,329,125,1,329,125,155.72,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,3.36,,2.13,0.017518449,0.051746803,192,65,1,192,65,302.24,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,5.09,,1.63,0.011951214,0.181829179,426,28,1,426,28,219.77,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,3.79,,1.22,0.007726669,0.135216715,490,28,1,490,28,266.14,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,2.71,,1.61,0.008936965,0.077368581,303,35,1,303,35,412.44,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,3.35,,1.83,0.006861724,0.095672042,488,35,1,488,35,327.59,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,5.06,,1.75,0.009296462,0.144493584,544,35,1,544,35,210.96,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,15.19,,5.91,0.025358635,0.12151858,599,125,1,599,125,113.61,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,atom,Intel® Processor N-200,0.16,,0.08,0.000828224,0.0266412,193,6,1,193,6,6367.32,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,4.66,,2.88,0.007842661,0.037268324,594,125,1,594,125,224.55,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,2.11,,1.34,0.006898006,0.029729433,306,71,1,306,71,486.76,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,21.27,,6.90,0.006764356,0.101272067,3144,210,2,1572,105,103.38,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,50.91,,17.71,0.003002727,0.124166415,16954,410,2,8477,205,63.45,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,67.03,,27.69,0.003581195,0.124134843,18718,540,2,9359,270,253.77,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,251.10,,47.69,0.007385431,0.358720946,34000,700,2,17000,350,36.69,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,20.81,,6.64,0.010291168,0.104043704,2022,200,2,1011,100,106.83,FPS,FPS/$,FPS/TDP,msec.
|
||||
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,38.76,,14.75,0.017046987,0.129216158,2274,300,2,1137,150,237.83,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,accel,Intel® Flex-170,144.12,101.13,,,,,,1,,,110.90,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,accel,Intel® Flex-140,29.97,20.57,,,,,,1,,,534.35,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,4.66,3.35,,0.043581125,0.31087869,107,15,1,107,15,820.82,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,5.11,5.77,,0.010428379,0.182496635,490,28,1,490,28,745.40,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,0.34,0.23,,0.00174067,0.055991537,193,6,1,193,6,11899.92,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,9.13,6.65,,0.021425568,0.325974707,426,28,1,426,28,449.68,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,5.20,,2.33,0.048610495,0.346754867,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,3.87,,2.23,0.007899318,0.138238071,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.44,,0.17,0.002288653,0.073618327,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,8.63,,3.52,0.02025661,0.308189857,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,11.86,,4.61,0.11084032,0.79066095,107,15,1,107,15,85.88,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,22.85,,14.48,0.195256904,0.351462427,117,65,1,117,65,43.93,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,34.47,,16.42,0.161068212,0.530286114,214,65,1,214,65,33.07,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,94.13,,42.32,0.286105004,0.753028371,329,125,1,329,125,16.23,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,37.18,,21.16,0.193650962,0.572015148,192,65,1,192,65,27.36,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,52.88,,16.51,0.124135164,1.888627857,426,28,1,426,28,19.87,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,30.78,,9.65,0.062807791,1.099136335,490,28,1,490,28,31.13,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,32.02,,18.34,0.105688204,0.914957883,303,35,1,303,35,37.47,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,40.36,,18.54,0.082712269,1.153245355,488,35,1,488,35,27.15,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,57.69,,22.51,0.106053446,1.648373567,544,35,1,544,35,21.75,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,148.87,,57.81,0.248526818,1.190940511,599,125,1,599,125,12.44,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,atom,Intel® Processor N-200,1.72,,1.01,0.008897382,0.286199128,193,6,1,193,6,595.26,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,51.51,,19.39,0.086713894,0.412064422,594,125,1,594,125,21.10,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,22.69,,15.40,0.074145796,0.319557937,306,71,1,306,71,43.78,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,190.40,,77.08,0.060561308,0.906689304,3144,210,2,1572,105,11.76,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,416.77,,155.13,0.024582207,1.016504228,16954,410,2,8477,205,5.66,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,584.92,,227.30,0.031248866,1.083178298,18718,540,2,9359,270,3.70,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,999.22,,380.82,0.029388758,1.427453957,34000,700,2,17000,350,3.55,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,184.31,,74.61,0.091151132,0.921537946,2022,200,2,1011,100,12.07,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,370.93,,139.96,0.163117657,1.236431838,2274,300,2,1137,150,6.87,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,accel,Intel® Flex-170,803.58,560.76,,,,,,1,,,19.59,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,accel,Intel® Flex-140,148.01,97.06,,,,,,1,,,108.12,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,60.17,27.60,,0.562349486,4.011426332,107,15,1,107,15,66.33,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,76.40,36.69,,0.155928067,2.728741167,490,28,1,490,28,51.90,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,3.65,1.92,,0.018917206,0.608503456,193,6,1,193,6,1094.30,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,105.14,48.76,,0.24680943,3.755029182,426,28,1,426,28,37.64,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,61.90,,17.22,0.578511308,4.126714,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,49.44,,9.02,0.100894422,1.765652381,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,4.81,,2.03,0.024920889,0.801621937,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
deeplabv3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,89.16,,24.78,0.209304953,3.184425357,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,272.36,,133.20,2.545454771,18.15757736,107,15,1,107,15,3.60,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,542.97,,451.26,4.640733479,8.353320262,117,65,1,117,65,1.99,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,899.54,,499.68,4.203451742,13.8390565,214,65,1,214,65,1.58,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,2804.17,,1285.76,8.523326054,22.43339417,329,125,1,329,125,0.88,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,868.27,,679.32,4.522249945,13.35803061,192,65,1,192,65,1.35,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,1372.54,,531.48,3.221931925,49.01939286,426,28,1,426,28,0.96,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,990.18,,318.31,2.020773404,35.36353457,490,28,1,490,28,1.18,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,741.91,,511.96,2.448553162,21.19747452,303,35,1,303,35,1.84,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,960.08,,614.86,1.967381666,27.43092151,488,35,1,488,35,1.49,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,1296.04,,651.56,2.382432184,37.02980308,544,35,1,544,35,1.31,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,4078.62,,2016.89,6.809056345,32.628998,599,125,1,599,125,0.73,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,atom,Intel® Processor N-200,39.61,,29.85,0.205210747,6.600945698,193,6,1,193,6,26.82,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,1458.83,,554.06,2.455950239,11.67067553,594,125,1,594,125,1.29,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,527.91,,453.23,1.725211318,7.435417792,306,71,1,306,71,2.04,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,5479.84,,1921.88,1.742952604,26.09449042,3144,210,2,1572,105,1.43,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,14421.48,,4410.27,0.850624065,35.17434244,16954,410,2,8477,205,0.92,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,22622.04,,6912.71,1.208571487,41.89266868,18718,540,2,9359,270,0.56,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,38771.76,,10993.76,1.140345798,55.3882245,34000,700,2,17000,350,0.66,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,5229.87,,1856.59,2.586481754,26.14933053,2022,200,2,1011,100,1.44,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,12359.26,,3615.96,5.435030176,41.19752874,2274,300,2,1137,150,0.55,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,accel,Intel® Flex-170,7195.50,6410.60,,,,,,1,,,1.97,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,accel,Intel® Flex-140,1219.84,1149.89,,,,,,1,,,13.07,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,692.73,509.86,,6.474119544,46.18205274,107,15,1,107,15,5.58,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,903.44,556.69,,1.84375582,32.26572686,490,28,1,490,28,4.30,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,57.93,40.13,,0.300129792,9.65417499,193,6,1,193,6,68.05,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,1008.77,740.13,,2.36801077,36.02759243,426,28,1,426,28,3.82,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,514.94,,313.82,4.812519626,34.32930667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,1114.75,,268.28,2.275002757,39.81254825,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,73.89,,44.62,0.38286464,12.31547925,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
mobilenet-v2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,1497.43,,605.84,3.515084507,53.4795,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,49.96,,14.45,0.466891776,3.330494671,107,15,1,107,15,19.80,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,97.49,,51.23,0.833227829,1.499810092,117,65,1,117,65,10.67,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,145.23,,74.40,0.678644387,2.234306135,214,65,1,214,65,8.18,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,515.94,,140.29,1.568210731,4.127530643,329,125,1,329,125,3.87,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,158.18,,82.33,0.823856129,2.433544259,192,65,1,192,65,7.04,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,229.98,,61.97,0.539855399,8.213514286,426,28,1,426,28,5.09,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,173.03,,44.88,0.353117963,6.179564359,490,28,1,490,28,6.59,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,122.85,,61.90,0.405448145,3.510022512,303,35,1,303,35,9.95,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,158.96,,76.32,0.325734087,4.541663835,488,35,1,488,35,7.54,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,270.04,,72.19,0.496399722,7.715469968,544,35,1,544,35,4.89,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,749.50,,228.07,1.251250835,5.995994004,599,125,1,599,125,2.93,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,atom,Intel® Processor N-200,6.55,,3.16,0.03395277,1.092147419,193,6,1,193,6,159.41,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,242.15,,98.01,0.407662044,1.937210033,594,125,1,594,125,5.41,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,93.12,,50.42,0.30430895,1.311528714,306,71,1,306,71,11.07,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,967.51,,269.32,0.307731475,4.607179803,3144,210,2,1572,105,2.90,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,2904.14,,747.72,0.171295011,7.083257598,16954,410,2,8477,205,1.53,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,4995.38,,1161.61,0.266875909,9.250709751,18718,540,2,9359,270,1.02,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,20106.68,,1683.03,0.591372933,28.72382815,34000,700,2,17000,350,1.01,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,930.86,,255.73,0.46036761,4.654316532,2022,200,2,1011,100,3.01,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,2277.89,,566.74,1.001712531,7.592980986,2274,300,2,1137,150,1.47,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,accel,Intel® Flex-170,3587.03,2207.96,,,,,,1,,,4.17,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,accel,Intel® Flex-140,681.39,441.41,,,,,,1,,,23.43,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,211.61,117.01,,1.977639185,14.10715952,107,15,1,107,15,18.79,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,289.48,170.46,,0.590782847,10.33869983,490,28,1,490,28,13.64,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,14.62,7.80,,0.075769949,2.437266688,193,6,1,193,6,272.49,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,352.09,211.39,,0.826510493,12.57476679,426,28,1,426,28,11.09,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,201.96,,71.44,1.887469159,13.46394667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,307.08,,88.49,0.626686058,10.96700601,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,18.58,,6.49,0.09624999,3.096041335,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
resnet-50,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,358.14,,114.23,0.840715023,12.79087857,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,107.63,,36.80,1.005906996,7.175469906,107,15,1,107,15,9.12,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,212.15,,122.46,1.813284395,3.263911911,117,65,1,117,65,4.93,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,327.95,,171.39,1.532485774,5.045414702,214,65,1,214,65,3.61,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,999.78,,361.81,3.038835794,7.99821581,329,125,1,329,125,1.90,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,343.23,,200.57,1.787633018,5.280392915,192,65,1,192,65,3.11,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,518.48,,150.36,1.217089202,18.51714286,426,28,1,426,28,2.24,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,387.68,,101.48,0.791191606,13.8458531,490,28,1,490,28,2.80,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,275.38,,157.24,0.908858835,7.868120772,303,35,1,303,35,4.33,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,367.82,,194.43,0.753734827,10.50921702,488,35,1,488,35,3.35,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,543.47,,186.05,0.999034589,15.5278519,544,35,1,544,35,2.61,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,1525.02,,586.71,2.545949853,12.20019169,599,125,1,599,125,1.62,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,atom,Intel® Processor N-200,14.49,,7.97,0.075060228,2.414437321,193,6,1,193,6,71.99,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,577.55,,223.76,0.972304716,4.620392012,594,125,1,594,125,2.38,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,203.21,,126.30,0.664084594,2.862111065,306,71,1,306,71,5.09,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,2048.96,,639.54,0.651703831,9.756937353,3144,210,2,1572,105,1.57,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,5725.24,,1655.76,0.337692546,13.96399861,16954,410,2,8477,205,1.11,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,10274.06,,2354.69,0.548886883,19.0260457,18718,540,2,9359,270,0.67,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,22569.91,,3519.34,0.663820955,32.24273208,34000,700,2,17000,350,0.82,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,1946.94,,612.87,0.962878848,9.73470515,2022,200,2,1011,100,1.63,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,4808.85,,1247.67,2.114709828,16.02950049,2274,300,2,1137,150,0.81,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,accel,Intel® Flex-170,4012.21,3280.14,,,,,,1,,,3.65,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,accel,Intel® Flex-140,837.59,673.84,,,,,,1,,,19.02,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,408.97,220.07,,3.822128486,27.26451654,107,15,1,107,15,9.63,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,522.64,285.29,,1.066608011,18.6656402,490,28,1,490,28,7.49,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,28.92,15.36,,0.149828275,4.819476185,193,6,1,193,6,136.60,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,637.62,384.26,,1.496766725,22.77223659,426,28,1,426,28,6.11,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,321.29,,138.22,3.002740187,21.41954667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,531.28,,141.48,1.084245141,18.97428996,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,35.69,,14.88,0.184945841,5.949091211,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
ssd_mobilenet_v1_coco,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,690.18,,239.85,1.620143192,24.64932143,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,0.89,,0.23,0.00834996,0.059563049,107,15,1,107,15,1119.27,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,1.68,,0.97,0.014338412,0.025809141,117,65,1,117,65,596.89,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,2.42,,1.40,0.011301245,0.037207176,214,65,1,214,65,459.48,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,8.23,,2.40,0.025006186,0.065816281,329,125,1,329,125,163.47,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,2.78,,1.56,0.014501179,0.042834252,192,65,1,192,65,362.12,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,3.93,,1.00,0.009217819,0.140242536,426,28,1,426,28,278.25,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,2.96,,0.76,0.006043555,0.105762215,490,28,1,490,28,336.81,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,2.02,,1.13,0.006664408,0.057694728,303,35,1,303,35,563.91,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,2.68,,1.49,0.00548551,0.076483685,488,35,1,488,35,405.46,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,4.40,,1.32,0.008086729,0.125690868,544,35,1,544,35,235.75,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,12.52,,4.02,0.020905788,0.100180536,599,125,1,599,125,125.64,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,atom,Intel® Processor N-200,0.11,,0.05,0.000581374,0.01870087,193,6,1,193,6,8951.48,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,4.33,,2.45,0.007285664,0.034621476,594,125,1,594,125,239.93,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,1.60,,0.92,0.005224348,0.022516206,306,71,1,306,71,625.20,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,17.63,,4.57,0.005609079,0.083975927,3144,210,2,1572,105,115.76,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,57.85,,14.82,0.003412426,0.141107992,16954,410,2,8477,205,36.60,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,79.09,,20.81,0.004225558,0.146470375,18718,540,2,9359,270,106.87,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,445.98,,31.40,0.013117158,0.637119123,34000,700,2,17000,350,8.59,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,16.77,,4.34,0.008295831,0.083870853,2022,200,2,1011,100,121.62,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,42.59,,10.54,0.018728616,0.141962906,2274,300,2,1137,150,59.65,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,accel,Intel® Flex-170,167.49,103.03,,,,,,1,,,95.09,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,accel,Intel® Flex-140,29.99,17.56,,,,,,1,,,529.50,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,5.05,2.63,,0.047178932,0.336543045,107,15,1,107,15,773.01,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,8.98,4.71,,0.018327459,0.320730535,490,28,1,490,28,445.01,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,0.29,0.16,,0.001497632,0.048173819,193,6,1,193,6,13818.30,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,9.70,5.44,,0.022760985,0.34629213,426,28,1,426,28,422.10,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,0.90,,0.23,0.00837685,0.059754867,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,2.91,,0.75,0.005937663,0.103909111,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.11,,0.05,0.000582108,0.01872448,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
ssd-resnet34-1200,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,3.92,,1.00,0.00919669,0.139921071,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,5.46,,1.54,0.051044248,0.364115633,107,15,1,107,15,183.90,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,10.65,,5.82,0.09106238,0.163912284,117,65,1,117,65,94.90,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,15.38,,8.31,0.071851432,0.236557023,214,65,1,214,65,74.19,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,51.71,,15.62,0.157162207,0.41365093,329,125,1,329,125,29.85,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,17.56,,9.38,0.091438134,0.270094181,192,65,1,192,65,57.84,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,24.16,,6.62,0.056714953,0.8628775,426,28,1,426,28,45.99,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,18.06,,4.86,0.03686323,0.645106519,490,28,1,490,28,57.19,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,12.66,,6.85,0.041784766,0.361736687,303,35,1,303,35,86.80,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,16.86,,8.71,0.034540452,0.48159259,488,35,1,488,35,65.99,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,27.01,,8.10,0.049658249,0.771831063,544,35,1,544,35,42.00,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,78.04,,25.56,0.1302838,0.62431997,599,125,1,599,125,23.30,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,atom,Intel® Processor N-200,0.70,,0.35,0.003642344,0.117162061,193,6,1,193,6,1468.27,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,27.34,,14.09,0.046032451,0.218746209,594,125,1,594,125,40.58,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,10.07,,5.66,0.032912167,0.141846806,306,71,1,306,71,100.10,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,106.30,,29.82,0.033811441,0.506205571,3144,210,2,1572,105,21.86,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,313.22,,88.20,0.018474663,0.76394984,16954,410,2,8477,205,10.58,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,493.02,,109.11,0.026339551,0.913006894,18718,540,2,9359,270,18.51,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,2136.92,,194.88,0.06285072,3.052749275,34000,700,2,17000,350,3.29,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,101.27,,28.40,0.050083923,0.50634846,2022,200,2,1011,100,22.87,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,242.00,,62.34,0.106421602,0.806675746,2274,300,2,1137,150,13.70,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,accel,Intel® Flex-170,789.11,338.45,,,,,,1,,,19.85,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,accel,Intel® Flex-140,159.67,87.28,,,,,,1,,,99.99,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,31.90,15.02,,0.298118652,2.126579715,107,15,1,107,15,123.93,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,57.77,25.96,,0.117907546,2.063382053,490,28,1,490,28,68.96,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,1.76,0.94,,0.009116198,0.293237699,193,6,1,193,6,2271.06,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,63.86,29.69,,0.149904983,2.280697234,426,28,1,426,28,63.56,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,34.09,,9.18,0.318603364,2.272704,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,55.18,,12.97,0.112605748,1.970600588,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,2.22,,0.74,0.011488536,0.369547916,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
yolo-v3,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,52.21,,13.99,0.122567488,1.864776786,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,54.42,,18.04,0.508553978,3.627685044,107,15,1,107,15,18.25,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,112.01,,63.79,0.957390784,1.723303411,117,65,1,117,65,9.02,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,167.15,,91.46,0.781091755,2.571594392,214,65,1,214,65,6.72,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,599.98,,196.78,1.823638134,4.799815567,329,125,1,329,125,3.05,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,185.17,,102.47,0.964441757,2.848812574,192,65,1,192,65,5.42,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,251.02,,77.47,0.589250939,8.965032143,426,28,1,426,28,4.54,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,186.99,,55.22,0.381609578,6.678167613,490,28,1,490,28,5.74,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,137.72,,76.53,0.454530194,3.934932821,303,35,1,303,35,8.20,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,186.09,,96.75,0.381340728,5.316979292,488,35,1,488,35,6.26,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,290.20,,92.52,0.533462621,8.291533312,544,35,1,544,35,4.20,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,858.94,,286.41,1.433957861,6.871526071,599,125,1,599,125,2.44,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,atom,Intel® Processor N-200,7.65,,4.05,0.039622221,1.27451476,193,6,1,193,6,136.49,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,298.60,,148.94,0.502696157,2.388812138,594,125,1,594,125,3.99,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,106.58,,62.62,0.348291454,1.501087112,306,71,1,306,71,9.44,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,1051.30,,339.08,0.334382549,5.006184448,3144,210,2,1572,105,2.51,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,2824.66,,923.96,0.166607362,6.889417597,16954,410,2,8477,205,1.22,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,4664.34,,1378.51,0.249189958,8.637662274,18718,540,2,9359,270,0.87,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,13094.97,,2140.00,0.385146034,18.70709309,34000,700,2,17000,350,1.09,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,1008.44,,322.64,0.49873439,5.042204682,2022,200,2,1011,100,2.62,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,2217.58,,702.90,0.975190642,7.391945065,2274,300,2,1137,150,1.33,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,accel,Intel® Flex-170,3731.30,2395.93,,,,,,1,,,4.06,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,accel,Intel® Flex-140,595.41,589.87,,,,,,1,,,26.79,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,289.69,151.78,,2.707403451,19.31281129,107,15,1,107,15,13.67,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,482.61,255.46,,0.984928525,17.23624918,490,28,1,490,28,8.06,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,18.65,9.81,,0.096624282,3.108081056,193,6,1,193,6,212.84,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,555.59,293.27,,1.304201059,19.84248754,426,28,1,426,28,6.92,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,266.57,,93.47,2.491299065,17.77126667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,383.36,,116.03,0.782374111,13.69154694,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,23.03,,8.30,0.119308014,3.837741122,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
yolo-v3-tiny,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,444.34,,149.05,1.043043427,15.86916071,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,1.49,,0.38,0.013950494,0.099513526,107,15,1,107,15,672.25,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,2.43,,1.57,0.020775748,0.037396346,117,65,1,117,65,425.88,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,3.62,,2.29,0.016924989,0.055722272,214,65,1,214,65,322.49,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,11.46,,3.96,0.03483307,0.091680639,329,125,1,329,125,121.88,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,3.95,,2.54,0.020576722,0.06078047,192,65,1,192,65,262.38,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,6.56,,1.65,0.015387439,0.234108893,426,28,1,426,28,169.32,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,4.93,,1.23,0.010063508,0.176111389,490,28,1,490,28,209.95,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,3.02,,1.85,0.009976084,0.086364383,303,35,1,303,35,386.94,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,3.86,,2.43,0.007900388,0.110153975,488,35,1,488,35,282.08,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,6.32,,2.19,0.01162052,0.18061608,544,35,1,544,35,169.49,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,18.02,,6.59,0.030079429,0.144140625,599,125,1,599,125,91.92,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,atom,Intel® Processor N200,0.17,,0.09,0.000895225,0.0287964,193,6,1,193,6,5824.42,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,6.10,,3.95,0.010266964,0.048788613,594,125,1,594,125,180.82,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,2.33,,1.48,0.007623167,0.032854777,306,71,1,306,71,431.48,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,29.17,,7.32,0.009277061,0.138890851,3144,210,2,1572,105,70.99,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,95.18,,21.75,0.005614225,0.232155061,16954,410,2,8477,205,23.58,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,129.66,,31.77,0.006926924,0.240107724,18718,540,2,9359,270,73.18,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,597.42,,48.08,0.017571322,0.853464211,34000,700,2,17000,350,9.00,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,27.77,,6.96,0.01373266,0.138837194,2022,200,2,1011,100,74.54,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,68.21,,15.93,0.029995749,0.227367774,2274,300,2,1137,150,43.86,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,accel,Intel® Flex-170,277.97,158.53,,,,,,1,,,57.27,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,accel,Intel® Flex-140,46.10,28.49,,,,,,1,,,346.80,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,8.40,4.35,,0.078545387,0.56029043,107,15,1,107,15,475.75,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,15.51,7.81,,0.03164744,0.553830203,490,28,1,490,28,257.89,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,0.46,0.25,,0.002385152,0.076722378,193,6,1,193,6,8685.75,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,17.28,8.86,,0.04056698,0.617197621,426,28,1,426,28,227.89,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,8.96,,2.57,0.083779393,0.597626333,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,15.42,,3.90,0.031467977,0.550689601,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.55,,0.19,0.002833692,0.09115042,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
unet-camvid-onnx-0001,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,14.32,,3.81,0.033621549,0.511527857,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,24.40,,9.61,0.22800938,1.626466914,107,15,1,107,15,40.45,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,53.51,,33.07,0.457363269,0.823253884,117,65,1,117,65,19.20,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,81.91,,47.09,0.382748211,1.260124878,214,65,1,214,65,13.68,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,248.13,,95.51,0.754209583,1.985079623,329,125,1,329,125,6.70,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,86.77,,53.00,0.451947196,1.334982486,192,65,1,192,65,11.88,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,110.84,,40.77,0.260193662,3.958660714,426,28,1,426,28,10.71,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,76.53,,27.31,0.156180065,2.733151145,490,28,1,490,28,13.42,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,71.40,,42.60,0.235643867,2.040002619,303,35,1,303,35,16.51,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,93.44,,53.42,0.19148001,2.669778431,488,35,1,488,35,12.45,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,129.22,,50.19,0.237534522,3.691965149,544,35,1,544,35,9.42,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,374.34,,153.46,0.624943307,2.994728327,599,125,1,599,125,5.32,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,atom,Intel® Processor N200,3.26,,1.95,0.016869276,0.542628378,193,6,1,193,6,316.56,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,136.37,,72.87,0.229579691,1.090962692,594,125,1,594,125,9.15,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,52.29,,32.98,0.170869765,0.73642462,306,71,1,306,71,19.47,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,452.92,,175.37,0.144058565,2.156762523,3144,210,2,1572,105,5.85,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,978.34,,454.72,0.057705352,2.386186661,16954,410,2,8477,205,3.51,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,1708.13,,573.45,0.091255994,3.163203145,18718,540,2,9359,270,2.38,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,2882.62,,945.60,0.084783045,4.118033592,34000,700,2,17000,350,3.82,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,431.62,,166.10,0.213460485,2.158085503,2022,200,2,1011,100,6.19,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,855.04,,342.65,0.376007503,2.850136872,2274,300,2,1137,150,3.25,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,accel,Intel® Flex-170,1445.14,1480.07,,,,,,1,,,10.71,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,accel,Intel® Flex-140,201.93,259.00,,,,,,1,,,79.17,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,126.14,82.58,,1.178855159,8.409166799,107,15,1,107,15,31.55,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,170.37,110.99,,0.347683792,6.084466364,490,28,1,490,28,23.22,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,8.61,5.66,,0.044636,1.435791346,193,6,1,193,6,463.22,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,210.16,143.23,,0.493335138,7.505741748,426,28,1,426,28,18.84,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,116.50,,51.35,1.088790654,7.766706667,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,114.23,,46.77,0.233123263,4.079657102,490,28,1,490,28,,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,10.54,,4.68,0.05458634,1.755860615,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.
yolo_v8n,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,181.83,,76.10,0.426834977,6.493989286,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,,,,0,0,117,65,1,117,65,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,,,,0,0,214,65,1,214,65,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,,,,0,0,329,125,1,329,125,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,,,,0,0,192,65,1,192,65,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,,,,0,0,303,35,1,303,35,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,,,,0,0,488,35,1,488,35,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,,,,0,0,544,35,1,544,35,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,32.34,,49.17,0.053990134,0.25872072,599,125,1,599,125,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,atom,Intel® Processor N200,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,,,,0,0,594,125,1,594,125,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,,,,0,0,306,71,1,306,71,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,,,,0,0,3144,210,2,1572,105,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,,,,0,0,16954,410,2,8477,205,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,28.29,,43.45,0.001511565,0.052395333,18718,540,2,9359,270,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,47.21,,52.77,0.00138849,0.067440929,34000,700,2,17000,350,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,,,,0,0,2022,200,2,1011,100,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,40.07,,42.88,0.017620783,0.133565533,2274,300,2,1137,150,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,accel,Intel® Flex-170,81.64,,82.81,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,accel,Intel® Flex-140,70.36,,65.11,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec.
bloomz-560m,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,,,,0,0,117,65,1,117,65,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,,,,0,0,214,65,1,214,65,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,,,,0,0,329,125,1,329,125,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,,,,0,0,192,65,1,192,65,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,,,,0,0,303,35,1,303,35,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,,,,0,0,488,35,1,488,35,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,,,,0,0,544,35,1,544,35,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,242.95,,457.83,0.405588331,1.94357928,599,125,1,599,125,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,atom,Intel® Processor N200,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,,,,0,0,594,125,1,594,125,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,,,,0,0,306,71,1,306,71,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,,,,0,0,3144,210,2,1572,105,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,,,,0,0,16954,410,2,8477,205,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,110.25,,200.49,0.005889907,0.204161611,18718,540,2,9359,270,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,119.09,,131.63,0.003502792,0.170135629,34000,700,2,17000,350,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,,,,0,0,2022,200,2,1011,100,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,277.75,,278.22,0.122140598,0.925825733,2274,300,2,1137,150,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,accel,Intel® Flex-170,143.21,,143.00,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,accel,Intel® Flex-140,,,,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec.
GPT-j-6b,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,,,,0,0,117,65,1,117,65,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,,,,0,0,214,65,1,214,65,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,,,,0,0,329,125,1,329,125,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,,,,0,0,192,65,1,192,65,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,,,,0,0,303,35,1,303,35,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,,,,0,0,488,35,1,488,35,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,,,,0,0,544,35,1,544,35,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,285.08,,511.77,0.475930451,2.28065872,599,125,1,599,125,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,atom,Intel® Processor N200,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,,,,0,0,594,125,1,594,125,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,,,,0,0,306,71,1,306,71,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,,,,0,0,3144,210,2,1572,105,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,,,,0,0,16954,410,2,8477,205,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,95.08,,182.22,0.005079793,0.176080667,18718,540,2,9359,270,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,119.29,,133.20,0.003508666,0.170420929,34000,700,2,17000,350,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,,,,0,0,2022,200,2,1011,100,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,338.67,,345.26,0.148931821,1.1289032,2274,300,2,1137,150,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,accel,Intel® Flex-170,138.06,,137.36,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,accel,Intel® Flex-140,,,,,,,,1,,,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,,,,0,0,107,15,1,107,15,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,,,,0,0,490,28,1,490,28,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,,,,0,0,193,6,1,193,6,,msec/token,msec/token/$,msec/token/TDP,msec.
llama-2-7b-chat,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,msec/token,msec/token/$,msec/token/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,atom,Intel® Celeron™ 6305E CPU-only,,,,0,0,107,15,1,107,15,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i3-8100 CPU-only,,,,0,0,117,65,1,117,65,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i5-10500TE CPU-only,,,,0,0,214,65,1,214,65,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i5-13600K CPU-only,,,,0,0,329,125,1,329,125,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i5-8500 CPU-only,,,,0,0,192,65,1,192,65,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i7-1185G7 CPU-only,,,,0,0,426,28,1,426,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i7-1185GRE CPU-only,,,,0,0,490,28,1,490,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i7-8700T CPU-only,,,,0,0,303,35,1,303,35,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i9-10900TE CPU-only,,,,0,0,488,35,1,488,35,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i9-12900TE CPU-only,,,,0,0,544,35,1,544,35,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core,Intel® Core™ i9-13900K CPU-only,43.21,,43.13,0.072129599,0.34564504,599,125,1,599,125,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,atom,Intel® Processor N200,,,,0,0,193,6,1,193,6,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® W1290P CPU-only,,,,0,0,594,125,1,594,125,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® E-2124G CPU-only,,,,0,0,306,71,1,306,71,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Gold 5218T CPU-only,,,,0,0,3144,210,2,1572,105,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Platinum 8270 CPU-only,,,,0,0,16954,410,2,8477,205,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Platinum 8380 CPU-only,18.95,,19.43,0.001012236,0.035087093,18718,540,2,9359,270,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Platinum 8490H CPU-only,5.88,,6.47,0.000172889,0.008397471,34000,700,2,17000,350,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Silver 4216R CPU-only,,,,0,0,2022,200,2,1011,100,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,xeon,Intel® Xeon® Silver 4316 CPU-only,21.92,,22.42,0.00963796,0.073055733,2274,300,2,1137,150,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,accel,Intel® Flex-170,4.29,,4.31,,,,,1,,,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,accel,Intel® Flex-140,18.68,,18.46,,,,,1,,,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core-iGPU,Intel® Celeron™ 6305E iGPU-only,,,,0,0,107,15,1,107,15,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core-iGPU,Intel® Core™ i7-1185GRE iGPU-only,,,,0,0,490,28,1,490,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core-iGPU,Intel® Processor N200 iGPU-only,,,,0,0,193,6,1,193,6,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core-iGPU,Intel® Core™ i7-1185G7 iGPU-only,,,,0,0,426,28,1,426,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,,,,0,0,107,15,1,107,15,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185GRE CPU+iGPU,,,,0,0,490,28,1,490,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core-CPU+iGPU,Intel® Processor N200 CPU+iGPU,,,,0,0,193,6,1,193,6,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
stable diffusion V2,OV-2023.1,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,,,,0,0,426,28,1,426,28,,"Generation time, sec.",Generation-time/$,Generation-time/TDP,msec.
end_rec,,,,,,,,,,,,,,,,,,
Network model,Release,IE-Type,Platform name,Throughput-INT8,Throughput-FP16,Throughput-FP32,Value,Efficiency,Price,TDP,Sockets,Price/Socket,TDP/Socket,Latency,UOM_T,UOM_V,UOM_E,UOM_L,Latency_FP16,Latency_FP32,Latency_int4,Throughput_INT4
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,"Intel® Core™ i3-8100 ",21.27,,,0.182,0.327,117,65,1,117,65,48.62,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,"Intel® Core™ i5-10500TE ",32.04,,21.72,0.150,0.493,214,65,1,214,65,36.77,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,"Intel® Core™ i5-13600K ",112.78,,45.21,0.343,0.902,329,125,1,329,125,17.53,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,50.11,,18.36,0.118,1.790,426,28,1,426,28,23.39,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,38.17,,13.67,0.078,1.363,490,28,1,490,28,29.4,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,Intel® Core™ i7-12700H CPU,88.62,,35.37,0.177,0.771,502,115,1,502,115,17.1,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,"Intel® Core™ i7-8700T ",27.47,,18.34,0.091,0.785,303,35,1,303,35,43.1,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,"Intel® Core™ i9-10900TE ",33.58,,21.38,0.069,0.960,488,35,1,488,35,37.7,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,"Intel® Core™ i9-12900TE ",52.74,,20.43,0.097,1.507,544,35,1,544,35,23.05,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,core,"Intel® Core™ i9-13900K ",164.86,,66.26,0.275,1.319,599,125,1,599,125,13.76,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,xeon,"Intel® Xeon® W1290P ",50.94,,33.27,0.086,0.407,594,125,1,594,125,29.19,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",20.73,,,0.083,0.292,249,71,1,249,71,49.54,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",215.86,,80.31,0.069,1.028,3144,210,2,1572,105,14.03,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",569.64,,222.97,0.034,1.389,16954,410,2,8477,205,7.97,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",876.04,,336.63,0.047,1.622,18718,540,2,9359,270,,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",3131.74,,505.87,0.092,4.474,34000,700,2,17000,350,4.14,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-base-cased,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",205.61,,76.34,0.102,1.028,2022,200,2,1011,100,14.66,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
bert-base-cased,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",423.53,,166.33,0.186,1.412,2274,300,2,1137,150,,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
bert-base-cased,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",824.04,680.35,,0.428,5.494,1925,150,1,1925,150,19.37,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
bert-base-cased,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",620.32,554.08,,1.932,4.135,321,150,1,321,150,25.63,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
bert-base-cased,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,83.66,60.2,42.14,0.196,2.988,426,28,1,426,28,47.28,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
bert-base-cased,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,72.43,55.39,36.92,0.148,2.587,490,28,1,490,28,55.03,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
bert-base-cased,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,91.52,64.83,46.37,0.182,0.796,502,115,1,502,115,43.39,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
bert-base-cased,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,94.49,,39.79,0.222,3.375,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
bert-base-cased,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,88.79,,35.19,0.177,0.772,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
end_rec,,,,,,,,,,,,,,,,,,,,,,
|
||||
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,"Intel® Core™ i3-8100 ",2.09,,,0.018,0.032,117,65,1,117,65,493.93,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,"Intel® Core™ i5-10500TE ",2.98,,1.86,0.014,0.046,214,65,1,214,65,351.2,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,"Intel® Core™ i5-13600K ",10.07,,3.75,0.031,0.081,329,125,1,329,125,156.29,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,5.07,,1.63,0.012,0.181,426,28,1,426,28,219.53,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,3.78,,1.21,0.008,0.135,490,28,1,490,28,267.11,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,Intel® Core™ i7-12700H CPU,8.31,,3.08,0.017,0.072,502,115,1,502,115,155.69,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,"Intel® Core™ i7-8700T ",2.7,,1.61,0.009,0.077,303,35,1,303,35,411.88,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,"Intel® Core™ i9-10900TE ",3.28,,1.99,0.007,0.094,488,35,1,488,35,331.95,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,"Intel® Core™ i9-12900TE ",5.12,,1.83,0.009,0.146,544,35,1,544,35,210.03,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core,"Intel® Core™ i9-13900K ",15.37,,5.95,0.026,0.123,599,125,1,599,125,113.51,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,xeon,"Intel® Xeon® W1290P ",4.65,,3.11,0.008,0.037,594,125,1,594,125,228.25,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",2.11,,,0.008,0.030,249,71,1,249,71,484.55,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",20.81,,6.91,0.007,0.099,3144,210,2,1572,105,106.59,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",50.63,,17.75,0.003,0.123,16954,410,2,8477,205,54.45,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",66.76,,27.44,0.004,0.124,18718,540,2,9359,270,231.49,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",250.47,,45.77,0.007,0.358,34000,700,2,17000,350,27.75,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",20.73,,6.57,0.010,0.104,2022,200,2,1011,100,106.9,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",38.85,,14.53,0.017,0.129,2274,300,2,1137,150,156.69,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",148.54,102.98,,0.077,0.990,1925,150,1,1925,150,107.19,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",127.99,94.72,,0.399,0.853,321,150,1,321,150,124.58,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,9.07,6.62,4.27,0.021,0.324,426,28,1,426,28,452.44,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,5.39,5.77,3.01,0.011,0.192,490,28,1,490,28,741.24,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,10.54,7.57,5,0.021,0.092,502,115,1,502,115,379.08,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,8.49,,3.46,0.020,0.303,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
bert-large-uncased-whole-word-masking-squad-0001,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,8.53,,3.01,0.017,0.074,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,atom,Intel® Atom® X6425E CPU,4.66,,2.87,0.070,0.388,67,12,1,67,12,219.16,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,"Intel® Core™ i3-8100 ",22,,14.03,0.188,0.339,117,65,1,117,65,45.56,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,"Intel® Core™ i5-10500TE ",35.15,,16.61,0.164,0.541,214,65,1,214,65,33.27,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,"Intel® Core™ i5-13600K ",101.57,,41.76,0.309,0.813,329,125,1,329,125,16.21,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,52.36,,16.31,0.123,1.870,426,28,1,426,28,19.93,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,31.69,,9.45,0.065,1.132,490,28,1,490,28,29.7,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,Intel® Core™ i7-12700H CPU,74.8,,29.13,0.149,0.650,502,115,1,502,115,16.96,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,"Intel® Core™ i7-8700T ",32.22,,18.38,0.106,0.921,303,35,1,303,35,37.52,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,"Intel® Core™ i9-10900TE ",39.4,,18.25,0.081,1.126,488,35,1,488,35,28.44,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,"Intel® Core™ i9-12900TE ",58.11,,22.54,0.107,1.660,544,35,1,544,35,21.65,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core,"Intel® Core™ i9-13900K ",149.59,,57.89,0.250,1.197,599,125,1,599,125,12.49,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,atom,Intel® Processor N200 CPU,1.72,,1.01,0.009,0.287,193,6,1,193,6,596.5,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,xeon,"Intel® Xeon® W1290P ",51.12,,19.38,0.086,0.409,594,125,1,594,125,21.94,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",21.77,,14.75,0.087,0.307,249,71,1,249,71,45.69,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",188.66,,76.86,0.060,0.898,3144,210,2,1572,105,11.81,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",413.18,,154.25,0.024,1.008,16954,410,2,8477,205,5.65,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",564.23,,223.46,0.030,1.045,18718,540,2,9359,270,5.36,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",1001.4,,380.46,0.029,1.431,34000,700,2,17000,350,3.6,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",182.57,,74.43,0.090,0.913,2022,200,2,1011,100,12.14,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",360.93,,138.14,0.159,1.203,2274,300,2,1137,150,7.04,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",732.85,602.34,,0.381,4.886,1925,150,1,1925,150,21.8,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",597.79,484.98,,1.862,3.985,321,150,1,321,150,26.18,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,atom,Intel® Celeron® 6305E CPU,11.64,,4.56,0.109,0.776,107,15,1,107,15,87.1,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,10.87,11.07,5.64,0.162,0.906,67,12,1,67,12,367.4,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,104.71,48.95,27.69,0.246,3.739,426,28,1,426,28,37.85,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,76.34,36.22,13.67,0.156,2.726,490,28,1,490,28,52.07,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,113.65,52.72,33.36,0.226,0.988,502,115,1,502,115,34.81,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,3.65,1.92,1.27,0.019,0.609,193,6,1,193,6,1094.07,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,60.3,27.73,16.44,0.564,4.020,107,15,1,107,15,66.22,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,11.11,,5.66,0.166,0.926,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,86.78,,24.13,0.204,3.099,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,75.71,,28.98,0.151,0.658,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,4.66,,1.9,0.024,0.776,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
deeplabv3,OV-2023.2,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,61.15,,16.89,0.571,4.077,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,atom,Intel® Atom® X6425E CPU,7.29,,5.01,0.109,0.608,67,12,1,67,12,140.41,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,"Intel® Core™ i3-8100 ",36.6,,24.31,0.313,0.563,117,65,1,117,65,28.3,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,"Intel® Core™ i5-10500TE ",58.84,,29.38,0.275,0.905,214,65,1,214,65,21.11,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,"Intel® Core™ i5-13600K ",139.32,,77.22,0.423,1.115,329,125,1,329,125,11.92,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,73.76,,41.07,0.173,2.634,426,28,1,426,28,15.53,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,52.57,,21.48,0.107,1.877,490,28,1,490,28,20.87,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,Intel® Core™ i7-12700H CPU,114.46,,54.77,0.228,0.995,502,115,1,502,115,11.88,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,"Intel® Core™ i7-8700T ",51.93,,34.22,0.171,1.484,303,35,1,303,35,24.28,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,"Intel® Core™ i9-10900TE ",66,,35.11,0.135,1.886,488,35,1,488,35,18.95,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,"Intel® Core™ i9-12900TE ",75.37,,44.41,0.139,2.154,544,35,1,544,35,15.52,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core,"Intel® Core™ i9-13900K ",207,,102.36,0.346,1.656,599,125,1,599,125,9.45,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,atom,Intel® Processor N200 CPU,2.09,,1.67,0.011,0.349,193,6,1,193,6,488.71,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,xeon,"Intel® Xeon® W1290P ",96.56,,38.61,0.163,0.772,594,125,1,594,125,14.19,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",35.01,,25.27,0.141,0.493,249,71,1,249,71,29.41,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",258.46,,164.63,0.082,1.231,3144,210,2,1572,105,11.88,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",518.85,,310.76,0.031,1.265,16954,410,2,8477,205,7.42,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",834.12,,495.89,0.045,1.545,18718,540,2,9359,270,4.31,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",1043.83,,861.5,0.031,1.491,34000,700,2,17000,350,5.43,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",248.64,,157.52,0.123,1.243,2022,200,2,1011,100,12.27,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",469.81,,293.67,0.207,1.566,2274,300,2,1137,150,5.89,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",846.71,825.71,,0.440,5.645,1925,150,1,1925,150,18.66,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",582.03,590.73,,1.813,3.880,321,150,1,321,150,26.38,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,atom,Intel® Celeron® 6305E CPU,18.06,,11.09,0.169,1.204,107,15,1,107,15,57.26,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,19.71,22.48,11.18,0.294,1.643,67,12,1,67,12,202.28,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,110.42,90.58,46.89,0.259,3.944,426,28,1,426,28,35.95,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,62.89,49.91,23.65,0.128,2.246,490,28,1,490,28,63.05,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,127.71,103.78,54.39,0.254,1.110,502,115,1,502,115,30.98,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,5.53,4.94,2.75,0.029,0.921,193,6,1,193,6,721.64,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,72.14,60.61,33.85,0.674,4.809,107,15,1,107,15,55.18,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,20.11,,11.51,0.300,1.676,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,101.03,,48.05,0.237,3.608,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,114.83,,55.31,0.229,0.999,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,5.73,,3.57,0.030,0.955,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
efficientdet-d0,OV-2023.2,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,56.19,,32.14,0.525,3.746,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,atom,Intel® Atom® X6425E CPU,132.01,,79.71,1.970,11.001,67,12,1,67,12,7.97,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,"Intel® Core™ i3-8100 ",536.37,,,4.584,8.252,117,65,1,117,65,2.02,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,"Intel® Core™ i5-10500TE ",898.55,,500.27,4.199,13.824,214,65,1,214,65,1.57,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,"Intel® Core™ i5-13600K ",2785.11,,1237.02,8.465,22.281,329,125,1,329,125,0.88,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,1347.18,,525.71,3.162,48.113,426,28,1,426,28,0.86,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,979.43,,319,1.999,34.980,490,28,1,490,28,1.19,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,Intel® Core™ i7-12700H CPU,2099.29,,1056.24,4.182,18.255,502,115,1,502,115,1.1,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,"Intel® Core™ i7-8700T ",741.65,,519.77,2.448,21.190,303,35,1,303,35,1.86,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,"Intel® Core™ i9-10900TE ",949.26,,604.02,1.945,27.122,488,35,1,488,35,1.5,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,"Intel® Core™ i9-12900TE ",1300.22,,657.07,2.390,37.149,544,35,1,544,35,1.32,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core,"Intel® Core™ i9-13900K ",4089.6,,2014.33,6.827,32.717,599,125,1,599,125,0.71,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,atom,Intel® Processor N200 CPU,41.1,,29.71,0.213,6.851,193,6,1,193,6,27.14,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,xeon,"Intel® Xeon® W1290P ",1450.73,,542.77,2.442,11.606,594,125,1,594,125,1.29,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",523.03,,,2.101,7.367,249,71,1,249,71,2.07,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",5410.81,,1915.84,1.721,25.766,3144,210,2,1572,105,1.42,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",14207.13,,4438.67,0.838,34.652,16954,410,2,8477,205,0.93,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",22308.51,,6801.73,1.192,41.312,18718,540,2,9359,270,0.57,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",38064.38,,10986.01,1.120,54.378,34000,700,2,17000,350,0.66,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",5178.33,,1862.47,2.561,25.892,2022,200,2,1011,100,1.45,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",12161.33,,3597.47,5.348,40.538,2274,300,2,1137,150,0.56,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",6748.16,5698.62,,3.506,44.988,1925,150,1,1925,150,2.37,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",4308.65,3849.95,,13.423,28.724,321,150,1,321,150,3.63,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,atom,Intel® Celeron® 6305E CPU,265.71,,132.81,2.483,17.714,107,15,1,107,15,3.66,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,191.22,225.68,130.69,2.854,15.935,67,12,1,67,12,20.63,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,1014.86,749.24,525.16,2.382,36.245,426,28,1,426,28,3.77,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,880.63,557.89,349.94,1.797,31.451,490,28,1,490,28,4.25,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,1319.62,916.27,563.83,2.629,11.475,502,115,1,502,115,2.83,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,58.68,40.52,26.29,0.304,9.781,193,6,1,193,6,67.34,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,685.09,513.1,339.2,6.403,45.672,107,15,1,107,15,5.56,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,107.78,,137.9,1.609,8.981,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,2182.22,,612.95,5.123,77.937,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,2071.13,,1048.22,4.126,18.010,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,115.35,,50.03,0.598,19.224,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
mobilenet-v2,OV-2023.2,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,1465.46,,396.56,13.696,97.698,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,atom,Intel® Atom® X6425E CPU,19.92,,8.18,0.297,1.660,67,12,1,67,12,51.27,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,"Intel® Core™ i3-8100 ",96.91,,50.72,0.828,1.491,117,65,1,117,65,10.72,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,"Intel® Core™ i5-10500TE ",145.07,,74.01,0.678,2.232,214,65,1,214,65,8.19,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,"Intel® Core™ i5-13600K ",515.17,,140.11,1.566,4.121,329,125,1,329,125,3.89,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,229.34,,61.85,0.538,8.191,426,28,1,426,28,5.03,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,172.44,,45.06,0.352,6.159,490,28,1,490,28,6.62,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,Intel® Core™ i7-12700H CPU,445.18,,122.92,0.887,3.871,502,115,1,502,115,3.93,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,"Intel® Core™ i7-8700T ",122.89,,62.1,0.406,3.511,303,35,1,303,35,9.97,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,"Intel® Core™ i9-10900TE ",156.59,,75.57,0.321,4.474,488,35,1,488,35,7.6,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,"Intel® Core™ i9-12900TE ",269.4,,72.67,0.495,7.697,544,35,1,544,35,4.89,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core,"Intel® Core™ i9-13900K ",749.69,,228.22,1.252,5.998,599,125,1,599,125,2.98,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,atom,Intel® Processor N200 CPU,6.72,,3.13,0.035,1.120,193,6,1,193,6,159.9,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,xeon,"Intel® Xeon® W1290P ",240.85,,96.84,0.405,1.927,594,125,1,594,125,5.5,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",92.92,,49.94,0.373,1.309,249,71,1,249,71,11.12,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",968.92,,267.96,0.308,4.614,3144,210,2,1572,105,2.91,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",2902.26,,747.22,0.171,7.079,16954,410,2,8477,205,1.55,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",4946.11,,1154.11,0.264,9.159,18718,540,2,9359,270,,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",19987.31,,1672.83,0.588,28.553,34000,700,2,17000,350,1.02,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",931.66,,257,0.461,4.658,2022,200,2,1011,100,3,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",2276.19,,562.55,1.001,7.587,2274,300,2,1137,150,,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",3436.03,2103.96,,1.785,22.907,1925,150,1,1925,150,4.65,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",2320.83,1555.26,,7.230,15.472,321,150,1,321,150,6.8,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,atom,Intel® Celeron® 6305E CPU,49.59,,14.36,0.463,3.306,107,15,1,107,15,19.89,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,49.36,52.35,27.44,0.737,4.113,67,12,1,67,12,80.69,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,351.53,206.09,116.49,0.825,12.555,426,28,1,426,28,11.12,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,291.8,170.69,95.05,0.596,10.421,490,28,1,490,28,13.6,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,389.36,224.15,136.98,0.776,3.386,502,115,1,502,115,10.01,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,14.66,7.82,4.24,0.076,2.444,193,6,1,193,6,271.96,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,213.06,118.28,67.33,1.991,14.204,107,15,1,107,15,18.66,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,73.78,,32.26,1.101,6.148,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,467.05,,119.19,1.096,16.680,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,446.64,,123.08,0.890,3.884,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,20.62,,6.29,0.107,3.437,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
resnet-50,OV-2023.2,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,299.3,,75.5,2.797,19.953,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,atom,Intel® Atom® X6425E CPU,0.33,,0.13,0.005,0.028,67,12,1,67,12,2993.01,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,"Intel® Core™ i3-8100 ",1.68,,0.97,0.014,0.026,117,65,1,117,65,601.85,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,"Intel® Core™ i5-10500TE ",2.42,,1.4,0.011,0.037,214,65,1,214,65,459.92,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,"Intel® Core™ i5-13600K ",8.24,,2.4,0.025,0.066,329,125,1,329,125,163.48,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,3.91,,1,0.009,0.140,426,28,1,426,28,277.92,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,2.88,,0.77,0.006,0.103,490,28,1,490,28,338.91,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,Intel® Core™ i7-12700H CPU,7.23,,2.11,0.014,0.063,502,115,1,502,115,160.16,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,"Intel® Core™ i7-8700T ",2.02,,1.13,0.007,0.058,303,35,1,303,35,564.49,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,"Intel® Core™ i9-10900TE ",2.65,,1.47,0.005,0.076,488,35,1,488,35,411.44,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,"Intel® Core™ i9-12900TE ",4.43,,1.32,0.008,0.126,544,35,1,544,35,233.69,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core,"Intel® Core™ i9-13900K ",12.56,,4.02,0.021,0.100,599,125,1,599,125,125.42,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,atom,Intel® Processor N200 CPU,0.11,,0.05,0.001,0.019,193,6,1,193,6,8949.48,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,xeon,"Intel® Xeon® W1290P ",4.33,,2.45,0.007,0.035,594,125,1,594,125,238.2,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",1.6,,0.92,0.006,0.023,249,71,1,249,71,628.09,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",17.64,,4.57,0.006,0.084,3144,210,2,1572,105,115.69,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",57.78,,14.8,0.003,0.141,16954,410,2,8477,205,36.97,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",78.79,,20.72,0.004,0.146,18718,540,2,9359,270,108.29,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",447.58,,31.29,0.013,0.639,34000,700,2,17000,350,8.52,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",16.78,,4.35,0.008,0.084,2022,200,2,1011,100,121.64,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",42.36,,10.47,0.019,0.141,2274,300,2,1137,150,62,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",212,109.86,,0.110,1.413,1925,150,1,1925,150,75.46,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",147.33,81.3,,0.459,0.982,321,150,1,321,150,107.9,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,atom,Intel® Celeron® 6305E CPU,0.89,,0.23,0.008,0.059,107,15,1,107,15,1121.85,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,1.18,1.18,0.6,0.018,0.098,67,12,1,67,12,3388.59,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,9.69,5.42,2.82,0.023,0.346,426,28,1,426,28,422.17,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,8.81,4.73,2.22,0.018,0.315,490,28,1,490,28,454.51,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,10.57,6.15,3.31,0.021,0.092,502,115,1,502,115,378.05,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,0.29,0.16,,0.001,0.048,193,6,1,193,6,13815.91,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,5.07,2.64,1.41,0.047,0.338,107,15,1,107,15,774.3,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,0.33,,0.13,0.005,0.028,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,3.91,,1,0.009,0.140,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,7.22,,2.11,0.014,0.063,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.11,,0.05,0.001,0.019,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd-resnet34-1200,OV-2023.2,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,0.89,,0.23,0.008,0.059,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,atom,Intel® Atom® X6425E CPU,45.25,,21.49,0.675,3.771,67,12,1,67,12,23.03,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,"Intel® Core™ i3-8100 ",211.26,,122.9,1.806,3.250,117,65,1,117,65,4.94,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,"Intel® Core™ i5-10500TE ",328.11,,171.73,1.533,5.048,214,65,1,214,65,3.6,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,"Intel® Core™ i5-13600K ",958.88,,352.8,2.915,7.671,329,125,1,329,125,2.39,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,516.83,,149.6,1.213,18.458,426,28,1,426,28,1.95,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,387.14,,100.71,0.790,13.827,490,28,1,490,28,2.82,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,Intel® Core™ i7-12700H CPU,851.54,,313.45,1.696,7.405,502,115,1,502,115,2.26,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,"Intel® Core™ i7-8700T ",276.74,,157.91,0.913,7.907,303,35,1,303,35,4.32,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,"Intel® Core™ i9-10900TE ",364.53,,192.13,0.747,10.415,488,35,1,488,35,3.37,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,"Intel® Core™ i9-12900TE ",524.73,,184.04,0.965,14.992,544,35,1,544,35,3.11,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,core,"Intel® Core™ i9-13900K ",1448.44,,577.78,2.418,11.587,599,125,1,599,125,2.07,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,atom,Intel® Processor N200 CPU,14.48,,7.94,0.075,2.413,193,6,1,193,6,72.01,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,xeon,"Intel® Xeon® W1290P ",575.79,,221.76,0.969,4.606,594,125,1,594,125,2.37,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",202.62,,125.71,0.814,2.854,249,71,1,249,71,5.11,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",2056.28,,640.87,0.654,9.792,3144,210,2,1572,105,1.56,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",5764.35,,1656.74,0.340,14.059,16954,410,2,8477,205,1.1,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",10274.61,,2320.94,0.549,19.027,18718,540,2,9359,270,0.66,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",22310.17,,3557.58,0.656,31.872,34000,700,2,17000,350,0.82,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",1961.85,,610.96,0.970,9.809,2022,200,2,1011,100,1.63,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",4825.79,,1246.04,2.122,16.086,2274,300,2,1137,150,0.81,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",4044.15,3428.72,,2.101,26.961,1925,150,1,1925,150,3.93,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",2984.21,2546.5,,9.297,19.895,321,150,1,321,150,5.28,FPS,FPS/$,FPS/TDP,msec.,,,,
ssd_mobilenet_v1_coco,OV-2023.2,atom,Intel® Celeron® 6305E CPU,107.12,,36.58,1.001,7.142,107,15,1,107,15,9.17,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,92.52,95.67,51.13,1.381,7.710,67,12,1,67,12,42.26,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,651.76,382.05,253.7,1.530,23.277,426,28,1,426,28,6.02,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,524.22,312.45,186.78,1.070,18.722,490,28,1,490,28,7.46,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,773.55,416.41,274.89,1.541,6.727,502,115,1,502,115,4.96,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,29.11,15.38,9.5,0.151,4.852,193,6,1,193,6,136.41,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,411.09,221.78,136.65,3.842,27.406,107,15,1,107,15,9.59,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,108.74,,57.49,1.623,9.061,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,681.22,,234.33,1.599,24.329,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,846.65,,312.78,1.687,7.362,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,35.06,,14.07,0.182,5.843,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
ssd_mobilenet_v1_coco,OV-2023.2,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,299.91,,136.25,2.803,19.994,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
end_rec,,,,,,,,,,,,,,,,,,,,,,
|
||||
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
|
||||
unet-camvid-onnx-0001,OV-2023.2,atom,Intel® Atom® X6425E CPU,0.48,,0.06,0.007,0.040,67,12,1,67,12,2086.28,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,"Intel® Core™ i3-8100 ",2.42,,1.55,0.021,0.037,117,65,1,117,65,426.14,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,"Intel® Core™ i5-10500TE ",3.6,,2.28,0.017,0.055,214,65,1,214,65,324.72,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,"Intel® Core™ i5-13600K ",11.52,,3.96,0.035,0.092,329,125,1,329,125,121.88,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,6.54,,1.63,0.015,0.234,426,28,1,426,28,168.96,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,4.87,,1.22,0.010,0.174,490,28,1,490,28,209.5,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,Intel® Core™ i7-12700H CPU,10.23,,3.55,0.020,0.089,502,115,1,502,115,123.74,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,"Intel® Core™ i7-8700T ",3.02,,1.86,0.010,0.086,303,35,1,303,35,385.98,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,"Intel® Core™ i9-10900TE ",3.86,,2.4,0.008,0.110,488,35,1,488,35,286.48,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,"Intel® Core™ i9-12900TE ",6.29,,2.21,0.012,0.180,544,35,1,544,35,167.25,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core,"Intel® Core™ i9-13900K ",17.97,,6.61,0.030,0.144,599,125,1,599,125,91.7,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,atom,Intel® Processor N200 CPU,0.17,,0.09,0.001,0.029,193,6,1,193,6,5851.61,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,xeon,"Intel® Xeon® W1290P ",6.17,,3.96,0.010,0.049,594,125,1,594,125,180.39,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",2.31,,1.48,0.009,0.033,249,71,1,249,71,434.36,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",29.19,,7.31,0.009,0.139,3144,210,2,1572,105,71.02,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",95.18,,21.6,0.006,0.232,16954,410,2,8477,205,23.81,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",129.12,,31.53,0.007,0.239,18718,540,2,9359,270,73.82,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",594.44,,48.37,0.017,0.849,34000,700,2,17000,350,8.51,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",27.77,,6.96,0.014,0.139,2022,200,2,1011,100,74.6,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",69.04,,15.92,0.030,0.230,2274,300,2,1137,150,43.95,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",308.2,201.07,,0.160,2.055,1925,150,1,1925,150,51.9,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",264.35,182.28,,0.824,1.762,321,150,1,321,150,60.21,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,atom,Intel® Celeron® 6305E CPU,1.49,,0.38,0.014,0.099,107,15,1,107,15,675.13,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,0.98,1.99,0.98,0.015,0.082,67,12,1,67,12,4060.12,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,17.25,8.84,4.82,0.040,0.616,426,28,1,426,28,227.85,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,15.51,7.82,4.16,0.032,0.554,490,28,1,490,28,257.82,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,18.55,9.77,5.39,0.037,0.161,502,115,1,502,115,215.12,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,0.46,0.25,0.14,0.002,0.077,193,6,1,193,6,8685.49,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,8.41,4.38,2.38,0.079,0.560,107,15,1,107,15,475.59,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,1.23,,0.79,0.018,0.102,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,13.96,,3.78,0.033,0.499,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,10.26,,3.52,0.020,0.089,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,0.57,,0.19,0.003,0.094,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
unet-camvid-onnx-0001,OV-2023.2,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,8.94,,2.44,0.084,0.596,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,atom,Intel® Atom® X6425E CPU,2.09,,0.88,0.031,0.174,67,12,1,67,12,484.41,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,"Intel® Core™ i3-8100 ",10.63,,5.8,0.091,0.163,117,65,1,117,65,95.04,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,"Intel® Core™ i5-10500TE ",15.37,,8.28,0.072,0.236,214,65,1,214,65,74.25,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,"Intel® Core™ i5-13600K ",51.79,,15.62,0.157,0.414,329,125,1,329,125,29.87,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,24.16,,6.61,0.057,0.863,426,28,1,426,28,46.04,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,18.04,,4.84,0.037,0.644,490,28,1,490,28,57.2,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,Intel® Core™ i7-12700H CPU,45.55,,13.34,0.091,0.396,502,115,1,502,115,29.39,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,"Intel® Core™ i7-8700T ",12.71,,6.82,0.042,0.363,303,35,1,303,35,87.06,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,"Intel® Core™ i9-10900TE ",16.64,,8.64,0.034,0.475,488,35,1,488,35,67.32,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,"Intel® Core™ i9-12900TE ",27.33,,8.16,0.050,0.781,544,35,1,544,35,41.73,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core,"Intel® Core™ i9-13900K ",78.06,,25.64,0.130,0.624,599,125,1,599,125,23.25,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,atom,Intel® Processor N200 CPU,0.7,,0.34,0.004,0.117,193,6,1,193,6,1470.7,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,xeon,"Intel® Xeon® W1290P ",27.37,,14.08,0.046,0.219,594,125,1,594,125,40.66,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",10.06,,5.64,0.040,0.142,249,71,1,249,71,100.33,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",106.36,,29.72,0.034,0.506,3144,210,2,1572,105,21.82,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",313.83,,87.89,0.019,0.765,16954,410,2,8477,205,10.5,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",490.61,,109.01,0.026,0.909,18718,540,2,9359,270,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",2125.85,,193.93,0.063,3.037,34000,700,2,17000,350,3.31,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",101.13,,28.36,0.050,0.506,2022,200,2,1011,100,22.77,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",242.25,,62.31,0.107,0.808,2274,300,2,1137,150,13.97,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",784.51,385.29,,0.408,5.230,1925,150,1,1925,150,20.34,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",582.91,341.6,,1.816,3.886,321,150,1,321,150,27.27,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,atom,Intel® Celeron® 6305E CPU,5.45,,1.54,0.051,0.363,107,15,1,107,15,184.42,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,6.74,6.85,3.38,0.101,0.562,67,12,1,67,12,591.73,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,63.57,29.68,16.2,0.149,2.270,426,28,1,426,28,63.78,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,57.51,26.04,13.46,0.117,2.054,490,28,1,490,28,69.04,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,70.17,33.53,18.68,0.140,0.610,502,115,1,502,115,56.66,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,1.76,0.94,0.5,0.009,0.293,193,6,1,193,6,2270.28,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,32.22,15.09,8.06,0.301,2.148,107,15,1,107,15,123.79,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,7.72,,3.75,0.115,0.643,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,51.27,,13.81,0.120,1.831,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,45.37,,13.46,0.090,0.395,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,2.17,,0.7,0.011,0.361,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3,OV-2023.2,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,33.62,,8.52,0.314,2.242,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,atom,Intel® Atom® X6425E CPU,22.9,,10.3,0.342,1.908,67,12,1,67,12,44.81,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,"Intel® Core™ i3-8100 ",111.7,,63.53,0.955,1.718,117,65,1,117,65,9.05,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,"Intel® Core™ i5-10500TE ",167.36,,91.55,0.782,2.575,214,65,1,214,65,6.74,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,"Intel® Core™ i5-13600K ",600,,195.96,1.824,4.800,329,125,1,329,125,3.05,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,252.28,,77.33,0.592,9.010,426,28,1,426,28,4.56,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,186.61,,55.02,0.381,6.665,490,28,1,490,28,5.72,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,Intel® Core™ i7-12700H CPU,501.3,,153.33,0.999,4.359,502,115,1,502,115,3.07,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,"Intel® Core™ i7-8700T ",137.83,,76.63,0.455,3.938,303,35,1,303,35,8.19,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,"Intel® Core™ i9-10900TE ",184.15,,95.48,0.377,5.261,488,35,1,488,35,6.36,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,"Intel® Core™ i9-12900TE ",293.87,,93.77,0.540,8.396,544,35,1,544,35,4.16,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core,"Intel® Core™ i9-13900K ",859.57,,285.93,1.435,6.877,599,125,1,599,125,2.43,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,atom,Intel® Processor N200 CPU,7.83,,4.08,0.041,1.306,193,6,1,193,6,136.7,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,xeon,"Intel® Xeon® W1290P ",298.34,,148.85,0.502,2.387,594,125,1,594,125,4.01,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",106.15,,62.71,0.426,1.495,249,71,1,249,71,9.46,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",1051.35,,338.78,0.334,5.006,3144,210,2,1572,105,2.52,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",2835.63,,919.26,0.167,6.916,16954,410,2,8477,205,1.22,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",4653.13,,1373.23,0.249,8.617,18718,540,2,9359,270,0.87,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",13069.92,,2139.01,0.384,18.671,34000,700,2,17000,350,1.07,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",1009.33,,322.76,0.499,5.047,2022,200,2,1011,100,2.62,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",2219.25,,701.36,0.976,7.397,2274,300,2,1137,150,1.33,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",3774.19,2809.6,,1.961,25.161,1925,150,1,1925,150,4.2,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",2481.58,2188.3,,7.731,16.544,321,150,1,321,150,6.38,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,atom,Intel® Celeron® 6305E CPU,54.03,,17.97,0.505,3.602,107,15,1,107,15,18.27,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,65.71,66.39,33.87,0.981,5.476,67,12,1,67,12,60.33,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,546.82,290.91,170.54,1.284,19.529,426,28,1,426,28,7.02,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,494.07,258.4,135.87,1.008,17.645,490,28,1,490,28,7.98,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,614.04,322.38,201.06,1.223,5.339,502,115,1,502,115,6.27,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,18.67,9.83,5.51,0.097,3.112,193,6,1,193,6,213.14,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,292.06,153.11,86.79,2.730,19.471,107,15,1,107,15,13.59,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,28.49,,39.7,0.425,2.374,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,485.88,,147.22,1.141,17.353,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,504.34,,154.35,1.005,4.386,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,23.01,,8.08,0.119,3.835,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v3_tiny,OV-2023.2,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,322.03,,92.97,3.010,21.469,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec," "," "," "," "," "," "," "," "," "," "," "," "," "," ",FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,atom,Intel® Atom® X6425E CPU,10.23,,5.1,0.153,0.853,67,12,1,67,12,101.71,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,"Intel® Core™ i3-8100 ",53.43,,33.01,0.457,0.822,117,65,1,117,65,19.24,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,"Intel® Core™ i5-10500TE ",81.28,,46.84,0.380,1.251,214,65,1,214,65,13.7,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,"Intel® Core™ i5-13600K ",249.13,,95.35,0.757,1.993,329,125,1,329,125,6.67,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,Intel® Core™ i7-1185G7 CPU,110.57,,40.76,0.260,3.949,426,28,1,426,28,10.77,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,Intel® Core™ i7-1185GRE CPU,77.4,,27.48,0.158,2.764,490,28,1,490,28,13.63,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,Intel® Core™ i7-12700H CPU,213.22,,81.23,0.425,1.854,502,115,1,502,115,6.64,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,"Intel® Core™ i7-8700T ",71.39,,42.39,0.236,2.040,303,35,1,303,35,16.54,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,"Intel® Core™ i9-10900TE ",92.64,,52.82,0.190,2.647,488,35,1,488,35,12.63,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,"Intel® Core™ i9-12900TE ",132.43,,50.68,0.243,3.784,544,35,1,544,35,9.16,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core,"Intel® Core™ i9-13900K ",377.83,,153.02,0.631,3.023,599,125,1,599,125,5.31,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,atom,Intel® Processor N200 CPU,3.26,,1.94,0.017,0.543,193,6,1,193,6,316.73,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,xeon,"Intel® Xeon® W1290P ",135.15,,72.61,0.228,1.081,594,125,1,594,125,9.22,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,xeon,"Intel® Xeon® E-2124G ",52.15,,32.9,0.209,0.735,249,71,1,249,71,19.49,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,xeon,"Intel® Xeon® Gold 5218T ",450.82,,174.94,0.143,2.147,3144,210,2,1572,105,5.96,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,xeon,"Intel® Xeon® Platinum 8270 ",998.4,,454.7,0.059,2.435,16954,410,2,8477,205,3.53,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",1714.12,,554.58,0.092,3.174,18718,540,2,9359,270,2.38,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,xeon,"Intel® Xeon® Platinum 8490H ",2889.04,,998.41,0.085,4.127,34000,700,2,17000,350,3.61,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,xeon,"Intel® Xeon® Silver 4216R ",431.74,,165.69,0.214,2.159,2022,200,2,1011,100,6.18,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,xeon,"Intel® Xeon® Silver 4316 ",862.18,,340.38,0.379,2.874,2274,300,2,1137,150,3.32,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",1539.24,1433.21,,0.800,10.262,1925,150,1,1925,150,10.28,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",1005.97,1032.49,,3.134,6.706,321,150,1,321,150,15.79,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,atom,Intel® Celeron® 6305E CPU,24.25,,9.56,0.227,1.617,107,15,1,107,15,40.75,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,atom-iGPU,Intel® Atom® X6425E iGPU,32.03,33.52,19.17,0.478,2.669,67,12,1,67,12,124.04,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core-iGPU,Intel® Core™ i7-1185G7 iGPU,206.31,140.75,88.2,0.484,7.368,426,28,1,426,28,19.11,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core-iGPU,Intel® Core™ i7-1185GRE iGPU,164.45,109.09,61.03,0.336,5.873,490,28,1,490,28,24.02,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core-iGPU,Intel® Core™ i7-12700H iGPU,220.75,149.98,96.81,0.440,1.920,502,115,1,502,115,17.82,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,atom-iGPU,Intel® Processor N200 iGPU,8.37,5.57,3.25,0.043,1.394,193,6,1,193,6,477.17,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,atom-iGPU,Intel® Celeron® 6305E iGPU,35.14,,20.32,0.524,2.928,67,12,1,67,12,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,atom-CPU+iGPU,Intel® Atom® X6425E CPU+iGPU,179.3,,74.12,0.421,6.403,426,28,1,426,28,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-1185G7 CPU+iGPU,212.84,,82.48,0.424,1.851,502,115,1,502,115,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,core-CPU+iGPU,Intel® Core™ i7-12700H CPU+iGPU,10.06,,4.44,0.052,1.676,193,6,1,193,6,,FPS,FPS/$,FPS/TDP,msec.,,,,
yolo_v8n,OV-2023.2,atom-CPU+iGPU,Intel® Processor N200 CPU+iGPU,113.96,,48.9,1.065,7.597,107,15,1,107,15,,FPS,FPS/$,FPS/TDP,msec.,,,,
end_rec,,atom-CPU+iGPU,Intel® Celeron® 6305E CPU+iGPU,,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,,,,,,
chatGLM2-6B,OV-2023.2,core,"Intel® Core™ i9-13900K ",277,,340,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,374
chatGLM2-6B,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",,173,,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,
chatGLM2-6B,OV-2023.2,xeon,Intel® Xeon® Platinum 8490H,,114,,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,
chatGLM2-6B,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",,,,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,
chatGLM2-6B,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",95,121,,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,,,,,,
Llama-2-7b-chat,OV-2023.2,core,"Intel® Core™ i9-13900K ",415,,420,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,417
Llama-2-7b-chat,OV-2023.2,xeon,"Intel® Xeon® Platinum 8380 ",179,,201,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,133
Llama-2-7b-chat,OV-2023.2,xeon,Intel® Xeon® Platinum 8490H,143,,133,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,136
Llama-2-7b-chat,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",111,95,,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,126
Llama-2-7b-chat,OV-2023.2,accel,"Intel® Arc®A-Series Graphics ",163,163,,,,,,,,,,msec./token,FPS/$,FPS/TDP,msec./token,,,,221
end_rec,,,,,,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,,,,,,
Stable-Diffusion-v2-1,OV-2023.2,accel,"Intel® Data Center GPU Flex 170 ",7.1,4.4,,,,,,,,,,sec.,FPS/$,FPS/TDP,sec.,,,,
end_rec,,,,,,,,,,,,,,,,,,,,,,
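The rows above form one flat CSV in which per-model records are delimited by `begin_rec` and `end_rec` sentinel rows. A minimal sketch of how such records can be split out (`splitRecords` is a hypothetical helper, not part of graphs.js; the naive comma split is safe here because the quoted platform cells contain no commas):

```javascript
// Hypothetical helper: group flat CSV rows into per-model records.
// A record is every data row between a `begin_rec` row and the next
// `end_rec` row; column 0 of each data row names the network model.
function splitRecords(csvLines) {
  const records = [];
  let current = null;
  for (const line of csvLines) {
    const cells = line.split(',');   // naive split; no quoted cell here contains a comma
    if (cells[0] === 'begin_rec') {
      current = [];                  // start collecting a new record
    } else if (cells[0] === 'end_rec') {
      if (current !== null) records.push(current);
      current = null;                // record finished
    } else if (current !== null) {
      current.push(cells);           // data row inside a record
    }
  }
  return records;
}
```

Rows that appear before the first `begin_rec` (as at the top of this excerpt, which starts mid-record) are simply skipped by this sketch.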
52
docs/_static/css/custom.css
vendored
@@ -20,6 +20,7 @@ main .searchForm {
pre {
white-space: pre-wrap;
word-wrap: break-word;
background-color: #efefef;
}

@@ -29,26 +30,28 @@ a#wap_dns {display: none;}

/* Sphinx-design tabs override */
.sd-tab-set>input:checked+label {
border-color: var(--sd-color-tabs-underline-inactive);
color: var(--sd-color-info-text)!important;
background-color: rgb(0 104 181)!important;
color: var(--sd-color-black)!important;
background-color: #f8f8f8!important;
border: solid 1px #bdbdbd;
border-bottom: solid 0px;
margin-bottom: -1px;
}

.sd-tab-set>input:checked+label:hover {
color: --sd-color-info-text;
background-color: rgb(0,74,134)!important;
background-color: #f8f8f8!important;
}

.sd-tab-set>input:not(:checked)+label:hover {
color: var(--sd-color-black)!important;
background-color: rgb(245, 245, 245)!important;
background-color: #cccccc!important;
border-color: var(--sd-color-card-header)!important;
}

.sd-tab-set>label {
border-bottom: 0.125rem solid transparent;
margin-right: 10px!important;
margin-bottom: 8px;
margin-bottom: 0;
color: var(--sd-color-black)!important;
border-color: var(--sd-color-tabs-underline-inactive);
cursor: pointer;
@@ -60,11 +63,29 @@ a#wap_dns {display: none;}
z-index: 1;
}

.sd-tab-content {
box-shadow:none!important;
border-top: solid 2px var(--sd-color-tabs-overline)!important;
.sd-tab-label {
background-color: #e5e5e5;
}

.sd-tab-content {
box-shadow: 0 0 0 0;
border: solid 1px var(--sd-color-tabs-overline);
border-color: #bdbdbd;
background-color: #f8f8f8;
padding-right: 4px;
padding-left: 4px;
padding-bottom: 6px;
margin: 0 0 0 0;
}

.sd-tab-content .sd-tab-content {
background-color: #f8f8f8
}

.sd-tab-content .sd-tab-content .sd-tab-content {
background-color: #f8f8f8
}

/* Navigation panels override */
/* =================================================== */
@@ -573,13 +594,17 @@ div.highlight {
grid-template-columns: 1fr;
}

.modal-content-grid p {
font-size: 0.6rem;
}

.modal-content-grid-container .column {
min-width: 100px;
}

.modal-content-grid-container label {
margin-bottom: 0;
padding-right: 4px;
padding-right: 6px;
}

.modal-content-grid-container input {
@@ -620,6 +645,13 @@ div.highlight {
margin-bottom: 0rem;
}

.ul {
display: flex;
flex-direction: row !important;
margin: 0px;
padding: 0px 0px 0px 10px;
}

.benchmark-graph-results-header {
display: flex;
justify-content: space-between;
5
docs/_static/html/modal.html
vendored
@@ -52,6 +52,7 @@
<div class="modal-line-divider"></div>
<div class="modal-content-grid">
<div class="precisions-column column"></div>
<p>(Use for Throughput and Latency parameters only)</p>
</div>
</div>
</div>
@@ -69,9 +70,5 @@
<div class="chart-placeholder"></div>
</section>
<div class="modal-footer">
<div class="modal-line-divider"></div>
<div class="modal-footer-content">
<div class="modal-disclaimer-box"></div>
</div>
</div>
</div>
3
docs/_static/images/notebook_eye.png
vendored
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a2e58cf3e5703356b0e060ebc7cb0cbb852db9cde003d41c1d86bafc3a4ccb1
size 68559
4
docs/_static/images/ov_homepage_diagram.png
vendored
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0791abad48ec62d3ebcd111cf42139abe4bfb809c84882c0e8aa88ff7b430b7
size 85563
oid sha256:27bff5eb0b93754e6f8cff0ae294d0221cc9184a517d1991da06bea9cc272eb7
size 84550
3
docs/_static/images/ov_workflow_diagram_convenience.svg
vendored
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bbc2855ac007644a2562362bc7a8786c93b3d1d3e96ba733eec9a6c03f63a8c9
size 160830
3
docs/_static/images/ov_workflow_diagram_performance.svg
vendored
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a013860e4b2f942c5632bae8e3dfade266cfdcad2e34f6371ea8b1873e18f75b
size 178797
166
docs/_static/js/graphs.js
vendored
@@ -1,31 +1,26 @@
|
||||
// =================== GENERAL OUTPUT CONFIG =========================
|
||||
|
||||
const chartDisclaimers = {
|
||||
Value: 'Value: Performance/(No_of_sockets * Price_of_CPU_dGPU), where prices are in USD as of May 2023.',
|
||||
Efficiency: 'Efficiency: Performance/(No_of_sockets * TDP_of_CPU_dGPU), where total power dissipation (TDP) is in Watt as of May 2023.'
|
||||
Value: 'Value: Performance/(No_of_sockets * Price_of_CPU_dGPU), where prices are in USD as of November 2023.',
|
||||
Efficiency: 'Efficiency: Performance/(No_of_sockets * TDP_of_CPU_dGPU), where total power dissipation (TDP) is in Watt as of November 2023.'
|
||||
}
|
||||
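The Value and Efficiency formulas in these disclaimers can be sanity-checked against the data section above. A worked example for ssd_mobilenet_v1_coco on the Intel® Core™ i9-13900K (throughput 1448.44 FPS, 1 socket, price $599, TDP 125 W, taken from that row):

```javascript
// Cross-check of the disclaimer formulas against one row of the data section.
const throughput = 1448.44;           // FPS, ssd_mobilenet_v1_coco on i9-13900K
const sockets = 1, price = 599, tdp = 125;

// Value: Performance / (No_of_sockets * Price_of_CPU_dGPU)
const value = throughput / (sockets * price);     // ≈ 2.418 FPS/$, as in the table
// Efficiency: Performance / (No_of_sockets * TDP_of_CPU_dGPU)
const efficiency = throughput / (sockets * tdp);  // ≈ 11.59 FPS/TDP (table: 11.587)
```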
|
||||
const OVdefaultSelections = {
|
||||
platformTypes: {name: 'ietype', data: ['core']},
|
||||
platforms: {name: 'platform',
|
||||
data: [
|
||||
'Intel® Core™ i9-12900K CPU-only',
|
||||
'Intel® Core™ i9-13900K CPU-only',
|
||||
'Intel® Core™ i5-10500TE CPU-only',
|
||||
'Intel® Core™ i5-13600K CPU-only',
|
||||
'Intel® Core™ i5-8500 CPU-only',
|
||||
'Intel® Core™ i7-8700T CPU-only',
|
||||
'Intel® Core™ i9-10900TE CPU-only',
|
||||
'Intel® Core™ i7-1165G7 CPU-only'
|
||||
'Intel® Core™ i5-10500TE ',
|
||||
'Intel® Core™ i7-1185G7 CPU',
|
||||
'Intel® Core™ i9-10900TE ',
|
||||
]
|
||||
},
|
||||
platformFilters: {name: 'coretype', data: ['CPU']},
|
||||
models: {name: 'networkmodel',
|
||||
data: [
|
||||
'bert-large-uncased-whole-word-masking-squad-0001 ',
|
||||
'mobilenet-ssd ',
|
||||
'bert-base-cased',
|
||||
'yolo_v3_tiny',
|
||||
'yolo_v8n',
|
||||
'resnet-50',
|
||||
'yolo_v3_tiny'
|
||||
]
|
||||
},
|
||||
parameters: {name: 'kpi', data: ['Throughput']},
|
||||
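The updated disclaimers above define the Value and Efficiency KPIs as simple ratios. A minimal sketch of those formulas (helper names are illustrative, not part of graphs.js):

```javascript
// Value = Performance / (No_of_sockets * Price_of_CPU_dGPU), in FPS/$
// Efficiency = Performance / (No_of_sockets * TDP_of_CPU_dGPU), in FPS/TDP
// Hypothetical helpers, shown only to illustrate the disclaimer formulas.
function valueKpi(performanceFps, sockets, priceUsd) {
  return performanceFps / (sockets * priceUsd);
}

function efficiencyKpi(performanceFps, sockets, tdpWatts) {
  return performanceFps / (sockets * tdpWatts);
}

console.log(valueKpi(600, 1, 300));      // 2 (FPS/$)
console.log(efficiencyKpi(600, 1, 125)); // 4.8 (FPS/TDP)
```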
@@ -122,6 +117,7 @@ class ExcelData {
this.release = csvdataline[1];
this.ieType = csvdataline[2];
this.platformName = csvdataline[3];
this.throughputInt4 = csvdataline[22];
this.throughputInt8 = csvdataline[4];
this.throughputFP16 = csvdataline[5];
this.throughputFP32 = csvdataline[6];
@@ -133,11 +129,13 @@ class ExcelData {
this.pricePerSocket = csvdataline[12];
this.tdpPerSocket = csvdataline[13];
this.latency = csvdataline[14];

this.throughputUnit = csvdataline[15]
this.valueUnit = csvdataline[16]
this.efficiencyUnit = csvdataline[17]
this.latencyUnit = csvdataline[18]
this.latency16 = csvdataline[19];
this.latency32 = csvdataline[20];
this.latency4 = csvdataline[21];
this.throughputUnit = csvdataline[15];
this.valueUnit = csvdataline[16];
this.efficiencyUnit = csvdataline[17];
this.latencyUnit = csvdataline[18];
}
}
@@ -149,7 +147,6 @@ class OVMSExcelData extends ExcelData {
this.throughputInt8 = csvdataline[4];
this.throughputOVMSFP32 = csvdataline[7];
this.throughputFP32 = csvdataline[6];

this.throughputUnit = csvdataline[8]
}
}
@@ -168,20 +165,28 @@ class GraphData {
{
'ovmsint8': excelData.throughputOVMSInt8,
'ovmsfp32': excelData.throughputOVMSFP32,
'int4': excelData.throughputInt4,
'int8': excelData.throughputInt8,
'fp16': excelData.throughputFP16,
'fp32': excelData.throughputFP32
},
excelData.value,
excelData.efficiency,
excelData.latency);
{
'ovmsint8': excelData.throughputOVMSInt8,
'ovmsfp32': excelData.throughputOVMSFP32,
'int4': excelData.latency4,
'int8': excelData.latency,
'fp16': excelData.latency16,
'fp32': excelData.latency32
},);

this.price = excelData.price;
this.tdp = excelData.tdp;
this.sockets = excelData.sockets;
this.pricePerSocket = excelData.pricePerSocket;
this.tdpPerSocket = excelData.tdpPerSocket;
this.latency = excelData.latency;

this.throughputUnit = excelData.throughputUnit;
this.valueUnit = excelData.valueUnit;
this.efficiencyUnit = excelData.efficiencyUnit;
@@ -191,11 +196,11 @@ class GraphData {


class KPI {
constructor(precisions, value, efficiency, latency) {
constructor(precisions, value, efficiency, latencies) {
this.throughput = precisions;
this.value = value;
this.efficiency = efficiency;
this.latency = latency;
this.latency = latencies;
}
}

@@ -221,12 +226,12 @@ class Modal {
static getKpisLabels(version) {
if (version == 'ovms')
return ['Throughput'];
return ['Throughput', 'Value', 'Efficiency', 'Latency'];
return ['Throughput', 'Latency', 'Value', 'Efficiency'];
}
static getPrecisionsLabels(version) {
if (version == 'ovms')
return ['OV-INT8 (reference)', 'INT8', 'OV-FP32 (reference)', 'FP32'];
return ['INT8', 'FP16', 'FP32'];
return ['INT4', 'INT8', 'FP16', 'FP32'];
}
static getCoreTypes(labels) {
return labels.map((label) => {
@@ -249,6 +254,8 @@ class Modal {
return 'ovmsint8';
case 'OV-FP32 (reference)':
return 'ovmsfp32';
case 'INT4':
return 'int4';
case 'INT8':
return 'int8';
case 'FP16':
@@ -260,6 +267,27 @@ class Modal {
}
});
}
static getUnitDescription(unit) {
console.log(unit)
switch (unit) {
case 'msec.':
return '(lower is better)';
case 'msec/token':
return '(lower is better)';
case 'Generating time, sec.':
return '(lower is better)';
case 'msec/token/TDP':
return '(lower is better)';
case 'FPS':
return '(higher is better)';
case 'FPS/$':
return '(higher is better)';
case 'FPS/TDP':
return '(higher is better)';
default:
return '';
}
}
}

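The new `getUnitDescription` switch repeats the same two return values across seven cases. A data-driven equivalent is shorter and easier to extend when new units are added (a sketch, not the committed implementation):

```javascript
// Group units by direction instead of one switch case per unit.
// Unit strings are taken from the getUnitDescription switch above.
const LOWER_IS_BETTER = ['msec.', 'msec/token', 'Generating time, sec.', 'msec/token/TDP'];
const HIGHER_IS_BETTER = ['FPS', 'FPS/$', 'FPS/TDP'];

function getUnitDescription(unit) {
  if (LOWER_IS_BETTER.includes(unit)) return '(lower is better)';
  if (HIGHER_IS_BETTER.includes(unit)) return '(higher is better)';
  return '';
}

console.log(getUnitDescription('FPS'));   // (higher is better)
console.log(getUnitDescription('msec.')); // (lower is better)
```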
@@ -303,6 +331,7 @@ class Graph {
return [];
}
}


// this returns an object that is used to ender the chart
static getGraphConfig(kpi, units, precisions) {
@@ -310,48 +339,69 @@
case 'throughput':
return {
chartTitle: 'Throughput',
chartSubtitle: '(higher is better)',
iconClass: 'throughput-icon',
datasets: precisions.map((precision) => this.getPrecisionConfig(precision, units.throughputUnit)),
unit: units.throughputUnit,
datasets: precisions.map((precision) => this.getPrecisionThroughputConfig(precision, units.throughputUnit)),
};
case 'latency':
return {
chartTitle: 'Latency',
chartSubtitle: '(lower is better)',
iconClass: 'latency-icon',
datasets: [{ data: null, color: '#8F5DA2', label: `${units.latencyUnit}` }],
unit: units.latencyUnit,
datasets: precisions.map((precision) => this.getPrecisionLatencyConfig(precision, units.latencyUnit)),
};
case 'value':
return {
chartTitle: 'Value',
chartSubtitle: '(higher is better)',
iconClass: 'value-icon',
datasets: [{ data: null, color: '#8BAE46', label: `${units.valueUnit} (INT8)` }],
unit: units.valueUnit,
datasets: [{ data: null, color: '#8BAE46', label: `INT8` }],
};
case 'efficiency':
return {
chartTitle: 'Efficiency',
chartSubtitle: '(higher is better)',
iconClass: 'efficiency-icon',
datasets: [{ data: null, color: '#E96115', label: `${units.efficiencyUnit} (INT8)` }],
unit: units.efficiencyUnit,
datasets: [{ data: null, color: '#E96115', label: `INT8` }],
};
default:
return {};
}
}

static getPrecisionConfig(precision, unit) {
static getPrecisionThroughputConfig(precision, unit) {
switch (precision) {
case 'ovmsint8':
return { data: null, color: '#FF8F51', label: `${unit} (OV Ref. INT8)` };
case 'ovmsfp32':
return { data: null, color: '#B24501', label: `${unit} (OV Ref. FP32)` };
case 'int4':
return { data: null, color: '#5bd0f0', label: `INT4` };
case 'int8':
return { data: null, color: '#00C7FD', label: `${unit} (INT8)` };
return { data: null, color: '#00C7FD', label: `INT8` };
case 'fp16':
return { data: null, color: '#009fca', label: `${unit} (FP16)` };
return { data: null, color: '#009fca', label: `FP16` };
case 'fp32':
return { data: null, color: '#007797', label: `${unit} (FP32)` };
return { data: null, color: '#007797', label: `FP32` };
default:
return {};
}
}

static getPrecisionLatencyConfig(precision, unit) {
switch (precision) {
case 'ovmsint8':
return { data: null, color: '#FF8F51', label: `${unit} (OV Ref. INT8)` };
case 'ovmsfp32':
return { data: null, color: '#B24501', label: `${unit} (OV Ref. FP32)` };
case 'int4':
return { data: null, color: '#c197d1', label: `INT4` };
case 'int8':
return { data: null, color: '#b274ca', label: `INT8` };
case 'fp16':
return { data: null, color: '#8424a9', label: `FP16` };
case 'fp32':
return { data: null, color: '#5b037d', label: `FP32` };
default:
return {};
}
@@ -389,7 +439,6 @@ $(document).ready(function () {

function clickBuildGraphs(graph, networkModels, ietype, platforms, kpis, precisions) {
renderData(graph, networkModels, ietype, platforms, kpis, precisions);

$('.modal-footer').show();
$('#modal-display-graphs').show();
$('.edit-settings-btn').on('click', (event) => {
@@ -558,7 +607,7 @@ $(document).ready(function () {

function validateThroughputSelection() {
const precisions = $('.precisions-column').find('input')
if (getSelectedKpis().includes('Throughput')) {
if (getSelectedKpis().includes('Throughput') || getSelectedKpis().includes('Latency')) {
precisions.prop('disabled', false);
}
else {
@@ -709,10 +758,10 @@ $(document).ready(function () {
if (!listContainer) {
listContainer = document.createElement('ul');
listContainer.style.display = 'flex';
listContainer.style.flexDirection = 'column';
listContainer.style.flexDirection = 'row';
listContainer.style.margin = 0;
listContainer.style.padding = 0;
listContainer.style.paddingLeft = '10px';
listContainer.style.paddingLeft = '0px';

legendContainer.appendChild(listContainer);
}
@@ -723,8 +772,9 @@ $(document).ready(function () {
const htmlLegendPlugin = {
id: 'htmlLegend',
afterUpdate(chart, args, options) {
const ul = getOrCreateLegendList(chart, chart.options.plugins.htmlLegend.containerID);

const ul = getOrCreateLegendList(chart, chart.options.plugins.htmlLegend.containerID);

// Remove old legend items
while (ul.firstChild) {
ul.firstChild.remove();
@@ -732,12 +782,11 @@ $(document).ready(function () {

// Reuse the built-in legendItems generator
const items = chart.legend.legendItems;

items.forEach(item => {
const li = document.createElement('li');
li.style.alignItems = 'center';
li.style.display = 'flex';
li.style.flexDirection = 'row';
li.style.display = 'block';
li.style.flexDirection = 'column';
li.style.marginLeft = '10px';

li.onclick = () => {
@@ -758,7 +807,7 @@ $(document).ready(function () {
boxSpan.style.borderWidth = item.lineWidth + 'px';
boxSpan.style.display = 'inline-block';
boxSpan.style.height = '12px';
boxSpan.style.marginRight = '10px';
boxSpan.style.marginRight = '4px';
boxSpan.style.width = '30px';

// Text
@@ -766,7 +815,6 @@ $(document).ready(function () {
textContainer.style.color = item.fontColor;
textContainer.style.margin = 0;
textContainer.style.padding = 0;
// textContainer.style.fontFamily = 'Roboto';
textContainer.style.fontSize = '0.8rem';
textContainer.style.textDecoration = item.hidden ? 'line-through' : '';

@@ -832,7 +880,7 @@ $(document).ready(function () {
function renderData(graph, networkModels, ietype, platforms, kpis, precisions) {

$('.chart-placeholder').empty();
$('.modal-disclaimer-box').empty();
$('.modal-footer').empty();
const display = new ChartDisplay(getChartsDisplayMode(kpis.length), kpis.length);

networkModels.forEach((networkModel) => {
@@ -863,6 +911,10 @@ $(document).ready(function () {
}
})

if(kpis.includes('Value') || kpis.includes('Efficiency')){
$('.modal-footer').append($('<div class="modal-line-divider"></div>'))
}
$('.modal-footer').append($('<div class="modal-footer-content"><div class="modal-disclaimer-box"></div></div>'))
for (let kpi of kpis) {
if (chartDisclaimers[kpi])
$('.modal-disclaimer-box').append($('<p>').text(chartDisclaimers[kpi]))
@@ -883,10 +935,9 @@ $(document).ready(function () {
chartWrap.addClass('chart-wrap');
chartContainer.append(chartWrap);
var labels = Graph.getPlatformNames(model);

var graphConfigs = kpis.map((str) => {
var kpi = str.toLowerCase();
var groupUnit = model[0]
var groupUnit = model[0];
if (kpi === 'throughput') {
var throughputData = Graph.getDatabyKPI(model, kpi);
var config = Graph.getGraphConfig(kpi, groupUnit, precisions);
@@ -895,11 +946,18 @@ $(document).ready(function () {
});
return config;
}
else if(kpi === 'latency'){
var latencyData = Graph.getDatabyKPI(model, kpi);
var config = Graph.getGraphConfig(kpi, groupUnit, precisions);
precisions.forEach((prec, index) => {
config.datasets[index].data = latencyData.map(tData => tData[prec]);
});
return config;
}
var config = Graph.getGraphConfig(kpi, groupUnit);
config.datasets[0].data = Graph.getDatabyKPI(model, kpi);
return config;
});

// get the client platform labels and create labels for all the graphs
var labelsContainer = $('<div>');
labelsContainer.addClass('chart-labels-container');
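The new latency branch builds one dataset per selected precision by projecting each platform's row onto that precision key. The mapping can be sketched in isolation with sample data (row shapes and values are illustrative):

```javascript
// Each row holds one platform's latencies keyed by precision (sample data).
const latencyData = [
  { int4: 3.1, int8: 4.2, fp16: 6.1, fp32: 9.8 },  // platform A
  { int4: 3.9, int8: 5.0, fp16: 7.3, fp32: 11.4 }, // platform B
];
const precisions = ['int8', 'fp32'];

// Same shape as: config.datasets[index].data = latencyData.map(tData => tData[prec])
const series = precisions.map((prec) => latencyData.map((row) => row[prec]));
console.log(series); // [ [ 4.2, 5 ], [ 9.8, 11.4 ] ]
```

Each inner array is one chart dataset: all platforms' values for a single precision, in platform order, which is what the bar chart expects.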
@@ -922,8 +980,8 @@ $(document).ready(function () {
var columnHeader = $('<div class="chart-header">');
columnHeader.append($('<div class="title">' + graphConfig.chartTitle + '</div>'));
columnHeader.append($('<div class="title">' + Graph.getGraphPlatformText(ietype) + '</div>'));
columnHeader.append($('<div class="subtitle">' + graphConfig.chartSubtitle + '</div>'));

columnHeader.append($('<div class="subtitle">' + graphConfig.unit + ' ' + Modal.getUnitDescription(graphConfig.unit) + '</div>'));

columnHeaderContainer.append(columnHeader);
chartGraphsContainer.append(graphItem);
var graphClass = $('<div>');
@@ -961,7 +1019,7 @@ $(document).ready(function () {
var heightRatio = (30 + (labels.length * 55));
var chart = $('<div>');
const containerId = `legend-container-${id}`;
const legend = $(`<div id="${containerId}">`);
const legend = $(`<div id="${containerId}">`);
legend.addClass('graph-legend-container');
chart.addClass('chart');
chart.addClass(widthClass);

File diff suppressed because one or more lines are too long
61 docs/_static/selector-tool/assets/selector-a91b1d3d.js vendored Normal file

File diff suppressed because one or more lines are too long
@@ -1,7 +1,7 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta name="version" content="68d2f71" />
<meta name="version" content="v_2023_2_0-5cca680" />
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Download Intel® Distribution of OpenVINO™ Toolkit</title>
@@ -9,11 +9,10 @@
name="description"
content="Download a version of the Intel® Distribution of OpenVINO™ toolkit for Linux, Windows, or macOS."
/>
<script type="module" crossorigin src="./assets/selector-114afa0d.js"></script>
<script type="module" crossorigin src="./assets/selector-a91b1d3d.js"></script>
<link rel="stylesheet" href="./assets/selector-5c3f26d1.css">
</head>
<body>
<div id="root"></div>

</body>
</html>
@@ -83,6 +83,18 @@ OpenVINO Python API

openvino.runtime.opset11

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.runtime.opset12

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.runtime.opset13

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

@@ -95,6 +107,60 @@ OpenVINO Python API

openvino.preprocess

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.properties

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.properties.device

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.properties.hint

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.properties.intel_auto

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.properties.intel_cpu

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.properties.intel_gpu

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.properties.intel_gpu.hint

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.properties.log

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst

openvino.properties.streams

.. autosummary::
   :toctree: _autosummary
   :template: custom-module-template.rst
@@ -8,19 +8,69 @@

   openvino_docs_performance_benchmarks
   compatibility_and_support
   Release Notes <https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-1.html>
   prerelease_information
   system_requirements
   Release Notes <openvino_release_notes>
   Additional Resources <resources>

OpenVINO is a toolkit for simple and efficient deployment of various deep learning models.
In this section you will find information on the product itself, as well as the software
and hardware solutions it supports.

OpenVINO (Open Visual Inference and Neural network Optimization) is an open-source software toolkit designed to optimize, accelerate, and deploy deep learning models for user applications. OpenVINO was developed by Intel to work efficiently on a wide range of Intel hardware platforms, including CPUs (x86 and Arm), GPUs, and NPUs.


Features
##############################################################

One of the main purposes of OpenVINO is to streamline the deployment of deep learning models in user applications. It optimizes and accelerates model inference, which is crucial for domains such as Generative AI and Large Language Models, and for use cases like object detection, classification, and segmentation, among many others.

* :doc:`Model Optimization <openvino_docs_model_optimization_guide>`

  OpenVINO provides multiple optimization methods for both the training and post-training stages, including weight compression for Large Language Models and Intel Optimum integration with Hugging Face.

* :doc:`Model Conversion and Framework Compatibility <openvino_docs_model_processing_introduction>`

  Supported models can be loaded directly or converted to the OpenVINO format to achieve better performance. Supported frameworks include ONNX, PyTorch, TensorFlow, TensorFlow Lite, Keras, and PaddlePaddle.

* :doc:`Model Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>`

  OpenVINO accelerates deep learning models on various hardware platforms, ensuring real-time, efficient inference.

* `Deployment on a server <https://github.com/openvinotoolkit/model_server>`__

  A model can be deployed either locally, using OpenVINO Runtime, or on a model server. OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions. The model server enables quick model inference using external resources.

Architecture
##############################################################

To learn more about how OpenVINO works, read the Developer documentation on its `architecture <https://github.com/openvinotoolkit/openvino/blob/master/src/docs/architecture.md>`__ and `core components <https://github.com/openvinotoolkit/openvino/blob/master/src/README.md>`__.

OpenVINO Ecosystem
##############################################################

Along with the primary components of model optimization and runtime, the toolkit also includes:

* `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__ - a tool for enhanced OpenVINO™ inference, providing a performance boost with minimal accuracy drop.
* :doc:`OpenVINO Notebooks <tutorials>` - Jupyter Python notebook tutorials that demonstrate key features of the toolkit.
* `OpenVINO Model Server <https://github.com/openvinotoolkit/model_server>`__ - a server that enables scalability via a serving microservice.
* :doc:`OpenVINO Training Extensions <ote_documentation>` - a convenient environment to train deep learning models and convert them using the OpenVINO™ toolkit for optimized inference.
* :doc:`Dataset Management Framework (Datumaro) <datumaro_documentation>` - a tool to build, transform, and analyze datasets.

Community
##############################################################

The OpenVINO community plays a vital role in the growth and development of the open-source toolkit. Users can contribute to OpenVINO and get support through the following channels:

* `OpenVINO GitHub issues, discussions and pull requests <https://github.com/openvinotoolkit/openvino>`__
* `OpenVINO Blog <https://blog.openvino.ai/>`__
* `Community Forum <https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/bd-p/distribution-openvino-toolkit>`__
* `OpenVINO video tutorials <https://www.youtube.com/watch?v=_Jnjt21ZDS8&list=PLg-UKERBljNxdIQir1wrirZJ50yTp4eHv>`__
* `Support Information <https://www.intel.com/content/www/us/en/support/products/96066/software/development-software/openvino-toolkit.html>`__

Case Studies
##############################################################

OpenVINO has been employed in various case studies across a wide range of industries and applications, including healthcare, retail, safety and security, transportation, and more. Read about how OpenVINO enhances efficiency, accuracy, and safety in different sectors on the `success stories page <https://www.intel.com/content/www/us/en/internet-of-things/ai-in-production/success-stories.html>`__.

@endsphinxdirective

@@ -8,55 +8,30 @@
   Distribution of OpenVINO™ toolkit.


The OpenVINO runtime can infer various models of different input and output formats. Here, you can find configurations
supported by OpenVINO devices, which are CPU, GPU, NPU, and GNA (Gaussian Neural Accelerator coprocessor).
Currently, processors of the 11th generation and later (up to the 13th generation at the moment) provide a further performance boost, especially with INT8 models.
OpenVINO enables you to implement its inference capabilities in your own software,
utilizing various hardware. It currently supports the following processing units
(for more details, see :doc:`system requirements <system_requirements>`):

* :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>`
* :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>`
* :doc:`GNA <openvino_docs_OV_UG_supported_plugins_GNA>`


.. note::

   GNA, currently available in the Intel® Distribution of OpenVINO™ toolkit,
   will be deprecated together with the hardware being discontinued
   in future CPU solutions.

   With the OpenVINO™ 2023.0 release, support has been cancelled for:

   - Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X
   - Intel® Vision Accelerator Design with Intel® Movidius™

   To keep using the MYRIAD and HDDL plugins with your hardware, revert to the OpenVINO 2022.3 LTS release.


+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
| OpenVINO Device                                                     | Supported Hardware                                                                                   |
+=====================================================================+======================================================================================================+
|| :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>`             | Intel® Xeon® with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector          |
|| (x86)                                                              | Extensions 512 (Intel® AVX-512), Intel® Advanced Matrix Extensions (Intel® AMX),                     |
||                                                                    | Intel® Core™ Processors with Intel® AVX2,                                                            |
||                                                                    | Intel® Atom® Processors with Intel® Streaming SIMD Extensions (Intel® SSE)                           |
||                                                                    |                                                                                                      |
|| (Arm®)                                                             | Raspberry Pi™ 4 Model B, Apple® Mac mini with Apple silicon                                          |
||                                                                    |                                                                                                      |
+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
|| :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>`             | Intel® Processor Graphics including Intel® HD Graphics and Intel® Iris® Graphics,                    |
||                                                                    | Intel® Arc™ A-Series Graphics, Intel® Data Center GPU Flex Series, Intel® Data Center GPU Max Series |
+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
|| :doc:`GNA <openvino_docs_OV_UG_supported_plugins_GNA>`             | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel®          |
|| (available in the Intel® Distribution of OpenVINO™ toolkit)        | Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel®                      |
||                                                                    | Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron®                           |
||                                                                    | Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor,                   |
||                                                                    | Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel®                           |
||                                                                    | Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™                            |
||                                                                    | i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor,             |
||                                                                    | Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor,                                  |
||                                                                    | Intel® Core™ i3-1000G4 Processor                                                                     |
+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
|| :doc:`NPU <openvino_docs_OV_UG_supported_plugins_NPU>`             |                                                                                                      |
||                                                                    |                                                                                                      |
+---------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+

Beside inference using a specific device, OpenVINO offers three inference modes for automated inference management. These are:
Beside running inference with a specific device,
OpenVINO offers automated inference management with the following inference modes:

* :doc:`Automatic Device Selection <openvino_docs_OV_UG_supported_plugins_AUTO>` - automatically selects the best device
  available for the given task. It offers many additional options and optimizations, including inference on
@@ -67,7 +42,7 @@ Beside inference using a specific device, OpenVINO offers three inference modes
  automatically, for example, if one device doesn't support certain operations.


Devices similar to the ones we have used for benchmarking can be accessed using `Intel® DevCloud for the Edge <https://devcloud.intel.com/edge/>`__,
Devices similar to the ones used for benchmarking can be accessed using `Intel® DevCloud for the Edge <https://devcloud.intel.com/edge/>`__,
a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution
of OpenVINO™ Toolkit. `Learn more <https://devcloud.intel.com/edge/get_started/devcloud/>`__ or `Register here <https://inteliot.force.com/DevcloudForEdge/s/>`__.

@@ -76,9 +51,7 @@ To learn more about each of the supported devices and modes, refer to the sectio
* :doc:`Inference Device Support <openvino_docs_OV_UG_Working_with_devices>`
* :doc:`Inference Modes <openvino_docs_Runtime_Inference_Modes_Overview>`

For setting relevant configuration, refer to the
For setting up a relevant configuration, refer to the
:doc:`Integrate with Customer Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`
topic (step 3 "Configure input and output").

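The inference modes are selected through the device string passed at compile time. A minimal sketch, assuming OpenVINO 2023.x is installed (``pip install openvino``) and an IR file of your own; ``model.xml`` is a placeholder path, not a file shipped with the toolkit:

```python
# Sketch only: "model.xml" is a hypothetical IR file path.
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")

# "AUTO" picks the best available device; strings such as "MULTI:GPU,CPU"
# and "HETERO:GPU,CPU" select the multi-device and heterogeneous modes,
# while plain "CPU" or "GPU" targets a single device.
compiled = core.compile_model(model, "AUTO")
print(compiled.inputs)
```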
@@ -13,7 +13,7 @@

   openvino_docs_performance_benchmarks_faq
   OpenVINO Accuracy <openvino_docs_performance_int8_vs_fp32>
   Performance Data Spreadsheet (download xlsx) <https://docs.openvino.ai/2023.1/_static/benchmarks_files/OV-2023.0-Performance-Data.xlsx>
   Performance Data Spreadsheet (download xlsx) <https://docs.openvino.ai/2023.2/_static/benchmarks_files/OV-2023.2-Performance-Data.xlsx>
   openvino_docs_MO_DG_Getting_Performance_Numbers

@@ -100,14 +100,14 @@ For a listing of all platforms and configurations used for testing, refer to the

   .. grid-item::

      .. button-link:: _static/benchmarks_files/OV-2023.1-Platform_list.pdf
      .. button-link:: _static/benchmarks_files/OV-2023.2-platform_list.pdf
         :color: primary
         :outline:
         :expand:

         :material-regular:`download;1.5em` Click for Hardware Platforms [PDF]

      .. button-link:: _static/benchmarks_files/OV-2023.1-system-info-detailed.xlsx
      .. button-link:: _static/benchmarks_files/OV-2023.2-system-info-detailed.xlsx
         :color: primary
         :outline:
         :expand:
@@ -166,7 +166,7 @@ or `create an account <https://www.intel.com/content/www/us/en/secure/forms/devc
Disclaimers
####################################

* Intel® Distribution of OpenVINO™ toolkit performance results are based on release 2023.1, as of September 12, 2023.
* Intel® Distribution of OpenVINO™ toolkit performance results are based on release 2023.2, as of November 15, 2023.

* OpenVINO Model Server performance results are based on release 2023.0, as of June 01, 2023.

@@ -49,14 +49,10 @@
     - Public Network
     - Task
     - Input Size
   * - `BLOOMZ-560M <https://huggingface.co/bigscience/bloomz-560m>`__
     - BigScience Bloomz & MT0
     - Transformer based llm
     - 2048
   * - `GPT-J-6B <https://huggingface.co/EleutherAI/gpt-j-6b>`__
     - Eleuther AI
   * - `chatGLM2-6B <https://huggingface.co/THUDM/chatglm2-6b/tree/main>`__
     - THUDM
     - Transformer
     - 2048
     - 32K
   * - `Llama-2-7b-chat <https://ai.meta.com/llama/>`__
     - Meta AI
     - Auto regressive language
@@ -77,6 +73,14 @@
     - DeepLab v3 Tf
     - semantic segmentation
     - 513x513
   * - `efficientdet-d0 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/efficientdet-d0-tf>`__
     - Efficientdet
     - classification
     - 512x512
   * - `faster_rcnn_resnet50_coco <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/faster_rcnn_resnet50_coco>`__
     - Faster RCNN TF
     - object detection
     - 600x1024
   * - `mobilenet-v2 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-pytorch>`__
     - Mobilenet V2 PyTorch
     - classification

@@ -25,87 +25,93 @@ for more information.
   * - bert-base-cased
     - SST-2_bert_cased_padded
     - accuracy
     - -3.00%
     - -2.00%
     - 2.94%
     - -0.76%
     - 2.42%
     - 2.72%
   * - bert-large-uncased-whole-word-masking-squad-0001
     - SQUAD_v1_1_bert_msl384_mql64_ds128_lowercase
     - F1
     - -0.04%
     - 0.03%
     - 0.06%
     - 0.07%
     - -0.03%
     - 0.11%
   * - deeplabv3
     - VOC2012_segm
     - mean_iou
     - 0.00%
     - 0.49%
     - 0.23%
     - -0.13%
     - -0.16%
   * - efficientdet-d0
     - COCO2017_detection_91cl
     - coco_precision
     - -0.84%
     - -0.59%
     - -0.63%
   * - faster_rcnn_resnet50_coco
     - COCO2017_detection_91cl_bkgr
     - coco_orig_precision
     - -0.19%
     - -0.19%
     - -0.04%
   * - mobilenet-v2
     - ImageNet2012
     - accuracy @ top1
     -
     - 0.97%
     - -0.97%
     - -0.95%
   * - resnet-50
     - ImageNet2012
     - accuracy @ top1
     - 0.20%
     - 0.12%
     - -0.09%
     - -0.12%
     - -0.19%
   * - ssd-mobilenet-v1-coco
     - COCO2017_detection_80cl_bkgr
     - coco-precision
     - 2.97%
     - 0.29%
     - -0.31%
     - -2.97%
     - -0.29%
     - -0.26%
   * - ssd-resnet34-1200
     - COCO2017_detection_80cl_bkgr
     - map
     - 0.06%
     - 0.06%
     - -0.03%
     - -0.06%
     - 0.04%
   * - unet-camvid-onnx-0001
     - CamVid_12cl
     - mean_iou @ mean
     - 6.32%
     - -6.40%
     - -0.63%
     - -6.32%
     - 6.40%
     - 6.40%
   * - yolo_v3
     - COCO2017_detection_80cl
     - map
     - -0.06%
     - -0.21%
     - -0.71%
     - -0.13%
     - -0.26%
     - -0.44%
   * - yolo_v3_tiny
     - COCO2017_detection_80cl
     - map
     - 0.73%
     - 0.21%
     - -0.78%
     - -0.11%
     - -0.13%
     - -0.15%
   * - yolo_v8n
     - COCO2017_detection_80cl
     - map
     - -0.26%
     - -0.22%
     - 0.12%
   * - bloomz-560m
     - ROOTS corpus
     - 0.27%
     - 0.23%
     - 0.17%
   * - chatGLM2-6b
     - lambada openai
     - ppl
     -
     - 17.595
     -
     -
   * - GPT-J-6B
     - Pile dataset
     - ppl
     -
     - 4.11
     - 4.11
   * - Llama-2-7b-chat
     - Wiki, StackExch, Crawl
     - ppl
     -
     - 3.27
     - 3.27
     - 3.268
     -
   * - Stable-Diffusion-V2-1
     - LAION-5B
     - ppl
@@ -131,15 +137,27 @@ for more information.
   * - bert-large-uncased-whole-word-masking-squad-0001
     - SQUAD_v1_1_bert_msl384_mql64_ds128_lowercase
     - F1
     - -0.19%
     - 0.04%
     - 0.04%
     - 0.04%
   * - deeplabv3
     - VOC2012_segm
     - mean_iou
     - 0.49%
     - 0.00%
     - 0.00%
     - 0.00%
   * - efficientdet-d0
     - COCO2017_detection_91cl
     - coco_precision
     - -0.02%
     - -0.02%
     - -0.02%
   * - faster_rcnn_resnet50_coco
     - COCO2017_detection_91cl_bkgr
     - coco_orig_precision
     - 0.00%
     -
     - 0.00%
   * - mobilenet-v2
     - ImageNet2012
     - accuracy @ top1
@@ -150,7 +168,7 @@ for more information.
     - ImageNet2012
     - accuracy @ top1
     - 0.00%
     - -0.02%
     - 0.00%
     - 0.00%
   * - ssd-mobilenet-v1-coco
     - COCO2017_detection_80cl_bkgr
@@ -161,26 +179,26 @@ for more information.
   * - ssd-resnet34-1200
     - COCO2017_detection_80cl_bkgr
     - map
     - 0.01%
     - 0.06%
     - -0.06%
     - 0.00%
     - 0.00%
     - 0.00%
   * - unet-camvid-onnx-0001
     - CamVid_12cl
     - mean_iou @ mean
     - 0.02%
     - -6.45%
     - 6.45%
     - 0.00%
     - 0.00%
     - 0.00%
   * - yolo_v3
     - COCO2017_detection_80cl
     - map
     - 0.00%
     - 0.01%
     - 0.01%
     - 0.00%
     - 0.00%
   * - yolo_v3_tiny
     - COCO2017_detection_80cl
     - map
     - 0.00%
     - -0.02%
     - -0.04%
     - -0.04%
     - 0.02%
   * - yolo_v8n
     - COCO2017_detection_80cl
@@ -188,24 +206,18 @@ for more information.
     - 0.00%
     - 0.00%
     - 0.00%
   * - bloomz-560m
     - ROOTS corpus
   * - chatGLM2-6b
     - lambada-openai
     - ppl
     -
     - 22.89
     - 22.89
   * - GPT-J-6B
     - Pile dataset
     - ppl
     - 17.488
     -
     - 4.10
     - 4.10
   * - Llama-2-7b-chat
     - Wiki, StackExch, Crawl
     - ppl
     -
     - 2.91
     - 2.91
     - 3.262
     -
   * - Stable-Diffusion-V2-1
     - LAION-5B
     - ppl
@@ -1,334 +0,0 @@
# Pre-release Information {#prerelease_information}

@sphinxdirective

.. meta::
   :description: Check the pre-release information that includes a general
                 changelog for each version of OpenVINO Toolkit published under
                 the current cycle.

To ensure you can test OpenVINO's upcoming features even before they are officially released,
OpenVINO developers continue to roll out pre-release software. On this page you can find
a general changelog for each version published under the current cycle.

Your feedback on these new features is critical for us to make the best possible production quality version.
Please file a GitHub issue on these with the label "pre-release" so we can give it immediate attention. Thank you.

.. note::

   These versions are pre-release software and have not undergone full validation or qualification. OpenVINO™ toolkit pre-release is:

   * NOT to be incorporated into production software/solutions.
   * NOT subject to official support.
   * Subject to change in the future.
   * Introduced to allow early testing and get early feedback from the community.

.. button-link:: https://github.com/openvinotoolkit/openvino/issues/new?assignees=octocat&labels=Pre-release%2Csupport_request&projects=&template=pre_release_feedback.yml&title=%5BPre-Release+Feedback%5D%3A
   :color: primary
   :outline:

   :material-regular:`feedback;1.4em` Share your feedback


.. dropdown:: OpenVINO Toolkit 2023.2 Dev 22.09.2023
   :animate: fade-in-slide-down
   :color: primary
   :open:

   **What's Changed:**

   * CPU runtime:

     * Optimized YoloV8n and YoloV8s models on BF16/FP32.
     * Optimized Falcon model on 4th Generation Intel® Xeon® Scalable Processors.

   * GPU runtime:

     * Int8 weight compression further improves LLM performance. PR #19548
     * Optimization for gemm & fc in iGPU. PR #19780

   * TensorFlow FE:

     * Added support for the Selu operation. PR #19528
     * Added support for the XlaConvV2 operation. PR #19466
     * Added support for the TensorListLength and TensorListResize operations. PR #19390

   * PyTorch FE:

     * New operations supported:

       * aten::minimum and aten::maximum. PR #19996
       * aten::broadcast_tensors. PR #19994
       * aten::logical_and, aten::logical_or, aten::logical_not, aten::logical_xor. PR #19981
       * aten::scatter_reduce, and extended aten::scatter. PR #19980
       * prim::TupleIndex. PR #19978
       * mixed precision in aten::min/max. PR #19936
       * aten::tile. PR #19645
       * aten::one_hot. PR #19779
       * PReLU. PR #19515
       * aten::swapaxes. PR #19483
       * non-boolean inputs for __or__ and __and__ operations. PR #19268

   * Torchvision NMS can accept negative scores. PR #19826
   * New openvino_notebooks:

     * Visual Question Answering and Image Captioning using BLIP

   **Fixed GitHub issues**

   * Fixed #19784 "[Bug]: Cannot install libprotobuf-dev along with libopenvino-2023.0.2 on Ubuntu 22.04" with PR #19788
   * Fixed #19617 "Add a clear error message when creating an empty Constant" with PR #19674
   * Fixed #19616 "Align openvino.compile_model and openvino.Core.compile_model functions" with PR #19778
   * Fixed #19469 "[Feature Request]: Add SeLu activation in the OpenVino IR (TensorFlow Conversion)" with PR #19528
   * Fixed #19019 "[Bug]: Low performance of the TF quantized model." with PR #19735
   * Fixed #19018 "[Feature Request]: Support aarch64 python wheel for Linux" with PR #19594
   * Fixed #18831 "Question: openvino support for Nvidia Jetson Xavier ?" with PR #19594
   * Fixed #18786 "OpenVINO Wheel does not install Debug libraries when CMAKE_BUILD_TYPE is Debug #18786" with PR #19197
   * Fixed #18731 "[Bug] Wrong output shapes of MaxPool" with PR #18965
   * Fixed #18091 "[Bug] 2023.0 Version crashes on Jetson Nano - L4T - Ubuntu 18.04" with PR #19717
   * Fixed #7194 "Conan for simplifying dependency management" with PR #17580

   **Acknowledgements:**

   Thanks for contributions from the OpenVINO developer community:

   * @siddhant-0707,
   * @PRATHAM-SPS,
   * @okhovan

.. dropdown:: OpenVINO Toolkit 2023.1.0.dev20230811
   :animate: fade-in-slide-down
   :color: secondary

   `Check on GitHub <https://github.com/openvinotoolkit/openvino/releases/tag/2023.1.0.dev20230811>`__

   **New features:**

   * CPU runtime:

     * Enabled weights decompression support for Large Language Models (LLMs). The implementation
       supports avx2 and avx512 HW targets for Intel® Core™ processors for improved
       latency mode (FP32 vs. FP32+INT8 weights comparison). For 4th Generation Intel® Xeon®
       Scalable Processors (formerly Sapphire Rapids) this INT8 decompression feature provides
       performance improvement, compared to pure BF16 inference.
     * Reduced memory consumption of the compile model stage by moving constant folding of Transpose
       nodes to the CPU Runtime side.
     * Set FP16 inference precision by default for non-convolution networks on ARM. Convolution
       networks will be executed in FP32.

   * GPU runtime: Added paddings for dynamic convolutions to improve performance for models like
     Stable-Diffusion v2.1.

   * Python API:

     * Added the ``torchvision.transforms`` object to OpenVINO preprocessing.
     * Moved all Python tools related to OpenVINO into a single namespace,
       improving user experience with better API readability.

   * TensorFlow FE:

     * Added support for the TensorFlow 1 Checkpoint format. All native TensorFlow formats are now enabled.
     * Added support for 8 new operations:

       * MaxPoolWithArgmax
       * UnravelIndex
       * AdjustContrastv2
       * InvertPermutation
       * CheckNumerics
       * DivNoNan
       * EnsureShape
       * ShapeN

   * PyTorch FE:

     * Added support for 7 new operations. To learn how to convert PyTorch models, follow
       this `Link <https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch.html#experimental-converting-a-pytorch-model-with-pytorch-frontend>`__:

       * aten::concat
       * aten::masked_scatter
       * aten::linspace
       * aten::view_as
       * aten::std
       * aten::outer
       * aten::broadcast_to

   **New openvino_notebooks:**

   * `245-typo-detector <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/245-typo-detector>`__
     : English Typo Detection in sentences with OpenVINO™

   * `247-code-language-id <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/247-code-language-id/247-code-language-id.ipynb>`__
     : Identify the programming language used in an arbitrary code snippet

   * `121-convert-to-openvino <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/121-convert-to-openvino>`__
     : Learn the OpenVINO model conversion API

   * `244-named-entity-recognition <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/244-named-entity-recognition>`__
     : Named entity recognition with OpenVINO™

   * `246-depth-estimation-videpth <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/246-depth-estimation-videpth>`__
     : Monocular Visual-Inertial Depth Estimation with OpenVINO™

   * `248-stable-diffusion-xl <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/248-stable-diffusion-xl>`__
     : Image generation with Stable Diffusion XL

   * `249-oneformer-segmentation <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/249-oneformer-segmentation>`__
     : Universal segmentation with OneFormer

.. dropdown:: OpenVINO Toolkit 2023.1.0.dev20230728
   :animate: fade-in-slide-down
   :color: secondary

   `Check on GitHub <https://github.com/openvinotoolkit/openvino/releases/tag/2023.1.0.dev20230728>`__

   **New features:**

   * Common:

     - Proxy & hetero plugins have been migrated to API 2.0, providing enhanced compatibility and stability.
     - Symbolic shape inference preview is now available, leading to improved performance for Large Language Models (LLMs).

   * CPU Plugin: Memory efficiency for output data between the CPU plugin and the inference request has been significantly improved,
     resulting in better performance for LLMs.
   * GPU Plugin:

     - Enabled support for dynamic shapes in more models, leading to improved performance.
     - Introduced the 'if' and DetectionOutput operators to enhance model capabilities.
     - Various performance improvements for StableDiffusion, SegmentAnything, U-Net, and Large Language Models.
     - Optimized dGPU performance through the integration of oneDNN 3.2 and fusion optimizations for MVN, Crop+Concat, permute, etc.

   * Frameworks:

     - PyTorch updates: OpenVINO now supports originally quantized PyTorch models, including models produced with the Neural Network Compression Framework (NNCF).
     - TensorFlow FE: Now supports Switch/Merge operations, bringing TensorFlow 1.x control flow support closer to full compatibility and enabling more models.
     - Python API: The Python conversion API is now the primary conversion path, making it easier for Python developers to work with OpenVINO.

   * NNCF: Enabled the SmoothQuant method for Post-training Quantization, offering more techniques for quantizing models.

   **Distribution:**

   * Added a conda-forge pre-release channel, simplifying OpenVINO pre-release installation with the ``conda install -c "conda-forge/label/openvino_dev" openvino`` command.
   * The Python API is now distributed as part of the conda-forge distribution, allowing users to access it using the command above.
   * The Runtime can now be installed and used via the vcpkg C++ package manager, providing more flexibility in integrating OpenVINO into projects.

   **New models:**

   * Enabled Large Language Models such as open-llama, bloom, dolly-v2, GPT-J, llama-2, and more. We encourage users to try running their custom LLMs and share their feedback with us!
   * Optimized performance for Stable Diffusion v2.1 (FP16 and INT8 for GPU) and Clip (CPU, INT8) models, improving their overall efficiency and accuracy.

   **New openvino_notebooks:**

   * `242-freevc-voice-conversion <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/242-freevc-voice-conversion>`__ - High-Quality Text-Free One-Shot Voice Conversion with FreeVC
   * `241-riffusion-text-to-music <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/241-riffusion-text-to-music>`__ - Text-to-Music generation using Riffusion
   * `220-books-alignment-labse <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/220-cross-lingual-books-alignment>`__ - Cross-lingual Books Alignment with Transformers
   * `243-tflite-selfie-segmentation <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/243-tflite-selfie-segmentation>`__ - Selfie Segmentation using TFLite

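The distribution options described above amount to two short command-line sessions; a minimal sketch, using the channel label and package names quoted in the text (they may change between releases):

```shell
# Pre-release install via the conda-forge pre-release channel.
conda install -c "conda-forge/label/openvino_dev" openvino

# C++ runtime via the vcpkg package manager.
vcpkg install openvino
```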
.. dropdown:: OpenVINO Toolkit 2023.1.0.dev20230623
   :animate: fade-in-slide-down
   :color: secondary

   The first pre-release for OpenVINO 2023.1, focused on fixing bugs and performance issues.

   `Check on GitHub <https://github.com/openvinotoolkit/openvino/releases/tag/2023.1.0.dev20230623>`__


.. dropdown:: OpenVINO Toolkit 2023.0.0.dev20230407
   :animate: fade-in-slide-down
   :color: secondary

   Note that a new distribution channel has been introduced for C++ developers: `Conda Forge <https://anaconda.org/conda-forge/openvino>`__
   (the 2022.3.0 release is available there now).

   * ARM device support is improved:

     * increased model coverage up to the scope of x86,
     * dynamic shapes enabled,
     * performance boosted for many models, including BERT,
     * validated for Raspberry Pi 4 and Apple® Mac M1/M2.

   * Performance for NLP scenarios is improved, especially for int8 models.
   * The CPU device is enabled with BF16 data types, such that quantized models (INT8) can be run with BF16 plus INT8 mixed
     precision, taking full advantage of the AMX capability of 4th Generation Intel® Xeon® Scalable Processors
     (formerly Sapphire Rapids). The customer sees the BF16/INT8 advantage by default.
   * Performance is improved on modern, hybrid Intel® Xeon® and Intel® Core™ platforms,
     where threads can be reliably and correctly mapped to the E-cores, P-cores, or both CPU core types.
     It is now possible to optimize for performance or for power savings as needed.
   * The Neural Network Compression Framework (NNCF) becomes the quantization tool of choice. It now enables you to perform
     post-training optimization, as well as quantization-aware training. Try it out: ``pip install nncf``.
     The Post-training Optimization Tool (POT) has been deprecated and will be removed in the future
     (`MR16758 <https://github.com/openvinotoolkit/openvino/pull/16758/files>`__).
   * New models are enabled, such as:

     * Stable Diffusion 2.0,
     * Paddle Slim,
     * Segment Anything Model (SAM),
     * Whisper,
     * YOLOv8.

   * Bug fixes:

     * Fixed the problem of the OpenVINO-dev wheel not containing the benchmark_app package.
     * Rolled back the default of model saving with FP16 precision - FP32 is the default again.

   * Known issues:

     * PyTorch model conversion via the convert_model Python API fails if "silent=false" is specified explicitly.
       By default, this parameter is set to true and there should be no issues.


.. dropdown:: OpenVINO Toolkit 2023.0.0.dev20230407
   :animate: fade-in-slide-down
   :color: secondary

   * Enabled remote tensor in C API 2.0 (accepting tensors located in graph memory).
   * Introduced model caching on GPU. Model Caching, which reduces First Inference Latency (FIL), is
     extended to work as a single method on both CPU and GPU plug-ins.
   * Added the post-training Accuracy-Aware Quantization mechanism for OpenVINO IR. By using this mechanism,
     the user can define the accuracy drop criteria and NNCF will consider it during quantization.
   * Migrated the CPU plugin to oneDNN 3.1.
   * Enabled CPU fall-back for the AUTO plugin - in case of a run-time failure of networks on accelerator devices, the CPU is used.
   * AUTO now supports the option to disable CPU as the initial acceleration device, to speed up first-inference latency.
   * Implemented ov::hint::inference_precision, which enables running network inference independently of the IR precision.
     The default mode is FP16; it is possible to infer in FP32 to increase accuracy.
   * Optimized performance on dGPU with Intel oneDNN v3.1, especially for transformer models.
   * Enabled dynamic shapes on iGPU and dGPU for Transformer (NLP) models. Not all dynamic models are enabled yet, but model coverage will be expanded in the following releases.
   * Improved performance for Transformer models in NLP pipelines on CPU.
   * Extended support to the following models:

     * Enabled the MLPerf RNN-T model.
     * Enabled Detectron2 MaskRCNN.
     * Enabled OpenSeeFace models.
     * Enabled the Clip model.
     * Optimized the WeNet model.

   Known issues:

   * The OpenVINO-dev wheel does not contain the benchmark_app package.

.. dropdown:: OpenVINO Toolkit 2023.0.0.dev20230217
   :animate: fade-in-slide-down
   :color: secondary

   OpenVINO™ repository tag: `2023.0.0.dev20230217 <https://github.com/openvinotoolkit/openvino/releases/tag/2023.0.0.dev20230217>`__

   * Enabled PaddlePaddle Framework 2.4.
   * Preview of the TensorFlow Lite Frontend - load models directly via "read_model" into OpenVINO Runtime and export the OpenVINO IR format using the model conversion API or "convert_model".
   * The PyTorch Frontend is available as an experimental feature, which allows you to convert PyTorch models using the convert_model Python API directly from your code, without the need to export to the ONNX format. Model coverage is continuously increasing. Feel free to start using the option and give us feedback.
   * The model conversion API now uses the TensorFlow Frontend as the default path for conversion to IR. Known limitations compared to the legacy approach are: TF1 loops, complex types, models requiring config files, and old Python extensions. The solution detects unsupported functionality and provides a fallback. To force the legacy frontend, ``use_legacy_frontend`` can be specified.
   * The model conversion API now supports out-of-the-box conversion of TF2 Object Detection models. At this point, the same performance experience is guaranteed only on CPU devices. Feel free to start enjoying TF2 Object Detection models without config files!
   * Introduced a new option, ov::auto::enable_startup_fallback / ENABLE_STARTUP_FALLBACK, to control whether to use CPU to accelerate first-inference latency for accelerator HW devices like GPU.
   * Added a new FrontEndManager register_front_end(name, lib_path) interface, to remove the "OV_FRONTEND_PATH" env var (a way to load non-default frontends).
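The legacy-frontend fallback mentioned above could be forced explicitly on the Model Optimizer command line; a hedged sketch, where the model file name is a placeholder:

```shell
# Hypothetical invocation: convert a TensorFlow model while forcing the
# legacy (non-TensorFlow-Frontend) conversion path.
mo --input_model model.pb --use_legacy_frontend
```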
@endsphinxdirective

387 docs/articles_en/about_openvino/releasenotes_for_openvino.md (new file)
@@ -0,0 +1,387 @@
# OpenVINO Release Notes {#openvino_release_notes}

@sphinxdirective

The Intel® Distribution of OpenVINO™ toolkit is an open-source solution for optimizing
and deploying AI inference in domains such as computer vision, automatic speech
recognition, natural language processing, recommendation systems, and generative AI.
With its plug-in architecture, OpenVINO enables developers to write once and deploy
anywhere. We are proud to announce the release of OpenVINO 2023.2, introducing a range
of new features, improvements, and deprecations aimed at enhancing the developer
experience.

New and changed in 2023.2
###########################

Summary of major features and improvements
++++++++++++++++++++++++++++++++++++++++++++

* More Generative AI coverage and framework integrations to minimize code changes.

  * **Expanded model support for direct PyTorch model conversion** - automatically convert
    additional models directly from PyTorch, or execute via ``torch.compile`` with OpenVINO
    as the backend.
  * **New and noteworthy models supported** - we have enabled models used for chatbots,
    instruction following, code generation, and many more, including prominent models
    like Llava, chatGLM, Bark (text to audio), and LCM (Latent Consistency Models, an
    optimized version of Stable Diffusion).
  * **Easier optimization and conversion of Hugging Face models** - compress LLM models
    to Int8 with the Hugging Face Optimum command line interface and export models to
    the OpenVINO IR format.
  * **OpenVINO is now available on Conan** - a package manager which allows more seamless
    package management for large scale projects for C and C++ developers.

* Broader Large Language Model (LLM) support and more model compression techniques.

  * Accelerate inference for LLM models on Intel® Core™ CPU and iGPU with the
    use of Int8 model weight compression.
  * Expanded model support for dynamic shapes for improved performance on GPU.
  * Preview support for the Int4 model format is now included. Int4-optimized model
    weights are now available to try on Intel® Core™ CPU and iGPU, to accelerate
    models like Llama 2 and chatGLM2.
  * The following Int4 model compression formats are supported for inference
    in runtime:

    * Generative Pre-training Transformer Quantization (GPTQ); GPTQ-compressed
      models can be accessed through the Hugging Face repositories.
    * Native Int4 compression through the Neural Network Compression Framework (NNCF).

* More portability and performance to run AI at the edge, in the cloud, or locally.

  * **In 2023.1 we announced full support for the ARM** architecture; now we have improved
    performance by enabling FP16 model formats for LLMs and integrating additional
    acceleration libraries to improve latency.
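The Hugging Face Optimum path mentioned above can be sketched as a short command-line session; the model ID and output directory below are placeholders, and the exact weight-compression flag name varies across optimum-intel versions:

```shell
# Hypothetical sketch: export a Hugging Face model to OpenVINO IR via Optimum.
pip install "optimum[openvino]"
optimum-cli export openvino --model gpt2 ov_gpt2
# An Int8 weight-compression option can be added to the export command;
# check `optimum-cli export openvino --help` for the flag your version uses.
```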
Support Change and Deprecation Notices
++++++++++++++++++++++++++++++++++++++++++

* The OpenVINO™ Development Tools package (pip install openvino-dev) is deprecated
  and will be removed from installation options and distribution channels with
  2025.0. To learn more, refer to the
  :doc:`OpenVINO Legacy Features and Components page <openvino_legacy_features>`.
  To ensure optimal performance, install the OpenVINO package (pip install openvino),
  which includes essential components such as OpenVINO Runtime, OpenVINO Converter,
  and Benchmark Tool.

* Tools:

  * :doc:`Deployment Manager <openvino_docs_install_guides_deployment_manager_tool>`
    is deprecated and will be removed in the 2024.0 release.
  * Accuracy Checker is deprecated and will be discontinued with 2024.0.
  * Post-Training Optimization Tool (POT) is deprecated and will be
    discontinued with 2024.0.
  * Model Optimizer is deprecated and will be fully supported up until the 2025.0
    release. Model conversion to the OpenVINO format should be performed through
    OpenVINO Model Converter, which is part of the PyPI package. Follow the
    :doc:`Model Optimizer to OpenVINO Model Converter transition <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`
    guide for a smoother transition. Known limitations are TensorFlow models with
    TF1 control flow and object detection models. These limitations relate to
    the gap in TensorFlow direct conversion capabilities, which will be addressed
    in upcoming releases.
  * PyTorch 1.13 support is deprecated in the Neural Network Compression Framework (NNCF).

* Runtime:

  * Intel® Gaussian & Neural Accelerator (Intel® GNA) will be deprecated in a future
    release. We encourage developers to use the Neural Processing Unit (NPU) for
    low-powered systems like Intel® Core™ Ultra or 14th generation and beyond.
  * OpenVINO C++/C/Python 1.0 APIs will be discontinued with 2024.0.
  * Python 3.7 support has been discontinued.
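In practical terms, the openvino-dev deprecation notice above amounts to switching packages; a minimal sketch:

```shell
# Replace the deprecated developer package with the main OpenVINO package,
# which bundles the Runtime, OpenVINO Converter, and Benchmark Tool.
pip uninstall -y openvino-dev
pip install openvino
```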
OpenVINO™ Development Tools
|
||||
++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
List of components and their changes:
|
||||
------------------------------------------
|
||||
|
||||
* :doc:`OpenVINO Model Converter tool <openvino_docs_model_processing_introduction>`
|
||||
now supports the original framework shape format.
|
||||
* `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__
|
||||
|
||||
* Added data-free Int4 weight compression support for LLMs in OpenVINO IR with
|
||||
``nncf.compress_weights()``.
|
||||
* Improved quantization time of LLMs with NNCF PTQ API for ``nncf.quantize()``
|
||||
and ``nncf.quantize_with_accuracy_control()``.
|
||||
* Added support for SmoothQuant and ChannelAlighnment algorithms in NNCF HyperParameter
|
||||
Tuner for automatic optimization of their hyperparameters during quantization.
|
||||
* Added quantization support for the ``IF`` operation of models in OpenVINO format
|
||||
to speed up such models.
|
||||
* NNCF Post-training Quantization for PyTorch backend is now supported with
|
||||
``nncf.quantize()`` and the common implementation of quantization algorithms.
|
||||
* Added support for PyTorch 2.1. PyTorch 1.13 support has been deprecated.
|
||||
|
||||
OpenVINO™ Runtime (previously known as Inference Engine)
|
||||
---------------------------------------------------------
|
||||
|
||||
* OpenVINO Common
|
||||
|
||||
* Operations for reference implementations updated from legacy API to API 2.0.
|
||||
* Symbolic transformation introduced the ability to remove Reshape operations
|
||||
surrounding MatMul operations.
|
||||
|
||||
* OpenVINO Python API
|
||||
|
||||
* Better support for the ``openvino.properties`` submodule, which now allows the use
|
||||
of properties directly, without additional parenthesis. Example use-case:
|
||||
``{openvino.properties.cache_dir: “./some_path/”}``.
|
||||
* Added missing properties: ``execution_devices`` and ``loaded_from_cache``.
|
||||
* Improved error propagation on imports from OpenVINO package.
|
||||
|
||||
* AUTO device plug-in (AUTO)
|
||||
|
||||
* o Provided additional option to improve performance of cumulative throughput
|
||||
(or MULTI), where part of CPU resources can be reserved for GPU inference
|
||||
when GPU and CPU are both used for inference (using ``ov::hint::enable_cpu_pinning(true)``).
|
||||
This avoids the performance issue of CPU resource contention where there
|
||||
is not enough CPU resources to schedule tasks for GPU
|
||||
(`PR #19214 <https://github.com/openvinotoolkit/openvino/pull/19214>`__).
|
||||
|
||||
* CPU

  * Introduced support for GPTQ-quantized Int4 models, with improved performance
    compared to Int8 weight-compressed or FP16 models. In the CPU plugin,
    the performance gain is achieved by FullyConnected acceleration with
    4-bit weight decompression
    (`PR #20607 <https://github.com/openvinotoolkit/openvino/pull/20607>`__).
  * Improved performance of Int8 weight-compressed large language models on
    some platforms, such as 13th Gen Intel Core
    (`PR #20607 <https://github.com/openvinotoolkit/openvino/pull/20607>`__).
  * Further reduced memory consumption of select large language models on
    CPU platforms with AMX and AVX512 ISA, by eliminating an extra memory copy
    with a unified weight layout
    (`PR #19575 <https://github.com/openvinotoolkit/openvino/pull/19575>`__).
  * Fixed a performance issue observed in the 2023.1 release on select Xeon CPU
    platforms, with improved thread workload partitioning matching L2 cache
    utilization
    (`PR #20436 <https://github.com/openvinotoolkit/openvino/pull/20436>`__).
  * Extended support of the ``enable_cpu_pinning`` configuration on Windows
    platforms, allowing fine-grained control over the CPU resources used for
    inference workloads by binding inference threads to CPU cores
    (`PR #19418 <https://github.com/openvinotoolkit/openvino/pull/19418>`__).
  * Optimized YoloV8n and YoloV8s model performance for BF16/FP32 precision.
  * Optimized the Falcon model on 4th Gen Intel® Xeon® Scalable Processors.
  * Enabled support for FP16 inference precision on ARM.

* GPU

  * Enhanced inference performance for Large Language Models.
  * Introduced Int8 weight compression to boost LLM performance
    (`PR #19548 <https://github.com/openvinotoolkit/openvino/pull/19548>`__).
  * Implemented Int4 GPTQ weight compression for improved LLM performance.
  * Optimized constant weights for LLMs, resulting in better memory usage
    and faster model loading.
  * Optimized gemm (general matrix multiply) and fc (fully connected) kernels for
    enhanced performance on iGPU
    (`PR #19780 <https://github.com/openvinotoolkit/openvino/pull/19780>`__).
  * Completed GPU plugin migration to API 2.0.
  * Added support for oneDNN version 3.3.

* Model Import Updates

  * TensorFlow Framework Support

    * Supported conversion of models from memory in keras.Model and tf.function formats
      (`PR #19903 <https://github.com/openvinotoolkit/openvino/pull/19903>`__).
    * Supported TF 2.14
      (`PR #20385 <https://github.com/openvinotoolkit/openvino/pull/20385>`__).

  * PyTorch Framework Support

    * Supported Int4 GPTQ models.
    * New operations supported.

  * ONNX Framework Support

    * Added support for ONNX version 1.14.1
      (`PR #18359 <https://github.com/openvinotoolkit/openvino/pull/18359>`__).


OpenVINO Ecosystem
+++++++++++++++++++++++++++++++++++++++++++++

OpenVINO Model Server
--------------------------

Introduced an extension of the KServe gRPC API, enabling streaming input and
output for servables with Mediapipe graphs. This extension ensures the persistence
of Mediapipe graphs within a user session, improving processing performance.
This enhancement supports stateful graphs, such as tracking algorithms, and
enables the use of source calculators
(`see additional documentation <https://github.com/openvinotoolkit/model_server/blob/main/docs/streaming_endpoints.md>`__).

* The Mediapipe framework has been updated to version 0.10.3.
* model_api used in the OpenVINO inference Mediapipe calculator has been updated
  and included with all its features.
* Added a demo showcasing gRPC streaming with a Mediapipe graph
  (`see here <https://github.com/openvinotoolkit/model_server/tree/main/demos/mediapipe/holistic_tracking>`__).
* Added parameters for gRPC quota configuration and changed default gRPC channel
  arguments to add rate limits. This minimizes the risk of the service being
  impacted by an uncontrolled flow of requests.
* Updated Python client requirements to match a wide range of Python versions,
  from 3.6 to 3.11.

Learn more about the changes in https://github.com/openvinotoolkit/model_server/releases

Jupyter Notebook Tutorials
-----------------------------

* The following notebooks have been updated or newly added:

  * `LaBSE <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/220-cross-lingual-books-alignment>`__
    Cross-lingual Books Alignment With Transformers
  * `LLM chatbot <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/254-llm-chatbot>`__
    Create an LLM-powered Chatbot

    * Updated to include Int4 weight compression and the Zephyr 7B model

  * `Bark Text-to-Speech <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/256-bark-text-to-audio>`__
    Text-to-speech generation using Bark
  * `LLaVA Multimodal Chatbot <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/257-llava-multimodal-chatbot>`__
    Visual-language assistant with LLaVA
  * `BLIP-Diffusion - Subject-Driven Generation <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/258-blip-diffusion-subject-generation>`__
    Subject-driven image generation and editing using BLIP Diffusion
  * `DeciDiffusion <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/259-decidiffusion-image-generation>`__
    Image generation with DeciDiffusion
  * `Fast Segment Anything <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/261-fast-segment-anything>`__
    Object segmentation with FastSAM
  * `SoftVC VITS Singing Voice Conversion <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/262-softvc-voice-conversion>`__
  * `QR Code Monster <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/264-qrcode-monster>`__
    Generate creative QR codes with ControlNet QR Code Monster
  * `Würstchen <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/265-wuerstchen-image-generation>`__
    Text-to-image generation with Würstchen
  * `Distil-Whisper <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/267-distil-whisper-asr>`__
    Automatic speech recognition using Distil-Whisper and OpenVINO™

* Added optimization support (8-bit quantization, weight compression)
  by NNCF for the following notebooks:

  * `Image generation with DeepFloyd IF <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/238-deepfloyd-if>`__
  * `Instruction following using Databricks Dolly 2.0 <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/240-dolly-2-instruction-following>`__
  * `Visual Question Answering and Image Captioning using BLIP <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/233-blip-visual-language-processing>`__
  * `Grammatical Error Correction <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/214-grammar-correction>`__
  * `Universal segmentation with OneFormer <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/249-oneformer-segmentation>`__
  * `Visual-language assistant with LLaVA and OpenVINO <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/257-llava-multimodal-chatbot>`__
  * `Image editing with InstructPix2Pix <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/231-instruct-pix2pix-image-editing>`__
  * `MMS: Scaling Speech Technology to 1000+ languages <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/255-mms-massively-multilingual-speech>`__
  * `Image generation with Latent Consistency Model <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/263-latent-consistency-models-image-generation>`__
  * `Object segmentation with FastSAM <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/261-fast-segment-anything>`__
  * `Automatic speech recognition using Distil-Whisper <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/267-distil-whisper-asr>`__

Known issues
++++++++++++++++++++++++++++++++++++++++++++

| **ID - 118179**
| *Component* - Python API, Plugins
| *Description:*
| When input byte sizes match, inference methods accept incorrect inputs
  in copy mode (``share_inputs=False``). For example, an input of shape
  [1, 4, 512, 512] is accepted when [1, 512, 512, 4] is required by the model.
| *Workaround:*
| Pass inputs whose shape and layout match those expected by the model.
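To see why the byte-size check lets the wrong layout through, here is a minimal NumPy sketch (illustrative only; the shapes come from the issue description above, and NumPy is assumed to be available):

```python
import numpy as np

# NCHW data arriving for a model that expects NHWC input [1, 512, 512, 4]
nchw = np.zeros((1, 4, 512, 512), dtype=np.float32)
nhwc = nchw.transpose(0, 2, 3, 1)  # reorder axes to the model's layout

# Both layouts occupy the same number of bytes, so a byte-size-only
# check cannot tell them apart:
assert nchw.nbytes == nhwc.nbytes
assert nhwc.shape == (1, 512, 512, 4)
```

An explicit transpose like the one above produces the layout the model expects, which is exactly the workaround: match the model's shape and layout before inference.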

| **ID - 124181**
| *Component* - CPU plugin
| *Description:*
| On CPU platforms with an L2 cache size of less than 256 KB, such as the i3 series
  of 8th Gen Intel® Core™ platforms, some models may hang during model loading.
| *Workaround:*
| Rebuild the software from the OpenVINO master branch or use the next OpenVINO release.

| **ID - 121959**
| *Component* - CPU plugin
| *Description:*
| During inference using the latency hint on selected hybrid CPU platforms
  (such as 12th or 13th Gen Intel® Core™), increased latency may sporadically occur,
  caused by the operating system scheduling P-cores or E-cores during OpenVINO
  initialization.
| *Workaround:*
| This will be fixed in the next OpenVINO release.

| **ID - 123101**
| *Component* - GPU plugin
| *Description:*
| The GPU plugin may hang on A770 Graphics (dGPU) in the case of a
  large batch size (1750).
| *Workaround:*
| Decrease the batch size and wait for a fixed driver release.

Included in This Release
+++++++++++++++++++++++++++++++++++++++++++++

The Intel® Distribution of OpenVINO™ toolkit is available for downloading in
three types of operating systems: Windows, Linux, and macOS.

.. list-table::
   :header-rows: 1

   * - Component
     - License
     - Location
   * - | OpenVINO (Inference Engine) C++ Runtime
       | Unified API to integrate the inference with application logic
       | OpenVINO (Inference Engine) Headers
     - | Dual licensing:
       | Intel® OpenVINO™ Distribution License (Version May 2021)
       | Apache 2.0
     - | <install_root>/runtime/*
       | <install_root>/runtime/include/*
   * - OpenVINO (Inference Engine) Python API
     - Apache 2.0
     - <install_root>/python/*
   * - | OpenVINO (Inference Engine) Samples
       | Samples that illustrate OpenVINO C++/ Python API usage
     - Apache 2.0
     - <install_root>/samples/*
   * - | [Deprecated] Deployment manager
       | The Deployment Manager is a Python* command-line tool that
       | creates a deployment package by assembling the model, IR files,
       | your application, and associated dependencies into a runtime
       | package for your target device.
     - Apache 2.0
     - <install_root>/tools/deployment_manager/*

Legal Information
+++++++++++++++++++++++++++++++++++++++++++++

You may not use or facilitate the use of this document in connection with any infringement
or other legal analysis concerning Intel products described herein.

You agree to grant Intel a non-exclusive, royalty-free license to any patent claim
thereafter drafted which includes subject matter disclosed herein.

No license (express or implied, by estoppel or otherwise) to any intellectual property
rights is granted by this document.

All information provided here is subject to change without notice. Contact your Intel
representative to obtain the latest Intel product specifications and roadmaps.

The products described may contain design defects or errors known as errata which may
cause the product to deviate from published specifications. Current characterized errata
are available on request.

Intel technologies' features and benefits depend on system configuration and may require
enabled hardware, software or service activation. Learn more at
`http://www.intel.com/ <http://www.intel.com/>`__
or from the OEM or retailer.

No computer system can be absolutely secure.

Intel, Atom, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks
of Intel Corporation in the U.S. and/or other countries.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission of Khronos.

Other names and brands may be claimed as the property of others.

Copyright © 2023, Intel Corporation. All rights reserved.

For more complete information about compiler optimizations, see our Optimization Notice.

Performance varies by use, configuration and other factors. Learn more at
`www.Intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.

Download
+++++++++++++++++++++++++++++++++++++++++++++

`The OpenVINO product selector tool <https://docs.openvino.ai/install>`__
provides easy access to the right packages that match your desired OS, version,
and distribution options.

@endsphinxdirective

# System Requirements {#system_requirements}

@sphinxdirective

Certain hardware requires specific drivers to work properly with OpenVINO.
These drivers, including Linux* kernels, might require updates to your operating
system, which are not part of OpenVINO installation. Refer to your hardware's
documentation for updating instructions.

CPU
##########

.. tab-set::

   .. tab-item:: Supported Hardware

      * Intel Atom® processor with Intel® SSE4.2 support
      * Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
      * 6th - 13th generation Intel® Core™ processors
      * Intel® Core™ Ultra (codename Meteor Lake)
      * Intel® Xeon® Scalable Processors (code name Skylake)
      * 2nd Generation Intel® Xeon® Scalable Processors (code name Cascade Lake)
      * 3rd Generation Intel® Xeon® Scalable Processors (code name Cooper Lake and Ice Lake)
      * 4th Generation Intel® Xeon® Scalable Processors (code name Sapphire Rapids)
      * ARM* and ARM64 CPUs; Apple M1, M2, and Raspberry Pi

   .. tab-item:: Supported Operating Systems

      * Ubuntu 22.04 long-term support (LTS), 64-bit (Kernel 5.15+)
      * Ubuntu 20.04 long-term support (LTS), 64-bit (Kernel 5.15+)
      * Ubuntu 18.04 long-term support (LTS) with limitations, 64-bit (Kernel 5.4+)
      * Windows* 10
      * Windows* 11
      * macOS* 10.15 and above, 64-bit
      * macOS 11 and above, ARM64
      * Red Hat Enterprise Linux* 8, 64-bit
      * Debian 9 ARM64 and ARM
      * CentOS 7, 64-bit

GPU
##########

.. tab-set::

   .. tab-item:: Supported Hardware

      * Intel® HD Graphics
      * Intel® UHD Graphics
      * Intel® Iris® Pro Graphics
      * Intel® Iris® Xe Graphics
      * Intel® Iris® Xe Max Graphics
      * Intel® Arc™ GPU Series
      * Intel® Data Center GPU Flex Series
      * Intel® Data Center GPU Max Series

   .. tab-item:: Supported Operating Systems

      * Ubuntu 22.04 long-term support (LTS), 64-bit
      * Ubuntu 20.04 long-term support (LTS), 64-bit
      * Windows 10, 64-bit
      * Windows 11, 64-bit
      * CentOS 7, 64-bit
      * Red Hat Enterprise Linux 8, 64-bit

   .. tab-item:: Additional considerations

      * The use of a GPU requires drivers that are not included in the Intel®
        Distribution of OpenVINO™ toolkit package.
      * A chipset that supports processor graphics is required for Intel® Xeon®
        processors. Processor graphics are not included in all processors. See
        `Product Specifications <https://ark.intel.com/>`__
        for information about your processor.
      * Although this release works with Ubuntu 20.04 for discrete graphics cards,
        Ubuntu 20.04 is not the plan of record for discrete graphics drivers, so
        OpenVINO support is limited.
      * The following minimum (i.e., used for old hardware) OpenCL™ driver versions
        were used during OpenVINO internal validation: 22.43 for Ubuntu 22.04, 21.48
        for Ubuntu 20.04, and 21.49 for Red Hat Enterprise Linux 8.


NPU and GNA
#############################

.. tab-set::

   .. tab-item:: Operating Systems for NPU

      * Ubuntu 22.04 long-term support (LTS), 64-bit
      * Windows 11, 64-bit

   .. tab-item:: Operating Systems for GNA

      * Ubuntu 22.04 long-term support (LTS), 64-bit
      * Ubuntu 20.04 long-term support (LTS), 64-bit
      * Windows 10, 64-bit
      * Windows 11, 64-bit

   .. tab-item:: Additional considerations

      * These accelerators require drivers that are not included in the
        Intel® Distribution of OpenVINO™ toolkit package.
      * Users can access the NPU plugin through the OpenVINO archives on
        the download page.


Operating systems and developer environment
#######################################################

.. tab-set::

   .. tab-item:: Linux

      * Ubuntu 22.04 with Linux kernel 5.15+
      * Ubuntu 20.04 with Linux kernel 5.15+
      * Red Hat Enterprise Linux 8 with Linux kernel 5.4

      Build environment components:

      * Python* 3.8-3.11
      * `Intel® HD Graphics Driver <https://downloadcenter.intel.com/product/80939/Graphics-Drivers>`__,
        required for inference on GPU
      * GNU Compiler Collection and CMake are needed for building from source:

        * `GNU Compiler Collection (GCC) <https://www.gnu.org/software/gcc/>`__ 7.5 and above
        * `CMake <https://cmake.org/download/>`__ 3.10 or higher

      Higher versions of kernel might be required for 10th Gen Intel® Core™ Processors,
      11th Gen Intel® Core™ Processors, 11th Gen Intel® Core™ Processors S-Series Processors,
      12th Gen Intel® Core™ Processors, 13th Gen Intel® Core™ Processors, Intel® Core™ Ultra
      Processors, or 4th Gen Intel® Xeon® Scalable Processors to support CPU, GPU, GNA, or
      hybrid-core CPU capabilities.

   .. tab-item:: Windows

      * Windows 10
      * Windows 11

      Build environment components:

      * `Microsoft Visual Studio 2019 <https://visualstudio.microsoft.com/vs/older-downloads/>`__
      * `CMake <https://cmake.org/download/>`__ 3.10 or higher
      * `Python* 3.8-3.11 <http://www.python.org/downloads/>`__
      * `Intel® HD Graphics Driver <https://downloadcenter.intel.com/product/80939/Graphics-Drivers>`__,
        required for inference on GPU

   .. tab-item:: macOS

      * macOS 10.15 and above

      Build environment components:

      * `Xcode* 10.3 <https://developer.apple.com/xcode/>`__
      * `Python* 3.8-3.11 <http://www.python.org/downloads/>`__
      * `CMake <https://cmake.org/download/>`__ 3.10 or higher

   .. tab-item:: DL frameworks versions

      * TensorFlow* 1.15, 2.12
      * MxNet* 1.9.0
      * ONNX* 1.14.1
      * PaddlePaddle* 2.4

      This package can be installed with other versions of these DL frameworks,
      but only the versions specified here are fully validated.


@endsphinxdirective

API References
##############

* `OpenVINO Plugin API <https://docs.openvino.ai/2023.2/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.2/groupie_transformation_api.html>`__

@endsphinxdirective

The guides below provide extra API references needed for OpenVINO plugin development:

* `OpenVINO Plugin API <https://docs.openvino.ai/2023.2/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.2/groupie_transformation_api.html>`__

@endsphinxdirective

   :hidden:

   openvino_docs_MO_DG_IR_and_opsets
   openvino_docs_ops_opset
   openvino_docs_operations_specifications
   openvino_docs_ops_broadcast_rules
   openvino_docs_MO_DG_prepare_model_convert_model_IR_suitable_for_INT8_inference

Models built and trained using various frameworks can be large and architecture-dependent. To successfully run inference on any device and maximize the benefits of OpenVINO tools, you can convert the model to the OpenVINO Intermediate Representation (IR) format.

:description: Learn the essentials of representing deep learning models in OpenVINO
              IR format and the use of supported operation sets.

This article provides essential information on the format used for the representation of deep learning models in the OpenVINO toolkit, and on the supported operation sets.

It performs an element-wise activation function on a given input tensor, based on the following mathematical formula:

.. math::

   Elu(x) = \left\lbrace
   \begin{array}{r}
       x \quad \text{if } x > 0 \\
       \alpha(e^{x} - 1) \quad \text{if } x \leq 0
   \end{array}
   \right.

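As a quick sanity check of the formula, here is a minimal pure-Python sketch of Elu (illustrative only; defaulting ``alpha`` to 1.0 is our assumption here, the operation's actual value comes from its *alpha* attribute):

```python
import math

def elu(x, alpha=1.0):
    # Piecewise definition: x for positive inputs, alpha * (e^x - 1) otherwise
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

print(elu(2.0))    # positive branch: returns x unchanged
print(elu(-1.0))   # negative branch: alpha * (e^-1 - 1), about -0.632
```

Note that both branches meet at zero: the negative branch gives ``alpha * (e^0 - 1) = 0``, so the function is continuous.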
Given a list of probabilities x1, x2, ..., xn:

* For each probability x, replace it with a value :math:`e^{x}`.
* Create an array - a discrete CDF (`Cumulative Distribution Function <https://hal.science/hal-00753950/file/PEER_stage2_10.1016%252Fj.spl.2011.03.014.pdf>`__) - as the cumulative sum of those probabilities, i.e. create an array of values where the i-th value is the sum of the probabilities x1, ..., xi.
* Divide the created array by its maximum value to normalize the cumulative probabilities to real values in the range [0, 1]. This array is, by definition of CDF, sorted in ascending order, hence the maximum value is the last value of the array.
* Randomly generate a sequence of double-precision floating point numbers in the range [0, 1].
* For each generated number, assign the class with the lowest index for which the cumulative probability is greater than or equal to the generated value.
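The steps above can be sketched in pure Python (an illustrative, unoptimized implementation for a single batch with replacement; the function name and the ``rng`` hook are ours, not part of the operation's API):

```python
import math
import random

def multinomial_sample(probs, num_samples, log_probs=False, rng=random.random):
    """CDF-based sampling with replacement, following the steps above."""
    if log_probs:
        probs = [math.exp(x) for x in probs]          # step 1: exponentiate
    cdf = []                                          # step 2: cumulative sums
    total = 0.0
    for p in probs:
        total += p
        cdf.append(total)
    cdf = [c / cdf[-1] for c in cdf]                  # step 3: normalize to [0, 1]
    samples = []
    for _ in range(num_samples):
        u = rng()                                     # step 4: uniform in [0, 1]
        # step 5: lowest index whose cumulative probability >= u
        samples.append(next(i for i, c in enumerate(cdf) if c >= u))
    return samples

# Reproduces the fixed "random" floats of the first example computation below:
floats = iter([0.2, 0.4, 0.6, 0.8, 1.0])
print(multinomial_sample([0.1, 0.5, 0.4], 5, rng=lambda: next(floats)))
# -> [1, 1, 1, 2, 2]
```

Injecting the random source through ``rng`` makes the sketch deterministic and easy to check against the worked examples.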
**Example computations**:

Example 1 - simple 2D tensor with one batch

* Let ``probs`` = ``[[0.1, 0.5, 0.4]]``, ``num_samples`` = 5, ``log_probs`` = false, ``with_replacement`` = true
* CDF of ``probs`` = ``[[0.1, 0.1 + 0.5, 0.1 + 0.5 + 0.4]]`` = ``[[0.1, 0.6, 1]]``
* Randomly generated floats = ``[[0.2, 0.4, 0.6, 0.8, 1]]``
* Assigned classes = ``[[1, 1, 1, 2, 2]]``

Example 2 - 2D tensor, log probabilities
|
||||
|
||||
@@ -60,20 +60,20 @@ Example 2 - 2D tensor, log probabilities
|
||||
* Randomly generated floats = ``[[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1], [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]]``
|
||||
* Assigned classes = ``[[1, 1, 2, 2, 2, 2, 2, 2, 2, 2], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]``
|
||||
|
||||
Example 3 - 1D tensor, without replacement
|
||||
Example 3 - 2D tensor, without replacement
|
||||
|
||||
* Let ``probs`` = ``[0.1, 0.5, 0.4]``, ``num_samples`` = 2, ``log_probs`` = false, ``with_replacement`` = false
|
||||
* CDF of ``probs`` = ``[0.1, 0.6, 1]``
|
||||
* Randomly generated floats = ``[0.3, 0.2]``
|
||||
* Let ``probs`` = ``[[0.1, 0.5, 0.4]]``, ``num_samples`` = 2, ``log_probs`` = false, ``with_replacement`` = false
|
||||
* CDF of ``probs`` = ``[[0.1, 0.6, 1]]``
|
||||
* Randomly generated floats = ``[[0.3, 0.2]]``
|
||||
* In a loop:
|
||||
|
||||
* For a value of 0.3, a class with idx ``1`` is selected
|
||||
* Therefore, in CDF, for every class starting with idx ``1`` subtract the probability of class at idx ``1`` = ``probs[1]`` = 0.5
|
||||
* CDF = ``[0.1, 0.6 - 0.5, 1.0 - 0.5]`` = ``[0.1, 0.1, 0.5]``
|
||||
* Normalize CDF by dividing by last value: CDF = ``[0.2, 0.2, 1.0]``
|
||||
* CDF = ``[[0.1, 0.6 - 0.5, 1.0 - 0.5]]`` = ``[[0.1, 0.1, 0.5]]``
|
||||
* Normalize CDF by dividing by last value: CDF = ``[[0.2, 0.2, 1.0]]``
|
||||
* Take the next randomly generated float, here 0.2, and repeat until all random samples have assigned classes. Notice that for ``sampled values`` <= 0.2, only the class with idx ``0`` can be selected, since the search stops at the index with the first value satisfying ``sample value`` <= ``CDF probability``
|
||||
|
||||
* Assigned classes = ``[1, 2]``
|
||||
* Assigned classes = ``[[1, 2]]``
|
||||
|
||||
|
||||
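The without-replacement loop of Example 3 can also be sketched in plain Python. This is an illustrative sketch, not the OpenVINO kernel; to keep the example unambiguous, the second "random" float is chosen as 0.9 (a hypothetical value, different from the 0.2 in the walk-through above, where the sample ties exactly with a CDF entry and the outcome depends on how the implementation breaks ties):

```python
import bisect

def multinomial_without_replacement(probs, samples):
    """One class per sample; a drawn class is removed from the pool by
    subtracting its probability from the CDF tail and renormalizing."""
    # Build and normalize the discrete CDF.
    cdf, total = [], 0.0
    for p in probs:
        total += p
        cdf.append(total)
    cdf = [c / cdf[-1] for c in cdf]
    classes = []
    for s in samples:
        # Lowest index whose cumulative probability is >= the sample.
        idx = bisect.bisect_left(cdf, s)
        classes.append(idx)
        # Remove the drawn class: subtract its (normalized) probability
        # from every CDF entry starting at idx, then renormalize.
        p = probs[idx] / sum(probs)
        cdf = [c - p if i >= idx else c for i, c in enumerate(cdf)]
        cdf = [c / cdf[-1] for c in cdf]
    return classes

print(multinomial_without_replacement([0.1, 0.5, 0.4], [0.3, 0.9]))  # [1, 2]
```

The first draw (0.3) selects class 1 and renormalizes the CDF to approximately ``[0.2, 0.2, 1.0]``, exactly as in the walk-through; the second draw then picks from the remaining classes.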
**Attributes**:

@@ -125,13 +125,13 @@ Example 3 - 1D tensor, without replacement

**Inputs**:

* **1**: ``probs`` - A 1D or 2D tensor of type `T_IN` and shape `[class_size]` or `[batch_size, class_size]` with probabilities. Allowed values depend on the *log_probs* attribute. The values are internally normalized to have values in the range of `[0, 1]` with the sum of all probabilities in the given batch equal to 1. **Required.**
* **1**: ``probs`` - A 2D tensor of type `T_IN` and shape `[batch_size, class_size]` with probabilities. Allowed values depend on the *log_probs* attribute. The values are internally normalized to have values in the range of `[0, 1]` with the sum of all probabilities in the given batch equal to 1. **Required.**

* **2**: ``num_samples`` - A scalar or 1D tensor with a single element of type `T_SAMPLES` specifying the number of samples to draw from the multinomial distribution. **Required.**

**Outputs**:

* **1**: ``output`` - A tensor with type specified by the attribute *convert_type* and shape depending on the rank of *probs*, either ``[num_samples]`` for one-dimensional *probs* or ``[batch_size, num_samples]`` for the two-dimensional one.
* **1**: ``output`` - A tensor with type specified by the attribute *convert_type* and shape ``[batch_size, num_samples]``.

**Types**

@@ -139,7 +139,7 @@ Example 3 - 1D tensor, without replacement
* **T_SAMPLES**: 32-bit or 64-bit integers.

*Example 1: 1D input tensor.*
*Example 1: 2D input tensor with one batch.*

.. code-block:: xml
   :force:

@@ -147,19 +147,21 @@ Example 3 - 1D tensor, without replacement
   <layer ... name="Multinomial" type="Multinomial">
       <data convert_type="f32", with_replacement="true", log_probs="false", global_seed="234", op_seed="148"/>
       <input>
           <port id="0" precision="FP32"> <!-- probs value: [0.1, 0.5, 0.4] -->
           <port id="0" precision="FP32"> <!-- probs value: [[0.1, 0.5, 0.4]] -->
               <dim>1</dim> <!-- batch size of 1 -->
               <dim>3</dim>
           </port>
           <port id="1" precision="I32"/> <!-- num_samples value: 5 -->
       </input>
       <output>
           <port id="3" precision="FP32" names="Multinomial:0">
               <dim>5</dim>
           <port id="3" precision="I32" names="Multinomial:0">
               <dim>1</dim> <!-- dimension depends on input batch size -->
               <dim>5</dim> <!-- dimension depends on num_samples -->
           </port>
       </output>
   </layer>

*Example 2: 2D input tensor.*
*Example 2: 2D input tensor with multiple batches.*

.. code-block:: xml
   :force:

@@ -174,14 +176,14 @@ Example 3 - 1D tensor, without replacement
           <port id="1" precision="I32"/> <!-- num_samples value: 10 -->
       </input>
       <output>
           <port id="3" precision="FP32" names="Multinomial:0">
           <port id="3" precision="I32" names="Multinomial:0">
               <dim>2</dim> <!-- dimension depends on input batch size -->
               <dim>10</dim> <!-- dimension depends on num_samples -->
           </port>
       </output>
   </layer>

*Example 3: 1D input tensor without replacement.*
*Example 3: 2D input tensor without replacement.*

.. code-block:: xml
   :force:

@@ -189,16 +191,18 @@ Example 3 - 1D tensor, without replacement
   <layer ... name="Multinomial" type="Multinomial">
       <data convert_type="f32", with_replacement="false", log_probs="false", global_seed="234", op_seed="148"/>
       <input>
           <port id="0" precision="FP32"> <!-- probs value: [0.1, 0.5, 0.4] -->
           <port id="0" precision="FP32"> <!-- probs value: [[0.1, 0.5, 0.4]] -->
               <dim>2</dim> <!-- batch size of 2 -->
               <dim>3</dim>
           </port>
           <port id="1" precision="I32"/> <!-- num_samples value: 2 -->
       </input>
       <output>
           <port id="3" precision="FP32" names="Multinomial:0">
           <port id="3" precision="I32" names="Multinomial:0">
               <dim>2</dim> <!-- batch size of 2 -->
               <dim>2</dim> <!-- 2 unique samples of classes -->
           </port>
       </output>
   </layer>

@endsphinxdirective
@endsphinxdirective
@@ -8,6 +8,7 @@

   OpenVINO Development Tools package <openvino_docs_install_guides_install_dev_tools>
   Model Optimizer / Conversion API <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>
   Deploy Application with Deployment Manager <openvino_docs_install_guides_deployment_manager_tool>
   OpenVINO API 2.0 transition <openvino_2_0_transition_guide>
   Open Model ZOO <model_zoo>
   Apache MXNet, Caffe, and Kaldi <mxnet_caffe_kaldi>
@@ -45,14 +46,21 @@ offering.
when all major model frameworks became supported directly. For converting model
files explicitly, it has been replaced with a more lightweight and efficient
solution, the OpenVINO Converter (launched with OpenVINO 2023.1).

| :doc:`See how to use OVC <openvino_docs_model_processing_introduction>`
| :doc:`See how to transition from the legacy solution <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`

| **OpenVINO Deployment Manager**
| *New solution:* the tool is no longer needed
| *Old solution:* discontinuation planned for OpenVINO 2024.0
|
| It is recommended to explore alternative deployment solutions available in OpenVINO.
| :doc:`See how to deploy locally <openvino_deployment_guide>`


| **Open Model ZOO**
| *New solution:* users are encouraged to use public model repositories
| *Old solution:* discontinuation planned for OpenVINO 2024.0
| *Old solution:* discontinuation planned for OpenVINO 2025.0
|
| Open Model ZOO provided a collection of models prepared for use with OpenVINO,
  and a small set of tools enabling a level of automation for the process.

@@ -77,7 +85,7 @@ offering.

| **Post-training Optimization Tool (POT)**
| *New solution:* NNCF extended in OpenVINO 2023.0
| *Old solution:* POT discontinuation planned for 2024
| *Old solution:* POT discontinuation planned for 2024.0
|
| Neural Network Compression Framework (NNCF) now offers the same functionality as POT,
  in addition to its original feature set. It is currently the default tool for performing
@@ -86,6 +94,7 @@ offering.
| :doc:`See how to use NNCF for model optimization <openvino_docs_model_optimization_guide>`
| `Check the NNCF GitHub project, including documentation <https://github.com/openvinotoolkit/nncf>`__

| **Old Inference API 1.0**
| *New solution:* API 2.0 launched in OpenVINO 2022.1
| *Old solution:* discontinuation planned for OpenVINO 2024.0
@@ -94,6 +103,7 @@ offering.
  used but is not recommended. Its discontinuation is planned for 2024.
| :doc:`See how to transition to API 2.0 <openvino_2_0_transition_guide>`

| **Compile tool**
| *New solution:* the tool is no longer needed
| *Old solution:* deprecated in OpenVINO 2023.0
@@ -101,21 +111,21 @@ offering.
| Compile tool is now deprecated. If you need to compile a model for inference on
  a specific device, use the following script:

.. tab-set::

   .. tab-item:: Python
.. tab-set::

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/export_compiled_model.py
         :language: python
         :fragment: [export_compiled_model]

   .. tab-item:: C++
         :language: python
         :fragment: [export_compiled_model]

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/export_compiled_model.cpp
         :language: cpp
         :fragment: [export_compiled_model]
         :language: cpp
         :fragment: [export_compiled_model]

| :doc:`see which devices support import / export <openvino_docs_OV_UG_Working_with_devices>`
| :doc:`Learn more on preprocessing steps <openvino_docs_OV_UG_Preprocessing_Overview>`

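For orientation, a script like the one referenced above typically follows this shape. This is a hedged sketch, not the exact snippet from ``docs/snippets/export_compiled_model.py``; it assumes an OpenVINO Python API 2.0 installation, an existing ``model.xml``, and that ``CompiledModel.export_model`` / ``Core.import_model`` are available for the target device (the file names are illustrative):

```python
# Sketch: compiling a model and exporting the compiled blob, replacing
# the deprecated compile tool. All file names here are illustrative.
import openvino.runtime as ov

core = ov.Core()
compiled_model = core.compile_model("model.xml", "CPU")

# Serialize the device-specific compiled representation...
with open("model.blob", "wb") as f:
    f.write(compiled_model.export_model())

# ...and restore it later without recompiling from IR.
with open("model.blob", "rb") as f:
    imported_model = core.import_model(f.read(), "CPU")
```

Whether export/import is supported depends on the device; see the devices page linked above.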
@@ -149,7 +149,7 @@ For example, to install and configure dependencies required for working with TensorFlow:

Model conversion API support for the TensorFlow 1.x environment has been deprecated. Use the ``tensorflow2`` parameter to install a TensorFlow 2.x environment that can convert both TensorFlow 1.x and 2.x models. If your model is not compatible with the TensorFlow 2.x environment, use the ``tensorflow`` parameter to install the TensorFlow 1.x environment. The TF 1.x environment is provided only for legacy compatibility reasons.

For more details on the openvino-dev PyPI package, see `pypi.org <https://pypi.org/project/openvino-dev/2023.1.0>`__ .
For more details on the openvino-dev PyPI package, see `pypi.org <https://pypi.org/project/openvino-dev/2023.2.0>`__ .

Step 5. Test the Installation
+++++++++++++++++++++++++++++

@@ -12,8 +12,11 @@

   openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
   openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer

In the 2023.1 OpenVINO release, a new OVC (OpenVINO Model Converter) tool was introduced with the corresponding Python API: the ``openvino.convert_model`` method. ``ovc`` and ``openvino.convert_model`` represent
a lightweight alternative to ``mo`` and ``openvino.tools.mo.convert_model``, which are now considered legacy API. In this article, all the differences between ``mo`` and ``ovc`` are summarized and a transition guide from the legacy API to the new API is provided.
In the 2023.1 OpenVINO release, OpenVINO Model Converter was introduced with the corresponding
Python API: the ``openvino.convert_model`` method. ``ovc`` and ``openvino.convert_model`` represent
a lightweight alternative to ``mo`` and ``openvino.tools.mo.convert_model``, which are now
considered legacy API. In this article, all the differences between ``mo`` and ``ovc`` are summarized
and a transition guide from the legacy API to the new API is provided.

Parameters Comparison
#####################

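In code, the transition described above is typically a one-line change. The following is a hedged sketch rather than an official snippet; it assumes OpenVINO 2023.1 or newer is installed and that a ``model.onnx`` file (an illustrative name) exists:

```python
# Legacy API (deprecated, kept for backwards compatibility):
#   from openvino.tools.mo import convert_model
#   ov_model = convert_model("model.onnx")

# New API, available since OpenVINO 2023.1:
import openvino as ov

ov_model = ov.convert_model("model.onnx")
# save_model writes OpenVINO IR; FP16 weight compression is on by default.
ov.save_model(ov_model, "model.xml")
```

The command-line counterpart is ``ovc model.onnx`` in place of ``mo --input_model model.onnx``.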
@@ -6,13 +6,13 @@
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model
   openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model
   openvino_docs_MO_DG_Additional_Optimization_Use_Cases
   openvino_docs_MO_DG_FP16_Compression
   openvino_docs_MO_DG_Python_API
   openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
   Supported_Model_Formats_MO_DG
   Setting Input Shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>
   Cutting Off Parts of a Model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>
   Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>
   Compressing a Model to FP16 <openvino_docs_MO_DG_FP16_Compression>
   Convert Models Represented as Python Objects <openvino_docs_MO_DG_Python_API>
   Model Optimizer Frequently Asked Questions <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>
   Supported Model Formats <Supported_Model_Formats_MO_DG>

.. meta::
   :description: Model conversion (MO) furthers the transition between training and

@@ -1,7 +1,13 @@
# Convert Models Represented as Python Objects {#openvino_docs_MO_DG_Python_API}
# [LEGACY] Convert Models Represented as Python Objects {#openvino_docs_MO_DG_Python_API}

@sphinxdirective

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Model Preparation <openvino_docs_model_processing_introduction>` article.

Model conversion API is represented by the ``convert_model()`` method in the ``openvino.tools.mo`` namespace. ``convert_model()`` is compatible with types from ``openvino.runtime``, such as ``PartialShape``, ``Layout``, ``Type``, etc.

``convert_model()`` provides the functionality available from the command-line tool, plus the ability to pass Python model objects, such as a PyTorch model or TensorFlow Keras model, directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts). In addition to input models consumed directly from Python, ``convert_model`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO.

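Passing a live framework object, as described above, might look like this. This is a hypothetical sketch of the deprecated API; it assumes an environment where ``openvino-dev`` (2023.x or earlier), PyTorch, and torchvision are installed, and the model and shape are illustrative choices:

```python
# Sketch of the legacy in-memory conversion path (deprecated).
import torchvision
from openvino.tools.mo import convert_model

# A live PyTorch model object is passed directly - no file on disk needed.
pt_model = torchvision.models.resnet18(weights=None)
ov_model = convert_model(pt_model, input_shape=[1, 3, 224, 224])
```

In current releases the equivalent call is ``openvino.convert_model(pt_model)``.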
@@ -1,7 +1,11 @@
# Cutting Off Parts of a Model {#openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model}
# [LEGACY] Cutting Off Parts of a Model {#openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model}

@sphinxdirective

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

Sometimes, it is necessary to remove parts of a model when converting it to OpenVINO IR. This chapter describes how to do it, using model conversion API parameters. Model cutting applies mostly to TensorFlow models, which is why TensorFlow will be used in this chapter's examples, but it may also be useful for other frameworks.

Purpose of Model Cutting

@@ -1,7 +1,13 @@
# Embedding Preprocessing Computation {#openvino_docs_MO_DG_Additional_Optimization_Use_Cases}
# [LEGACY] Embedding Preprocessing Computation {#openvino_docs_MO_DG_Additional_Optimization_Use_Cases}

@sphinxdirective

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Conversion Parameters <openvino_docs_OV_Converter_UG_Conversion_Options>` article.

Input data for inference can be different from the training dataset and requires
additional preprocessing before inference. To accelerate the whole pipeline, including
preprocessing and inference, model conversion API provides special parameters such as ``mean_values``,

@@ -1,7 +1,13 @@
# Compressing a Model to FP16 {#openvino_docs_MO_DG_FP16_Compression}
# [LEGACY] Compressing a Model to FP16 {#openvino_docs_MO_DG_FP16_Compression}

@sphinxdirective

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Conversion Parameters <openvino_docs_OV_Converter_UG_Conversion_Options>` article.

By default, when IR is saved, all relevant floating-point weights are compressed to the ``FP16`` data type during model conversion.
It results in creating a "compressed ``FP16`` model", which occupies about half of
the original space in the file system. The compression may introduce a minor drop in accuracy,

@@ -1,4 +1,4 @@
# Model Optimizer Frequently Asked Questions {#openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ}
# [LEGACY] Model Optimizer Frequently Asked Questions {#openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ}

@sphinxdirective

@@ -1,8 +1,15 @@
# Setting Input Shapes {#openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model}
# [LEGACY] Setting Input Shapes {#openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model}

@sphinxdirective

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Setting Input Shapes <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model>` article.

With model conversion API you can increase your model's efficiency by providing an additional shape definition with these two parameters: ``input_shape`` and ``static_shape``.

@sphinxdirective

.. meta::
   :description: Learn how to increase the efficiency of a model with MO by providing an additional shape definition with the input_shape and static_shape parameters.

@@ -1,4 +1,4 @@
# Supported Model Formats {#Supported_Model_Formats_MO_DG}
# [LEGACY] Supported Model Formats {#Supported_Model_Formats_MO_DG}

@sphinxdirective

@@ -6,17 +6,22 @@
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite
   openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
   openvino_docs_MO_DG_prepare_model_convert_model_tutorials
   Converting a TensorFlow Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>
   Converting an ONNX Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>
   Converting a PyTorch Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch>
   Converting a TensorFlow Lite Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>
   Converting a PaddlePaddle Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>
   Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>

.. meta::
   :description: Learn about supported model formats and the methods used to convert, read, and compile them in OpenVINO™.

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Supported Model Formats <Supported_Model_Formats>` article.

**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR <openvino_ir>` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive.

**PyTorch, TensorFlow, ONNX, and PaddlePaddle** - can be used with OpenVINO Runtime API directly,

@@ -1,4 +1,4 @@
# Converting an ONNX Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX}
# [LEGACY] Converting an ONNX Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX}

@sphinxdirective

@@ -6,6 +6,14 @@
   :description: Learn how to convert a model from the
                 ONNX format to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting an ONNX Model <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_ONNX>` article.

.. note:: ONNX models are supported via FrontEnd API. You may skip conversion to IR and read models directly by the OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.

Converting an ONNX Model

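The FrontEnd path mentioned in the note above amounts to reading the ONNX file directly, with no explicit conversion step. A minimal sketch, assuming OpenVINO is installed and a ``model.onnx`` file (an illustrative name) is available:

```python
# Sketch: running an ONNX model without converting it to IR first.
import openvino as ov

core = ov.Core()
model = core.read_model("model.onnx")           # FrontEnd reads ONNX directly
compiled_model = core.compile_model(model, "CPU")
```

Conversion to IR remains useful when you need pruning, embedded preprocessing, or Python conversion extensions, as the note says.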
@@ -1,4 +1,4 @@
# Converting a PaddlePaddle Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle}
# [LEGACY] Converting a PaddlePaddle Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle}

@sphinxdirective

@@ -7,6 +7,13 @@
                 PaddlePaddle format to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting a PaddlePaddle Model <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_Paddle>` article.

This page provides general instructions on how to convert a model from the PaddlePaddle format to the OpenVINO IR format using Model Optimizer. The instructions differ depending on the PaddlePaddle model format.

.. note:: PaddlePaddle models are supported via FrontEnd API. You may skip conversion to IR and read models directly by the OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.

@@ -1,4 +1,4 @@
# Converting a PyTorch Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch}
# [LEGACY] Converting a PyTorch Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch}

@sphinxdirective

@@ -7,6 +7,12 @@
                 PyTorch format to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting a PyTorch Model <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_PyTorch>` article.

This page provides instructions on how to convert a model from the PyTorch format to the OpenVINO IR format.

The conversion is a required step to run inference using OpenVINO API.

@@ -1,4 +1,4 @@
# Converting a TensorFlow Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow}
# [LEGACY] Converting a TensorFlow Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow}

@sphinxdirective

@@ -6,6 +6,12 @@
   :description: Learn how to convert a model from a
                 TensorFlow format to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting a TensorFlow Model <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` article.

.. note:: TensorFlow models are supported via :doc:`FrontEnd API <openvino_docs_MO_DG_TensorFlow_Frontend>`. You may skip conversion to IR and read models directly by the OpenVINO runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as new custom inputs/outputs in model pruning, adding pre-processing, or using Python conversion extensions.

@@ -1,4 +1,4 @@
# Converting a TensorFlow Lite Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite}
# [LEGACY] Converting a TensorFlow Lite Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite}

@sphinxdirective

@@ -6,6 +6,11 @@
   :description: Learn how to convert a model from a
                 TensorFlow Lite format to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Converting a TensorFlow Lite Model <openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>` article.

To convert a TensorFlow Lite model, use the ``mo`` script and specify the path to the input ``.tflite`` model file:

@@ -1,4 +1,4 @@
# Model Conversion Tutorials {#openvino_docs_MO_DG_prepare_model_convert_model_tutorials}
# [LEGACY] Model Conversion Tutorials {#openvino_docs_MO_DG_prepare_model_convert_model_tutorials}

@sphinxdirective

@@ -37,6 +37,12 @@
   :description: Get to know conversion methods for specific TensorFlow, ONNX, PyTorch, MXNet, and Kaldi models.

.. danger::

   The code described in the tutorials has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

This section provides a set of tutorials that demonstrate conversion methods for specific
TensorFlow, ONNX, and PyTorch models. Note that these instructions do not cover all use
cases and may not reflect your particular needs.

@@ -8,6 +8,12 @@
                 OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

This tutorial explains how to convert the Attention OCR (AOCR) model from the `TensorFlow Attention OCR repository <https://github.com/emedvedev/attention-ocr>`__ to the Intermediate Representation (IR).

Extracting a Model from ``aocr`` Library

@@ -7,6 +7,12 @@
                 from TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

Pretrained models for BERT (Bidirectional Encoder Representations from Transformers) are
`publicly available <https://github.com/google-research/bert>`__.

@@ -6,6 +6,11 @@
   :description: Learn how to convert a BERT-NER model
                 from PyTorch to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** To avoid relying on a legacy solution, do not use it. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

The goal of this article is to present a step-by-step guide on how to convert the PyTorch BERT-NER model to OpenVINO IR. First, you need to download the model and convert it to ONNX.

@@ -7,6 +7,12 @@
|
||||
from TensorFlow to the OpenVINO Intermediate Representation.
|
||||
|
||||
|
||||
.. danger::
|
||||
|
||||
The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.
|
||||
|
||||
This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.
|
||||
|
||||
This tutorial explains how to convert a CRNN model to OpenVINO™ Intermediate Representation (IR).
|
||||
|
||||
There are several public versions of TensorFlow CRNN model implementation available on GitHub. This tutorial explains how to convert the model from
|
||||
|
||||
@@ -7,6 +7,12 @@
|
||||
model from PyTorch to the OpenVINO Intermediate Representation.
|
||||
|
||||
|
||||
.. danger::
|
||||
|
||||
The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.
|
||||
|
||||
This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.
|
||||
|
||||
The goal of this article is to present a step-by-step guide on how to convert a PyTorch Cascade RCNN R-101 model to OpenVINO IR. First, you need to download the model and convert it to ONNX.
|
||||
|
||||
Downloading and Converting Model to ONNX

@@ -6,7 +6,12 @@

:description: Learn how to convert a DeepSpeech model
from TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

The `DeepSpeech project <https://github.com/mozilla/DeepSpeech>`__ provides an engine to train speech-to-text models.

Downloading the Pretrained DeepSpeech Model

@@ -7,6 +7,12 @@

from TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

This tutorial explains how to convert EfficientDet public object detection models to the Intermediate Representation (IR).

.. _efficientdet-to-ir:

@@ -6,7 +6,12 @@

:description: Learn how to convert an F3Net model
from PyTorch to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

`F3Net <https://github.com/weijun88/F3Net>`__: Fusion, Feedback and Focus for Salient Object Detection

Cloning the F3Net Repository

@@ -6,7 +6,12 @@

:description: Learn how to convert a FaceNet model
from TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Supported Model Formats <Supported_Model_Formats>` article.

`Public pre-trained FaceNet models <https://github.com/davidsandberg/facenet#pre-trained-models>`__ contain both the training
and inference parts of the graph. Switching between these two states is managed with a placeholder value.
Intermediate Representation (IR) models are intended for inference, which means that the training part is redundant.

@@ -6,7 +6,12 @@

:description: Learn how to convert a Faster R-CNN model
from ONNX to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

The instructions below are applicable **only** to the Faster R-CNN model converted to the ONNX file format from the `maskrcnn-benchmark model <https://github.com/facebookresearch/maskrcnn-benchmark>`__:

1. Download the pretrained model file from `onnx/models <https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn>`__ (commit-SHA: 8883e49e68de7b43e263d56b9ed156dfa1e03117).

@@ -6,7 +6,12 @@

:description: Learn how to convert a GNMT model
from TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

This tutorial explains how to convert a Google Neural Machine Translation (GNMT) model to the Intermediate Representation (IR).

There are several public versions of the TensorFlow GNMT model implementation available on GitHub. This tutorial explains how to convert the GNMT model from the `TensorFlow Neural Machine Translation (NMT) repository <https://github.com/tensorflow/nmt>`__ to the IR.

@@ -6,6 +6,11 @@

:description: Learn how to convert a pre-trained GPT-2
model from ONNX to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

The `public pre-trained GPT-2 model <https://github.com/onnx/models/tree/master/text/machine_comprehension/gpt-2>`__ is a large
transformer-based language model with a simple objective: predict the next word, given all of the previous words within some text.

@@ -6,6 +6,11 @@

:description: Learn how to convert a pre-trained Mask
R-CNN model from ONNX to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

The instructions below are applicable **only** to the Mask R-CNN model converted to the ONNX file format from the `maskrcnn-benchmark model <https://github.com/facebookresearch/maskrcnn-benchmark>`__.

@@ -7,6 +7,11 @@

Filtering Model from TensorFlow to the OpenVINO Intermediate
Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

This tutorial explains how to convert a Neural Collaborative Filtering (NCF) model to the OpenVINO Intermediate Representation.

@@ -8,6 +8,12 @@

Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

* Starting with the 2022.1 release, model conversion API can convert the TensorFlow Object Detection API Faster and Mask RCNN topologies differently. By default, model conversion adds the "Proposal" operation to the generated IR. This operation needs an additional model input named "image_info", which should be fed with several values describing the preprocessing applied to the input image (refer to the :doc:`Proposal <openvino_docs_ops_detection_Proposal_4>` operation specification for more information). However, this input is redundant for models trained and inferred with equal-size images. Model conversion API can generate IR for such models and insert the :doc:`DetectionOutput <openvino_docs_ops_detection_DetectionOutput_1>` operation instead of ``Proposal``. The ``DetectionOutput`` operation does not require the additional "image_info" model input. Moreover, for some models the produced inference results are closer to the original TensorFlow model. To trigger the new behavior, set the "operation_to_add" attribute in the corresponding JSON transformation configuration file to "DetectionOutput" instead of the default "Proposal".
* Starting with the 2021.1 release, model conversion API converts the TensorFlow Object Detection API SSD, Faster and Mask RCNN topologies keeping shape-calculating sub-graphs by default, so topologies can be reshaped in the OpenVINO Runtime using the dedicated reshape API. Refer to the :doc:`Using Shape Inference <openvino_docs_OV_UG_ShapeInference>` guide for more information on how to use this feature. It is possible to change both the spatial dimensions of the input image and the batch size.
* To generate IRs for TF 1 SSD topologies, model conversion API creates a number of ``PriorBoxClustered`` operations instead of a constant node with prior boxes calculated for the particular input image size. This change allows you to reshape the topology in the OpenVINO Runtime using the dedicated API. Reshaping is supported for all SSD topologies except FPNs, which contain hardcoded shapes for some operations, preventing changes to the topology input shape.
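The "operation_to_add" switch described in the first bullet can be sketched in a few lines. The JSON structure below is hypothetical (the real transformation configuration files ship with OpenVINO and may be organized differently); only the attribute name and its two values come from the text above:

```python
import json

# Illustrative (hypothetical) fragment of a TF Object Detection API
# transformation configuration; the surrounding structure is an assumption.
config_text = """
[
    {
        "id": "ObjectDetectionAPIProposalReplacement",
        "custom_attributes": {
            "operation_to_add": "Proposal"
        }
    }
]
"""

config = json.loads(config_text)
# Switch the generated operation from "Proposal" to "DetectionOutput",
# which removes the need for the extra "image_info" input.
for entry in config:
    attrs = entry.get("custom_attributes", {})
    if attrs.get("operation_to_add") == "Proposal":
        attrs["operation_to_add"] = "DetectionOutput"

print(config[0]["custom_attributes"]["operation_to_add"])  # DetectionOutput
```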

@@ -6,7 +6,12 @@

:description: Learn how to convert a QuartzNet model
from PyTorch to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

The `NeMo project <https://github.com/NVIDIA/NeMo>`__ provides the QuartzNet model.

Downloading the Pre-trained QuartzNet Model

@@ -6,7 +6,12 @@

:description: Learn how to convert an RCAN model
from PyTorch to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

`RCAN <https://github.com/yulunzhang/RCAN>`__: Image Super-Resolution Using Very Deep Residual Channel Attention Networks

Downloading and Converting the Model to ONNX

@@ -6,7 +6,12 @@

:description: Learn how to convert an RNN-T model
from PyTorch to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

This guide covers conversion of the RNN-T model from the `MLCommons <https://github.com/mlcommons>`__ repository. Follow
the instructions below to export a PyTorch model into ONNX before converting it to IR:

@@ -7,10 +7,16 @@

from TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

This tutorial explains how to convert a RetinaNet model to the Intermediate Representation (IR).

The `public RetinaNet model <https://github.com/fizyr/keras-retinanet>`__ does not contain pretrained TensorFlow weights.
To convert this model to the TensorFlow format, follow the `Reproduce Keras to TensorFlow Conversion tutorial <https://docs.openvino.ai/2023.1/omz_models_model_retinanet_tf.html>`__.
To convert this model to the TensorFlow format, follow the `Reproduce Keras to TensorFlow Conversion tutorial <https://docs.openvino.ai/2023.2/omz_models_model_retinanet_tf.html>`__.

After converting the model to TensorFlow format, run the following command:

@@ -7,6 +7,11 @@

Classification model from TensorFlow to the OpenVINO
Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

The `TensorFlow-Slim Image Classification Model Library <https://github.com/tensorflow/models/tree/master/research/slim/README.md>`__ is a library to define, train, and evaluate classification models in TensorFlow. It contains Python scripts defining the classification topologies, together with checkpoint files for several pre-trained classification topologies. To convert a TensorFlow-Slim library model, complete the following steps:

@@ -7,6 +7,12 @@

models from TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

The Wide and Deep model is a combination of wide and deep parts for memorization and generalization of object features, respectively.
These models can contain different types of object features, such as numerical, categorical, sparse, and sequential features. These feature types are specified
through the TensorFlow ``tf.feature_column`` API. The table below presents which feature types are supported by the OpenVINO toolkit.

@@ -6,7 +6,12 @@

:description: Learn how to convert an XLNet model from
TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

Pretrained models for XLNet are
`publicly available <https://github.com/zihangdai/xlnet>`__.

@@ -7,6 +7,12 @@

from PyTorch to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

You Only Look At CoefficienTs (YOLACT) is a simple, fully convolutional model for real-time instance segmentation.
The PyTorch implementation is publicly available in `this GitHub repository <https://github.com/dbolya/yolact>`__.
The YOLACT++ model is not supported, because it uses deformable convolutional layers that cannot be represented in ONNX format.

@@ -6,6 +6,11 @@

:description: Learn how to convert YOLO models from
TensorFlow to the OpenVINO Intermediate Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

This document explains how to convert the real-time object detection YOLOv1, YOLOv2, YOLOv3, and YOLOv4 public models to the Intermediate Representation (IR). All YOLO models are originally implemented in the DarkNet framework and consist of two files:

@@ -7,7 +7,12 @@

Model on One Billion Word Benchmark to the OpenVINO Intermediate
Representation.

.. danger::

   The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications.

   This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <tutorials>`.

Downloading a Pre-trained Language Model on One Billion Word Benchmark
######################################################################

@@ -1,4 +1,4 @@

# (Deprecated) Post-training Quantization with POT {#pot_introduction}
# [Deprecated] Post-training Quantization with POT {#pot_introduction}

@sphinxdirective

@@ -12,15 +12,15 @@

API Reference <pot_compression_api_README>
Command-line Interface <pot_compression_cli_README>
Examples <pot_examples_description>
pot_docs_FrequentlyAskedQuestions
Post-training Optimization Tool FAQ <pot_docs_FrequentlyAskedQuestions>
(Experimental) Protecting Model <pot_ranger_README>

.. note:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.
.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.

For the needs of post-training optimization, OpenVINO™ provides a **Post-training Optimization Tool (POT)**
For the needs of post-training optimization, OpenVINO provides a **Post-training Optimization Tool (POT)**
which supports the **uniform integer quantization** method. This method allows moving from floating-point precision
to integer precision (for example, 8-bit) for weights and activations during inference time. It helps to reduce
the model size, memory footprint and latency, as well as improve the computational efficiency, using integer arithmetic.
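As a rough illustration of the uniform integer quantization described above, here is a minimal pure-Python sketch of asymmetric 8-bit quantization with a scale and zero point. It is illustrative only, not the actual POT algorithm:

```python
# Minimal sketch of uniform (asymmetric) integer quantization: map a float
# range [lo, hi] onto integers [0, 255] via a scale and zero point.
def quantize(values, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against a constant input
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Each restored value stays within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Integer arithmetic on the quantized values is what yields the size and latency savings mentioned above; the dequantized values only approximate the originals, which is why accuracy validation follows quantization.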

@@ -1,7 +1,9 @@

# API Reference {#pot_compression_api_README}
# [Deprecated] API Reference {#pot_compression_api_README}

@sphinxdirective

.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.

The Post-training Optimization Tool API provides a full set of interfaces and helpers that allow users to implement a custom optimization pipeline for various types of DL models, including cascaded or compound models. Below is a full specification of this API:

DataLoader
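As a loose illustration, a data loader in this style is simply an indexable dataset. The class below is a pure-Python stand-in, not the actual ``openvino.tools.pot.DataLoader`` (whose exact item format varies between POT releases):

```python
from abc import ABC, abstractmethod

# Pure-Python stand-in for a POT-style DataLoader: an indexable dataset
# exposing __len__ and __getitem__. NOT the real openvino.tools.pot class.
class DataLoaderSketch(ABC):
    @abstractmethod
    def __len__(self):
        """Number of samples in the dataset."""

    @abstractmethod
    def __getitem__(self, index):
        """Return the sample (and, depending on the release, its annotation)."""

class ListDataLoader(DataLoaderSketch):
    """Wraps an in-memory list of (annotation, sample) pairs."""

    def __init__(self, items):
        self._items = items

    def __len__(self):
        return len(self._items)

    def __getitem__(self, index):
        if index >= len(self):
            raise IndexError(index)
        return self._items[index]

loader = ListDataLoader([(0, "sample-a"), (1, "sample-b")])
assert len(loader) == 2
assert loader[1] == (1, "sample-b")
```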

@@ -1,4 +1,4 @@

# Use Post-Training Optimization Tool Command-Line Interface (Model Zoo flow){#pot_compression_cli_README}
# [Deprecated] Use Post-Training Optimization Tool Command-Line Interface (Model Zoo flow){#pot_compression_cli_README}

@sphinxdirective

@@ -7,9 +7,10 @@

:hidden:

Simplified Mode <pot_docs_simplified_mode>
pot_configs_README
Configuration File Description <pot_configs_README>

.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.

Introduction
####################

@@ -1,7 +1,9 @@

# Configuration File Description {#pot_configs_README}
# [Deprecated] Configuration File Description {#pot_configs_README}

@sphinxdirective

.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.

The tool is designed to work with a configuration file where all the parameters required for the optimization are specified. These parameters are organized as a dictionary and stored in
a JSON file. The JSON file allows comments, which are supported by the ``jstyleson`` Python package.
Logically, all parameters are divided into three groups:
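To illustrate why comment support matters for such configuration files, the sketch below strips ``//`` line comments before parsing, which is roughly what a comment-tolerant parser such as ``jstyleson`` enables. The stripping here is deliberately naive (it would break on ``//`` inside string values, e.g. URLs), and the configuration content is illustrative, not a complete POT config:

```python
import json
import re

# A commented configuration fragment; plain json.loads() rejects it.
commented = """
{
    // model parameters (illustrative field names)
    "model": {"model_name": "example", "model": "example.xml"},
    "compression": {
        "algorithms": [{"name": "DefaultQuantization"}]  // engine-independent
    }
}
"""

try:
    json.loads(commented)
    parsed_directly = True
except json.JSONDecodeError:
    parsed_directly = False  # standard JSON has no comment syntax

# Naive comment stripping, standing in for a comment-tolerant parser.
stripped = re.sub(r"//[^\n]*", "", commented)
config = json.loads(stripped)

assert not parsed_directly
assert config["compression"]["algorithms"][0]["name"] == "DefaultQuantization"
```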

@@ -1,7 +1,9 @@

# Optimization with Simplified Mode {#pot_docs_simplified_mode}
# [Deprecated] Optimization with Simplified Mode {#pot_docs_simplified_mode}

@sphinxdirective

.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.

Introduction
####################

@@ -1,4 +1,4 @@

# Examples {#pot_examples_description}
# [Deprecated] Examples {#pot_examples_description}

@sphinxdirective

@@ -9,6 +9,7 @@

API Examples <pot_example_README>
Command-line Example <pot_configs_examples_README>

.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.

This section provides a set of examples that demonstrate how to apply the post-training optimization methods to various models from different domains. It contains optimization recipes for specific models which do not necessarily cover your case, but should be sufficient to reuse when optimizing custom models:

@@ -1,4 +1,4 @@

# Post-training Optimization Tool API Examples {#pot_example_README}
# [Deprecated] Post-training Optimization Tool API Examples {#pot_example_README}

@sphinxdirective

@@ -13,6 +13,8 @@

Quantizing 3D Segmentation Model <pot_example_3d_segmentation_README>
Quantizing for GNA Device <pot_example_speech_README>

.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.

The Post-training Optimization Tool contains multiple examples that demonstrate how to use its :doc:`API <pot_compression_api_README>`
to optimize DL models. All available examples can be found on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples>`__.

@@ -1,7 +1,9 @@

# Quantizing 3D Segmentation Model {#pot_example_3d_segmentation_README}
# [Deprecated] Quantizing 3D Segmentation Model {#pot_example_3d_segmentation_README}

@sphinxdirective

.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.

This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a 3D segmentation model.
The `Brain Tumor Segmentation <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/brain-tumor-segmentation-0002>`__ model from PyTorch is used for this purpose. A custom ``DataLoader`` is created to load images in NIfTI format from the `Medical Segmentation Decathlon BRATS 2017 <http://medicaldecathlon.com/>`__ dataset for the 3D semantic segmentation task, and an implementation of the Dice Index metric is used for model evaluation. In addition, this example demonstrates how image metadata obtained during image reading and preprocessing can be used to post-process the raw model output. The code of the example is available on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/3d_segmentation>`__.
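For context, the Dice Index mentioned above compares two binary masks as ``2 * |A ∩ B| / (|A| + |B|)``. A minimal sketch over flat Python lists (illustrative only; a real implementation would operate on volumetric arrays):

```python
# Dice index over binary masks: 2 * intersection / (size(A) + size(B)).
# Flat lists of 0/1 values stand in for voxel masks.
def dice_index(pred, target):
    assert len(pred) == len(target)
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return 2.0 * intersection / total

perfect = dice_index([1, 0, 1, 1], [1, 0, 1, 1])   # identical masks -> 1.0
partial = dice_index([1, 1, 0, 0], [1, 0, 1, 0])   # one voxel overlaps -> 0.5
print(perfect, partial)
```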

@@ -1,7 +1,9 @@

# Quantizing Image Classification Model {#pot_example_classification_README}
# [Deprecated] Quantizing Image Classification Model {#pot_example_classification_README}

@sphinxdirective

.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.

This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a classification model.
The `MobilenetV2 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-1.0-224>`__ model from TensorFlow is used for this purpose.
A custom ``DataLoader`` is created to load the `ImageNet <http://www.image-net.org/>`__ classification dataset, and an implementation of the accuracy-at-top-1 metric is used for model evaluation. The code of the example is available on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/classification>`__.
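For context, accuracy at top-1 is the fraction of samples whose highest-scoring class matches the reference label. A minimal sketch with made-up scores:

```python
# Top-1 accuracy: how often the argmax of the score vector equals the label.
def top1_accuracy(score_rows, labels):
    hits = sum(
        max(range(len(scores)), key=scores.__getitem__) == label
        for scores, label in zip(score_rows, labels)
    )
    return hits / len(labels)

scores = [
    [0.1, 0.7, 0.2],  # predicts class 1
    [0.6, 0.3, 0.1],  # predicts class 0
    [0.2, 0.2, 0.6],  # predicts class 2
    [0.5, 0.4, 0.1],  # predicts class 0
]
labels = [1, 0, 2, 2]  # the last prediction is wrong
print(top1_accuracy(scores, labels))  # 0.75
```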
|
||||
|
||||
@@ -1,7 +1,9 @@
|
||||
# Quantizing Cascaded Face detection Model {#pot_example_face_detection_README}
|
||||
# [Deprecated] Quantizing Cascaded Face detection Model {#pot_example_face_detection_README}
|
||||
|
||||
@sphinxdirective
|
||||
|
||||
.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for the post-training quantization instead.
|
||||
|
||||
This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a face detection model.
|
||||
The `MTCNN <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mtcnn>`__ model from Caffe is used for this purpose.
|
||||
A custom ``DataLoader`` is created to load the `WIDER FACE <http://shuoyang1213.me/WIDERFACE/>`__ dataset for a face detection task
|
||||
|
||||
@@ -1,7 +1,9 @@
-# Quantizing Object Detection Model with Accuracy Control {#pot_example_object_detection_README}
+# [Deprecated] Quantizing Object Detection Model with Accuracy Control {#pot_example_object_detection_README}
 
 @sphinxdirective
 
+.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for post-training quantization instead.
+
 This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` to quantize an object detection model in the :doc:`accuracy-aware mode <accuracy_aware_README>`. The `MobileNetV1 FPN <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssd_mobilenet_v1_fpn_coco>`__ model from TensorFlow is used for this purpose. A custom ``DataLoader`` is created to load the `COCO <https://cocodataset.org/>`__ dataset for the object detection task, and an implementation of COCO mAP is used for model evaluation. The code of the example is available on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/object_detection>`__.
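Alongside the ``DataLoader``, the POT API expects a ``Metric`` implementation (COCO mAP in this example). As a standalone illustration of the interface shape (``value``, ``avg_value``, ``update``, ``reset``), here is a toy exact-match accuracy metric — the real mAP computation is far more involved, and this class is an assumption for illustration, not the example's code:

```python
class SimpleAccuracyMetric:
    """Toy metric following the POT Metric interface shape:
    per-sample `value`, running `avg_value`, plus update/reset."""

    def __init__(self):
        self._matches = []

    @property
    def value(self):
        # Metric for the most recently processed sample
        last = self._matches[-1] if self._matches else 0.0
        return {"accuracy": [last]}

    @property
    def avg_value(self):
        # Metric averaged over all processed samples
        avg = sum(self._matches) / len(self._matches) if self._matches else 0.0
        return {"accuracy": avg}

    def update(self, output, target):
        # Exact match stands in for a real detection comparison
        self._matches.append(1.0 if output == target else 0.0)

    def reset(self):
        self._matches = []


m = SimpleAccuracyMetric()
m.update("cat", "cat")
m.update("dog", "cat")
print(m.avg_value)  # {'accuracy': 0.5}
```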
 
 How to prepare the data
@@ -1,7 +1,9 @@
-# Quantizing Semantic Segmentation Model {#pot_example_segmentation_README}
+# [Deprecated] Quantizing Semantic Segmentation Model {#pot_example_segmentation_README}
 
 @sphinxdirective
 
+.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for post-training quantization instead.
+
 This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a segmentation model.
 The `DeepLabV3 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/deeplabv3>`__ model from TensorFlow is used for this purpose.
 A custom ``DataLoader`` is created to load the `Pascal VOC 2012 <http://host.robots.ox.ac.uk/pascal/VOC/voc2012/>`__ dataset for the semantic segmentation task
@@ -1,7 +1,9 @@
-# Quantizing for GNA Device {#pot_example_speech_README}
+# [Deprecated] Quantizing for GNA Device {#pot_example_speech_README}
 
 @sphinxdirective
 
+.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for post-training quantization instead.
+
 This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a speech model for the :doc:`GNA <openvino_docs_OV_UG_supported_plugins_GNA>` device. Quantization for GNA differs from CPU quantization due to device specifics: GNA supports quantized inputs in INT16 and INT32 precision (for activations) and quantized weights in INT8 and INT16 precision.
 
 This example contains pre-selected quantization options based on the DefaultQuantization algorithm, created for models from the `Kaldi <http://kaldi-asr.org/doc/>`__ framework and its data format.
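The device-specific behavior described above is typically selected through the algorithm configuration passed to POT. A sketch of such a configuration as a plain Python structure — the key names follow the DefaultQuantization parameters, but the ``preset`` and ``stat_subset_size`` values are illustrative placeholders, not taken from the example:

```python
# Sketch of a POT algorithm configuration targeting GNA.
# "target_device": "GNA" selects the GNA-specific quantization scheme
# (INT16/INT32 activations, INT8/INT16 weights); the remaining values
# are placeholders for illustration.
gna_quantization_config = [
    {
        "name": "DefaultQuantization",
        "params": {
            "target_device": "GNA",
            "preset": "performance",
            "stat_subset_size": 300,  # samples used for statistics collection
        },
    }
]

print(gna_quantization_config[0]["params"]["target_device"])  # GNA
```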
@@ -1,7 +1,9 @@
-# End-to-end Command-line Interface Example {#pot_configs_examples_README}
+# [Deprecated] End-to-end Command-line Interface Example {#pot_configs_examples_README}
 
 @sphinxdirective
 
+.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for post-training quantization instead.
+
 This tutorial describes an example of running post-training quantization for the **MobileNet v2** model from the PyTorch framework,
 specifically with the DefaultQuantization algorithm.
 The example covers the following steps:
@@ -1,8 +1,8 @@
-# Post-training Optimization Tool FAQ {#pot_docs_FrequentlyAskedQuestions}
+# [Deprecated] Post-training Optimization Tool FAQ {#pot_docs_FrequentlyAskedQuestions}
 
 @sphinxdirective
 
-.. note::
+.. danger::
 
    Post-training Optimization Tool has been deprecated since OpenVINO 2023.0.
    :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for post-training quantization instead.
@@ -1,7 +1,12 @@
-# Experimental: Protecting Deep Learning Model through Range Supervision ("RangeSupervision") {#pot_ranger_README}
+# [Deprecated] Experimental: Protecting Deep Learning Model through Range Supervision ("RangeSupervision") {#pot_ranger_README}
 
 @sphinxdirective
 
+.. danger::
+
+   Post-training Optimization Tool has been deprecated since OpenVINO 2023.0.
+   :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for post-training quantization instead.
+
 Introduction
 ####################
@@ -1,4 +1,4 @@
-# Post-Training Quantization Best Practices {#pot_docs_BestPractices}
+# [Deprecated] Post-Training Quantization Best Practices {#pot_docs_BestPractices}
 
 @sphinxdirective
 
@@ -8,6 +8,7 @@
 
    Saturation Issue <pot_saturation_issue>
 
+.. danger:: Post-training Optimization Tool is deprecated since OpenVINO 2023.0. :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>` is recommended for post-training quantization instead.
 
 The :doc:`Default Quantization <pot_default_quantization_usage>` of the Post-training Optimization Tool (POT) is
 the fastest and easiest way to get a quantized model. In most cases, it requires only an unannotated representative dataset, which makes it a good starting point for model optimization. However, it can lead to significant accuracy deviation in some cases. This article provides tips to address that issue.
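The accuracy deviation mentioned above ultimately comes from rounding and clipping values onto an 8-bit grid. A minimal, self-contained sketch of symmetric INT8 quantization — the scale value here is an illustrative placeholder; real tools such as POT or NNCF derive scales from collected statistics:

```python
def quantize_int8(values, scale):
    """Symmetric INT8 quantization: round each value to the nearest
    multiple of `scale`, then clip to the signed 8-bit range."""
    return [max(-128, min(127, round(v / scale))) for v in values]


def dequantize(q_values, scale):
    """Map INT8 codes back to floating point."""
    return [q * scale for q in q_values]


weights = [0.5, -1.27, 0.333]
scale = 0.01  # placeholder; derived from statistics in real tools
q = quantize_int8(weights, scale)
print(q)  # [50, -127, 33]

# Dequantizing exposes the rounding error quantization introduces:
errors = [abs(w - r) for w, r in zip(weights, dequantize(q, scale))]
print(max(errors) <= scale / 2 + 1e-9)  # True
```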