Compare commits

372 Commits

Author SHA1 Message Date
Ivan Tikhonov
40bf400b18 Add FakeQuantize op support in TS transformations (#17243)
* Add FQ op support in TS transformations

* codestyle

* Mark FQ as supported op in the TS ops list
2023-04-27 15:09:07 +04:00
Nikolay Shchegolev
22bb3af7df [CPU] Disable test case with sporadic failure. (#17256) 2023-04-27 14:06:33 +04:00
Sebastian Golebiewski
c0767a7e27 [DOCS] TensorFlow Lite FrontEnd updating dev docs (#17225)
* update with tflite

* Update index.md

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-27 13:58:25 +04:00
Anastasiia Pnevskaia
59e28f8d0d Disabled tests. (#17231) 2023-04-27 13:39:58 +04:00
Sebastian Golebiewski
40128cded1 update tuts (#17201) 2023-04-27 11:29:19 +02:00
Ryszard Jezierski
8005a3d0b0 Removed unneeded deprecated test code (#16939) 2023-04-26 23:53:10 +04:00
Ryszard Jezierski
561bf6d478 Removed deprecated parser tests (#17151) 2023-04-26 23:51:53 +04:00
Yuan Hu
cecd0e75a6 coverity Uninitialized scalar variable (#17182)
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
2023-04-26 23:49:21 +04:00
Nesterov Alexander
dbaa1f0c0d [ARM CPU] Fix interpolate tests (#17171)
* fix interpolate bug

* fix interpolate bug - some tests

* fix interpolate bug - change init

* fix interpolate bug - shape fix

* fix interpolate bug - shape fix 2

* fix interpolate bug - add assert
2023-04-26 23:28:02 +04:00
Wilson Seok
03a428f50c [GPU] Fix remove redundant reorder to skip reorder fusing when sibling node doesn't support fused padding (#17041)
* initial fix

* add corresponding unit test

* skip reorder fusing when sibling node does not support fused padding

* fix data type of axis for win build

* Revert "fix data type of axis for win build"

This reverts commit 719ea75d7826aafc7bb94c1971586c33a9842f10.

* add static casting for win build
2023-04-26 16:53:23 +00:00
Sun Xiaoxia
7fc65ae3c5 fix threading test sporadic failure (#17230)
Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
2023-04-26 20:27:49 +04:00
Maxim Vafin
10392644e3 [PT FE] Enable stable sort layer tests (#17229)
* [PT FE] Enable stable sort layer tests

* Remove unused code
2023-04-26 18:24:38 +02:00
Ivan Tikhonov
80519162ae Reduce the binary size of transformation lib (#17220)
* Replace opset with op version for TransposeSinking and SmartReshape transformations to reduce binary size

* replace opset with op version in some op_conversions transformations

* codestyle
2023-04-26 19:03:36 +04:00
Ekaterina Aidova
82ff7e17c9 use input parameter for building example_inputs (#17207)
* use input parameter for building example_inputs

* Update tools/mo/openvino/tools/mo/moc_frontend/pytorch_frontend_utils.py
2023-04-26 17:58:06 +04:00
Egor Duplenskii
f1bc402b38 [CPU] Pick fix for oneDNN v3.1 release (#17144) 2023-04-26 17:44:36 +04:00
Wang Wangwang
962df2cdcb [AUTO] Exclude other vendor's GPU device in default candidate list (#17063)
* [AUTO] Plugin takes only Intel dGPU as 1st priority

* Update test case

* Simplify the code

* Support more test cases in GetDeviceList API

* Add notIntelGPU to _deviceBlocklist in AUTO plugin

* Restore some code formats

* Update test cases

* Add some logs to GetValidDevice API

* Simplify the code

---------

Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
2023-04-26 14:42:53 +01:00
Nikolay Shchegolev
c8ac7c9b82 [CPU] Infer_request crashes for SpaceToBatch operation. (#16974)
* [CPU] Infer_request crashes for SpaceToBatch operation.

* Fixes as per comments.

* Fixes as per comments 2.
2023-04-26 17:39:54 +04:00
Vladimir Paramuzov
6ed85178d5 [GPU] Fix layout propagation logic (#17199) 2023-04-26 14:20:48 +01:00
Edward Shogulin
14a14ecd76 [LPT] Precision restriction customization extending: tests (#17196)
* [LPT] Precision restriction customization extending

* comments fix: refactoring

* [LPT] Precision restriction customization extending: tests
2023-04-26 16:53:04 +04:00
Tomasz Adamowicz
546581bcce [Gna][coverity] fixes for issue type AUTO_CAUSES_COPY (#17192)
* [Gna][coverity] fixes for AUTO_CAUSES_COPY CID: 1491505, 1491595, 1502494, 1502500, 1504698, 1504769, 1507058

* update after review

* adding const specifier to auto where needed
2023-04-26 13:32:54 +01:00
Ilya Lavrenov
cfbfa18f34 Fixed WASM build in update docker container / new dependencies (#17224) 2023-04-26 16:32:36 +04:00
Edward Shogulin
e593cf8545 [LPT] Precision restriction customization extending (#17147)
* [LPT] Precision restriction customization extending

* comments fix: refactoring
2023-04-26 13:29:09 +01:00
Alexandra Sidorova
a032d67cc7 [CPU] Fixed enforcebf16 condition for transformation pipeline (#17157)
* [CPU] Fixed enforcebf16 condition for transformation pipeline

* [Snippets][CPU][Tests] Added test with bf16
2023-04-26 16:13:01 +04:00
Irina Efode
ca92eb96ad [CONFORMANCE] Fix Runner on Win (#17221) 2023-04-26 13:03:20 +01:00
Zlobin Vladimir
de30d8523d State single value is used (#15458)
Ticket EISW-60868
2023-04-26 14:50:03 +04:00
Ilya Lavrenov
da91b33763 ARM32 ACL kernels in oneDNN (#17142)
* ARM32 ACL kernels in oneDNN

* Fixed review comments

* Fixed ERF

* Disabled several eltwise tests on arm32
2023-04-26 13:50:10 +04:00
Vitaliy Urusovskij
02bfa7804b Add copyright (#17218) 2023-04-26 13:44:31 +04:00
Luwei Zhou
6cb6c5958a Fix the SDL issues. (#17107)
* Fix the SDL issues.

* Applied review comments.

* Update Slice test case to test non-const axis input.
2023-04-26 13:35:36 +04:00
Chenhu Wang
737864bdc7 [CPU] layout alignment to improve perf for interpolate pillow modes (#17079)
* infer planar layout with [1,2] axis as nhwc layout pass and kernel

* leftover comments apply

* comment apply
2023-04-26 11:33:17 +02:00
Ivan Tikhonov
95ca54d0ab Update ConstantFolding transformation to support Gather with dynamic input (#16973)
* ConstFold Gather op in case of dynamic dims in data input

* Update ConstantFolding transformation to support Gather with dynamic input; add test

* always mark ShapeOf nodes as can_be_folded

* add additional checks for fused_names in the gather test

---------

Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
2023-04-26 13:22:47 +04:00
Vladimir Paramuzov
ce5f65af14 [GPU] Use hash of test name for random generator initialization (#17213) 2023-04-26 12:52:38 +04:00
Ekaterina Aidova
6389f423bf [PT FE]: implement scaled dot product attention (#17178)
* [PT FE]: implement scaled dot product attention

* Apply suggestions from code review

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

* Update src/frontends/pytorch/src/op/scaled_dot_product_attention.cpp

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>

---------

Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
2023-04-26 12:51:02 +04:00
Ekaterina Aidova
5857c4438b [PT FE]: switch on tracing as main path if example inputs provided (#17194) 2023-04-26 12:50:43 +04:00
Eddy Kim
09265083ed [GPU] fixed a missing data type (#17200)
* fixed missing data type

* updated the resolution for better accuracy check
2023-04-26 08:28:18 +00:00
Roman Kazantsev
7cf9d109e8 [TF FE] Implement optimal conversion of body graphs (#17211)
* [TF FE] Implement optimal conversion of body graphs

Preliminary setting input shapes and types for body graph InputModel
provides more optimal conversion of body-graphs.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix build issue

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-26 12:12:54 +04:00
Maciej Smyk
5682e178dd DOCS shift to rst - Opsets D (#17205)
* Update Operations_specifications.md

* Update Divide_1.md

* Update DFT_7.md

* Update DetectionOutput_8.md

* Update DetectionOutput_1.md

* Update DetectionOutput_1.md

* Update DepthToSpace_1.md

* Update DeformablePSROIPooling_1.md

* Update DeformableConvolution_8.md

* Update DeformableConvolution_1.md

* Update DeformableConvolution_8.md

* fix

* fix

* Update DFT_7.md

* Update DFT_7.md

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-04-26 10:11:13 +02:00
Mateusz Tabaka
dfaa4e7bd6 Add ConvertSubtractWithConstant to MOCTransformations (#17058)
* Add ConvertSubtractWithConstant to MOCTransformations

Ticket: CVS-62419

* fix test_mo_import_from_memory tests

* move test file

---------

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-04-26 11:37:42 +04:00
Mateusz Tabaka
da4316845f ConvMulFusion - handle ConvolutionBackpropData with 3 inputs (#17145)
* ConvMulFusion - handle ConvolutionBackpropData with 3 inputs

Ticket: 98769

* add using

* use compare functions
2023-04-26 11:37:31 +04:00
Sungeun Kim
3c485feea8 removed case to choose onednn impl for deconv (#17108)
- in_dt(f16) wei_dt(f16) out_dt(f32)
2023-04-26 13:20:11 +09:00
Egor Duplenskii
dabd5ee412 [CPU][TESTS] Fix cmake test dependencies (#17202)
Co-authored-by: Maksim Doronin <maksim.doronin@intel.com>
2023-04-26 01:17:12 +04:00
Gorokhov Dmitriy
edec7bb897 [CORE] Disable fp32->fp16 optimized constant conversion impl (#17189) 2023-04-25 15:50:24 +00:00
Maciej Smyk
72533a7da1 DOCS shift to rst - Quantizing Models with Accuracy Control, Documentation, Get Started & Learn OpenVINO (#16997)
* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* Update AccuracyAwareQuantizationUsage.md

* rst

* fixes
2023-04-25 16:06:34 +02:00
Maciej Smyk
49b5d039db DOCS shift to rst - Opsets B (#17169)
* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_1.md

* Update BatchNormInference_5.md

* Update BatchToSpace_2.md

* Update BinaryConvolution_1.md

* Update Broadcast_1.md

* Update Broadcast_3.md

* Update Bucketize_3.md

* fix

* fix-2
2023-04-25 16:06:17 +02:00
Anastasiia Pnevskaia
acd424bb5e Show message with suggestion to try legacy FE in case of conversion error (#17088)
* Moved exception checks to _convert(), added suggestion to try legacy TF in case of conversion failure.

* Added test.

* Added send_conversion_result() method.

* Small correction.

* Update tools/mo/openvino/tools/mo/convert_impl.py

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Moved test_suggest_legacy_fe() test to check_info_messages_test.py.

* Removed not needed import.

* Small correction.

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-04-25 13:57:01 +00:00
Ilya Lavrenov
57d4ca27e6 Revert "Proper ACL version detection (#17152)" (#17206)
This reverts commit 1aec450fc6.
2023-04-25 17:36:18 +04:00
Przemyslaw Wysocki
923b6f297c [PyOV] Move environment markers to requirements.txt files (#17113)
* WIP

* WIP

* Debug

* WIP

* Expand function to other setup.py files

* Revert mxnet

* Update docstring

* restore defusedxml

* Update tools/mo/requirements.txt

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Code review

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-04-25 13:25:21 +00:00
Vladislav Golubev
a8278ba4a6 [LPT] FQ reference implementation reused in foldFakeQuantize function (#17096)
* [LPT] reused reference FQ implementation in fold_fake_quantize

* [LPT] Removed legacy parameters

* Added plugin tests with per-channel FQ for GrConv wo reshape

* Apply folding only in the case when FQ data input is constant

* EliminateFQ fix
2023-04-25 14:08:01 +01:00
Aleksandr Voron
43a42fa9cd fix (#17179) 2023-04-25 16:50:37 +04:00
Evgenya Stepyreva
cd4c012f08 LogicalNot: convert precision (#17061)
* CVS-108362 LogicalNot: convert precision

* Test
2023-04-25 12:43:44 +00:00
Ilya Lavrenov
2e3deb8d8f Windows arm64 support for CPU plugin (#17075)
* ARM32 support

* ARM32 support

* Fixed packaging

* Windows arm64 support

* Updated submodule

* 32 bits support in Intel CPU plugin

* Fixed FindAcl.cmake

* Enable proper conditional compilation for Windows ARM64

* Enable proper conditional compilation for Windows ARM64

* Updated submodule

* Updated submodule

* Updated submodule

* Updated submodule

* Updated submodule

* Added template_extension to CPU func tests dependencies

* Updated submodule

* Enabled runtime model tests

* Updated submodule

* Submodule update
2023-04-25 16:41:28 +04:00
Maxim Vafin
d423491bcb Fix Scatter value infer for fully dynamic value (#17165)
* Fix issue with dynamic Scatter in MO IR Reader

* Only normalize for 1D tensors

* Add test
2023-04-25 16:38:49 +04:00
Vitaliy Urusovskij
11a2b75161 Fix TSAN issue No2 in GNA plugin (#17185)
* Fix TSAN issue No2 in GNA plugin

* Misprint
2023-04-25 16:32:06 +04:00
Jan Iwaszkiewicz
512b186231 [PyOV] Enable group_convolution_backprop test (#17186) 2023-04-25 12:19:56 +00:00
Evgenya Stepyreva
ee4ccec190 TensorFlow Lite FrontEnd: documentation changes (#17187)
* First glance doc changes

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow_Lite.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-04-25 16:18:24 +04:00
Oleg Pipikin
27210b6505 Fix Coverity issue #1505788 (#17173) 2023-04-25 16:13:42 +04:00
Oleg Pipikin
ab879f143c Add check to avoid out of bounds segfault in scatterNDupdate (#17066)
* Add check to avoid out of bounds segfault in scatterNDupdate

* Fix code style
2023-04-25 16:13:14 +04:00
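The kind of out-of-bounds guard described in the commit above can be sketched generically. This is an illustrative Python/NumPy model of a ScatterNDUpdate with an explicit bounds check — not OpenVINO's actual implementation; the function name and the raising behavior are assumptions made for the sketch:

```python
import numpy as np

def scatter_nd_update(data, indices, updates):
    """ScatterNDUpdate sketch with an explicit bounds check.

    Illustrative only: an out-of-range index raises IndexError
    instead of writing past the buffer (the segfault class of bug).
    """
    out = data.copy()
    index_depth = indices.shape[-1]
    flat_indices = indices.reshape(-1, index_depth)
    flat_updates = updates.reshape((-1,) + data.shape[index_depth:])
    for idx, upd in zip(flat_indices, flat_updates):
        # Reject indices outside the data shape before touching memory.
        if any(i < 0 or i >= dim for i, dim in zip(idx, data.shape)):
            raise IndexError(f"index {tuple(idx)} out of bounds for shape {data.shape}")
        out[tuple(idx)] = upd
    return out
```

A reference implementation like this makes the failure mode explicit: without the check, the write on the last line would land at an arbitrary offset.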
Aleksandr Voron
6e11645018 [CPU] Add axis check to ACL Reduce isSupported method (#17188)
* fix

* fix2
2023-04-25 16:11:50 +04:00
Sergey Shlyapnikov
0a5975bdfa [GPU] Add real kernels' execution timings collection for DumpProfilingData debug option (#15797) 2023-04-25 14:33:08 +04:00
Ilya Lavrenov
1aec450fc6 Proper ACL version detection (#17152) 2023-04-25 14:05:52 +04:00
Sungeun Kim
8c09a128ac [GPU] update weights_layout for GroupConv 1d spatial (#17109)
* update weights_layout for GroupConv 1d spatial
2023-04-25 18:54:54 +09:00
Georgy Krivoruchko
3f07c8b48b [TF FE] Added MetaGraph file format (#16524)
* Separated SavedModelVariablesIndex class from Saved Model

* Renamed SavedModelVariablesIndex class

* Enabled Tensorflow MetaGraph

* Enabled Tensorflow MetaGraph

* Covered VariableV2 and Assign nodes

* Applied review comments

* Added tests

* Added names to input/output ports too

* Fixed naming for using with MO

* Applied part of review comments

* Renamed meta.cpp and saved_model.cpp

* Applied shared_ptr for memory management of PtrNode

* Fixing CI

* Prevent cycles while passing through the graph

* Released requirement for Checkpointable Object Graph

* Changed naming approach to align port order

* Changed renaming order (before reordering)

* Added a Placeholder translator which checks updated shape

* WA missing Identity name

* Fix CI and restored lost translators after rebase

* WA for output names

* Removing unused params after cutting a model

* Prevents crash in case VariableV2 appears in a frozen model

* Fixed saved model in case no variables.index is found, but variables exist

* Changed approach for handling native formats support

* Aligned behavior with freezing .meta files

* Fixed behavior for cutting a model by input tensor

* Applied review comments
2023-04-25 13:46:06 +04:00
Maciej Kwapulinski
9c01de4b6e [GNA] fix: embedded export is available for embedded targets only (#17105)
* fix: embedded export is available for embedded targets only

* [GNA] functional tests fix - embedded export should NOT be possible on non-embedded target

* [GNA] tests added/justified to process both negative and positive path
2023-04-25 10:45:47 +01:00
Andrew Kwangwoong Park
72906ca242 [GPU] Fix i8/u8 representation error for clamp due to overflow (#17183)
* [GPU] Fix i8 representation error for clamp due to overflow

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Fix to not include in ocl code

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-04-25 09:41:01 +00:00
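The overflow class of bug fixed above can be illustrated with a generic sketch (a Python/NumPy stand-in, not the actual GPU kernel; `clamp_i8` is a hypothetical helper): if the clamp bounds are not first saturated to the i8 range, casting a bound such as 300 to int8 wraps around and silently corrupts the clamp.

```python
import numpy as np

def clamp_i8(x, lo, hi):
    """Clamp an int8 tensor, saturating the bounds to the i8 range first.

    Without this saturation step, naively casting an out-of-range bound
    (e.g. 300) to int8 would wrap around and tighten the clamp by mistake.
    """
    lo = int(np.clip(lo, -128, 127))
    hi = int(np.clip(hi, -128, 127))
    return np.clip(x, lo, hi)
```

With saturated bounds, a clamp to [-500, 300] on i8 data is correctly a no-op rather than a wrap-around artifact.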
Ekaterina Aidova
39ed9a624f [PT FE]: extend batch norm to support training mode (#17040) 2023-04-25 11:27:00 +02:00
Vladimir Paramuzov
f736c71feb [GPU] Fix reshape split for dynamic models + accuracy fix for SAM (#16911) 2023-04-25 09:21:31 +00:00
Alexandra Sidorova
9247906879 [Snippets][CPU] Fixed coverity (#17094) 2023-04-25 09:12:58 +00:00
hyunback kim
19f8f5a3a7 [GPU] Disable oneDNN post-op Prelu in FC,gemm (#17084)
* [GPU] Disable oneDNN post-op relu

Only disable Prelu fusion in Fc, gemm
 - check additional data input

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-04-25 18:06:22 +09:00
Yuan Hu
2255bb25fd fix input issue of ScatterNDUpdate conformance test (#16406)
* fix input issue of ScatterNDUpdate conformance test

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* fix typo and optimize temporary variable

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

---------

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
2023-04-25 13:00:22 +04:00
Vladimir Paramuzov
ca1102b855 [GPU] Support MVN cases with axis=-1 w/o decomposition (#17020) 2023-04-25 12:59:03 +04:00
Katarzyna Mitrus
0617ce9089 Set ONNX opset in Reduce ops layer tests (#17170) 2023-04-25 10:38:56 +02:00
Ilya Lavrenov
22aee08958 Revert "[CPU] Fix data race in concurrent compile_model calls (#17164)" (#17184)
This reverts commit 8879ef53a7.
2023-04-25 12:01:02 +04:00
Nikita Malinin
e37288fbcc [POT] Added inference shape for in-place statistics (#17114)
* Added inference shape for inplace statistics

* Update graph_builder
2023-04-25 11:14:34 +04:00
Vitaliy Urusovskij
5533de5dd8 Fix TSAN issue in GNA plugin (#17163) 2023-04-25 10:33:06 +04:00
Aleksandr Voron
10f53cb40b [CPU] Force NCHW layout for ACL Interpolate executor (#17121)
* fix

* fix 2nd case
2023-04-25 10:05:15 +04:00
Alexandra Sidorova
4750523c81 [Snippets][CPU][Test] Allow tokenize MHA without machine dependency (#17064) 2023-04-25 09:40:11 +04:00
Egor Duplenskii
478725c719 [CPU] Reorganize function tests. Remove legacy bfloat16 tests (#17130) 2023-04-25 09:32:54 +04:00
Yuan Hu
e79db660ce [CPU]GroupConvolutionLayer CPU test for AMX (#13539) 2023-04-25 09:21:17 +04:00
Vladimir Paramuzov
d1f1fa2b39 [GPU] Enable broadcast transition pass (#17172) 2023-04-25 09:04:37 +04:00
Vladimir Paramuzov
3bb0fb61f6 [GPU] Support 8d tensors in activation and quantize primitives (#16947) 2023-04-25 09:02:54 +04:00
Sun Xiaoxia
6663367183 Xiaoxia/fix performance regression (#17036)
* add _streams_info_table in Executor config

* change useHyperThreading init value

* restore cmake

* fix comments

* add calling enableCpuPinning property

* fix judgment about number of sockets in init_stream

* fix test case compile issue

* fix ci test case fail issue

* modify GetPerformanceStreams calling position

* add affinity in get_cpu_pinning

* modify ecore judgement

* add no binding core on ADL

* fix ci issue, add get_num_numa_nodes()

* fix code style

* fix StreamsHasHigherPriority issue

* fix according to comments

* fix performance regression

* fix code style

* code style

* fix warning

* fix ci test failed

* fix ImportNetwork issue

* fix ci test case issue

* fix smoke_CachingSupportCase_CPU issue

* add ExportOptimalNumStreamsTest test

* modify test name

* modify ExportOptimalNumStreams test

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-04-25 04:35:47 +00:00
Chen Peter
28e54e75ea Update MULTI doc per current implementation (#17045)
* Update MULTI doc per current implementation

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Update the description of Multi-Device execution mode

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Remove sample code and video

1. Remove the sample code for removed behaviors
2. Remove the video to avoid confusion

Signed-off-by: Peter Chen <peter.chen@intel.com>

---------

Signed-off-by: Peter Chen <peter.chen@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-25 10:28:48 +08:00
Pawel Raasz
38a5ee719d Remove unused lambda capture (#17160) 2023-04-25 00:39:40 +00:00
Egor Duplenskii
8879ef53a7 [CPU] Fix data race in concurrent compile_model calls (#17164) 2023-04-25 00:01:03 +00:00
Anastasiia Pnevskaia
00847cba7d Fix of tf.GenericFunction conversion in convert_model() (#17125)
* Added GenericFunction support, fixed tf.Function test.

* Added test, added TF version checks.

* Small correction

* Removed Trackable type support.

* Small correction.
2023-04-24 22:57:56 +00:00
Taylor Yeonbok Lee
ce23ce00f1 [GPU] Fixed fused_primitive_desc to have -1 value for dep_start_idx (#17099)
* Fixed fused_primitive_desc to have -1 value for dep_start_idx

* Fixed dgpu i8 errors
2023-04-24 22:21:58 +00:00
Roman Kazantsev
3830125e3b [TF FE] Report the full list of unsupported operations (#17143) 2023-04-24 21:33:07 +00:00
Eddy Kim
d972a71b4c [GPU] Fixed the prepare_quantization pass to support grouped_weights_shape (#17093)
* fixed to support grouped_weights_shape

* added grouped_weights unit tests
2023-04-24 14:21:50 -07:00
Piotr Krzemiński
22a81e0e58 [PT FE] Enable stable tests for sort & argsort (#16415)
* [PT FE] Enable stable tests for sort & argsort

* Update test_argsort.py

* [PT FE] Update to opset11

* [PT FE] Remove redundant argument from argsort test

---------

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-04-25 01:21:16 +04:00
Maksim Kutakov
9fce01f8cc [CPU] Remove legacy dynamic batch processing from the plugin (#17052)
* Intermediate state

* Remove old dyn batch path in the new api

* Remove legacy dyn batch support

* Remove dyn batch support field from the config

* Revert changes to the common part

* Revert accidental change in the test file

* Minor fixes

* Fix support for dyn batch without setting current

* Typo fix
2023-04-25 01:18:10 +04:00
Evgenya Stepyreva
758ec32001 CVS-108963 Coverity fixes (#17161) 2023-04-25 01:03:56 +04:00
yanlan song
64b5a4595a Bell/use cpu for dynamic models (#17149)
* clean up multi code path

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

* potential locking issue

Signed-off-by: fishbell <bell.song@intel.com>

* remove unnecessary variable

Signed-off-by: fishbell <bell.song@intel.com>

* clear redundant return syntax

Signed-off-by: fishbell <bell.song@intel.com>

* still use cpu for dynamic models

Signed-off-by: fishbell <bell.song@intel.com>

* merge master

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-25 01:01:11 +04:00
Jade Cho
5c21dcec4d [GPU] Fix detection output kernel build error on dGPU (#17150)
+ Check local memory size used in the kernel and choose proper kernel.
+ Select DO_STAGE_0_CAFFE instead of DO_STAGE_0_CAFFE_OPT
2023-04-25 01:00:26 +04:00
Vladislav Golubev
a6b1544acf Review comments applied (#17168) 2023-04-25 00:59:03 +04:00
Mateusz Mikolajczyk
8e5b0650a0 [PT FE] Fix for prim::Constant optional or containing list of tensors (#16754)
* Fix Constant list of tensor

* Write TorchScript transformation

* Handle Optional Tensor Constants

* Improve tests

* Add comments

* Try fix flake
2023-04-24 22:56:42 +02:00
Evgenya Stepyreva
b452dab8f0 TypeRelaxed<>::clone_with_new_inputs thread safety fix (#16881)
* TypeRelaxed<>::clone_with_new_inputs thread safety fix

* Style

* Make TypeRelaxed<BaseOp>::clone_with_new_inputs copy node the same way as copy ctor of ov::Node

* Removed mutex field from intel_cpu::GraphContext

* Removed all about has_type_relaxed_ops field from the snippets subgraph

* Clonning test
2023-04-25 00:51:18 +04:00
Ilya Lavrenov
83cc2277b4 Fixed compilation with sanitizer (#17175) 2023-04-25 00:44:16 +04:00
Alina Kladieva
f39ab0dbc9 Upper-bound for patchelf (#17177) 2023-04-24 19:52:55 +02:00
Wanglei Shen
10c56708fd update auto architecture document in GitHub for 2023.0 release (#17141)
* update auto architecture doc

* update auto architecture doc

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* update for comments

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-24 15:44:34 +00:00
Tomasz Adamowicz
86ed1e93b6 [Gna] [coverity]fixes (#17122)
* [Coverity] Fix: CID 1502468 - Not restoring ostream format

* [Coverity] Fix: CID 1502524 - Dereference null return value

* [Coverity] Fix: CID 1509007 - Uncaught exception

* [Coverity] Fix: CID 1505779, 1505781, 1505783 and 1505786 - Dereference null return value

* [Coverity] Fix: CID 1502503 - Using invalid iterator

* Revert "[Coverity] Fix: CID 1502524 - Dereference null return value"

This reverts commit b605a493ae.
2023-04-24 14:04:30 +01:00
Maksim Kutakov
f8522a6ea1 [CPU] Rnn weights repacking (#16992) 2023-04-24 15:48:57 +04:00
Vladislav Golubev
f410658d32 [LPT] AddTransformation fix (#17076)
* [LPT] AddTransformation: constants on 0's input support

* AddTransformation: new test instances

* codestyle
2023-04-24 12:15:01 +01:00
Edward Shogulin
a3f14366d9 [LPT] Extending EliminateFakeQuantize transformation (two interval boundaries) (#17140)
* [LPT] EliminateFakeQuantize extending

* tests

* folding quick fix
2023-04-24 11:58:00 +01:00
Ilya Lavrenov
a34ef680f2 Made plugins.hpp generation to be CONFIG dependent (#17139) 2023-04-24 14:48:45 +04:00
Vladimir Paramuzov
faba5fb71e [Transformations] Add threshold for const comparison in Gelu fusion pass to fuse with fp16 precision (#17042) 2023-04-24 14:37:31 +04:00
Vladimir Paramuzov
e8ae1e41ea [GPU] Skip FC fake alignment for some vector by matrix multiplications (#17051) 2023-04-24 14:34:50 +04:00
dependabot[bot]
eac265722f Update networkx requirement from <=2.8.8 to <=3.1 in /tools/pot (#16745)
Updates the requirements on [networkx](https://github.com/networkx/networkx) to permit the latest version.
- [Release notes](https://github.com/networkx/networkx/releases)
- [Commits](https://github.com/networkx/networkx/compare/networkx-0.23...networkx-3.1)

---
updated-dependencies:
- dependency-name: networkx
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-24 13:37:35 +04:00
hyunback kim
63f5c2f0e7 [GPU] Fix levit-128s accuracy issue (#17136)
* [GPU] Fix levit-128s accuracy issue

Wrong batch dims for fused eltwise of gemm.
-> The issue is getting an incorrect batch size for the fused eltwise used by gemm.
     Its rank differs from the src tensor; the eltwise tensor rank was reduced by mistake.
     It only reproduces with batch 1 and a full tensor.
     The batch size here means all non-spatial dims, but the previous implementation used the default batch-dim role.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-04-24 18:16:00 +09:00
Pavel Esir
6ff0cad127 Fix mixed precision inference for quantized IRs (#16785)
* disable mixed precision inference for quantized IRs

* typo fix

* improved solution, disable mixed precision in quantized IRs selectively only for float nodes

* minor typos correction

* added unit-tests

* renamed rt_info

* updated list of nodes for which FQ is propagated; updated unit-tests

* fix failing build
2023-04-24 13:13:04 +04:00
Maxim Vafin
01065338ef Fix MO IR Reader extender for StridedSlice to support empty begin and end masks (#17019) 2023-04-24 13:08:28 +04:00
Tatiana Savina
aa5b6ecac2 DOCS shift to rst - Opset S (#17158)
* ops to rst

* fix errors

* formula fix

* change code

* console directive

* vsplit try highlight

* fix code snippets

* comment fixes

* fix list
2023-04-24 11:02:30 +02:00
Tatiana Savina
b3ea6ceefa DOCS shift to rst - Opset R (#17159)
* ops to rst

* sphinx transition

* try html tag

* try comment

* try code directive

* try code directive

* try highlight

* try console directive

* try line directive

* add highlight for code

* another directive

* introduce console directive

* add code format
2023-04-24 11:02:09 +02:00
Fang Xu
656d7fe380 prebuilt oneTBB binaries for ARM64 (#16904)
* use oneTBB for arm64

* force THREADING=TBB

* test: remove TBB_DIR for linux arm64

* update linux and mac arm64 packages

* update SHA256

* add comment

* disable add_rpath for tbb libraries on mac arm64

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-04-24 09:48:47 +04:00
Daniil Lyakhov
7997354359 POT is deprecated (#16758) 2023-04-24 09:37:57 +04:00
Vladimir Paramuzov
219a0eebdc [GPU] Fix 1d onednn convolutions (#17038) 2023-04-24 09:24:56 +04:00
Min, Byungil
bb0be3c177 [GPU] Resolve failed onednn tests (#16990)
* [GPU] Resolve failed unit-tests on dGPU

+ Modified unit-tests of asymmetric conv with per channel (WA for oneDNN issue)
+ Modified conv unit-tests with padded input or output
+ For testing oneDNN conv, it needs to query oneDNN about format. Applied this to conv tests.
+ Modified accuracy checking logic in unit-tests which have different format on dGPU.
+ reorder from fsv16 to bfyx should not be optimized out if not aligned by 16

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-04-24 14:11:35 +09:00
Ilya Lavrenov
11c3623ebb Fixed compilation errors on Linux arm64 (#17138) 2023-04-23 21:34:37 +04:00
yanlan song
fed06fcb91 resubmit PR#17006 (#17137)
* clean up multi code path

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

* potential locking issue

Signed-off-by: fishbell <bell.song@intel.com>

* remove unnecessary variable

Signed-off-by: fishbell <bell.song@intel.com>

* clear redundant return syntax

Signed-off-by: fishbell <bell.song@intel.com>

* WA build issue on Ubuntu 20.04

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-23 11:56:07 +00:00
Gorokhov Dmitriy
2c450ced24 [CPU] Fixed JIT Reorder impl on Apple targets (#17134) 2023-04-23 01:09:03 +04:00
Ilya Lavrenov
26029c2d48 Enabled runtime model tests (#17131) 2023-04-22 11:07:36 +04:00
Ilya Lavrenov
462cdb54f8 Enabled convolution_backprop_quantize_type CPU tests on non-x64 (#17123) 2023-04-22 01:45:14 +04:00
Ilya Lavrenov
46f8ebfaec Revert "Fix C API unite test case error (#17012)" (#17128)
This reverts commit 63c0089128.
2023-04-22 01:44:34 +04:00
Ilya Lavrenov
fbc28297ec Enabled C-API tests on ARM platform (#17119)
* Enabled C-API tests on ARM platform

* Fixed ARM CPU plugin test on streams
2023-04-21 22:55:18 +04:00
Ilya Lavrenov
d7b775f583 Updated onednn submodule (#17126) 2023-04-21 20:47:37 +04:00
Aleksandr Voron
e31b00c299 [CPU] Enable Python test test_infer_request.test_infer_mixed_values with bool for ARM (#17111)
* Update test_infer_request.py

* enable all py tests
2023-04-21 19:52:32 +04:00
Anastasiia Pnevskaia
50a6c88ea3 Fix of crashes of convert_model() when executed for different frameworks (#16968)
* Fix of class conflicts in different frameworks.

* Remove commented code.

* Moved FakeQuantWithMinMaxVars to common part.

* Fixed BOM package test.

* Removed not needed code.

* Removed not needed code.
2023-04-21 19:29:38 +04:00
Maksim Kutakov
793bbb6ee2 Remove dyn batch support from onednn i8 ref conv (#17106) 2023-04-21 17:44:00 +04:00
Jan Iwaszkiewicz
88cb428763 [PyOV][DOCS] Added Python advanced inference documentation (#17090)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-21 15:22:33 +02:00
Maciej Smyk
c4b155edc2 DOCS shift to rst - Opsets C (#17112) 2023-04-21 13:30:07 +02:00
yanlan song
304991f88b Revert "Clean up unused code (#17006)" (#17110)
This reverts commit 359b444558.
2023-04-21 15:26:01 +04:00
Tomasz Dołbniak
6ea9cc7149 ONNX FE - model loading fix (#17091)
* Path retrieval fix

* More detailed messages in the failing test

* Exe path with model name

---------

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-04-21 15:25:26 +04:00
Jade Cho
8fbd78fb07 [GPU] Fix a bug of fusing eltwise sum post-op. (#17078)
+ When the input of eltwise is a full-tensor constant layer, use binary add
instead of sum as the post-op on oneDNN.
2023-04-21 20:17:35 +09:00
Nesterov Alexander
6ad80576b7 [ARM CPU] Fix smoke_if tests (#17095)
* fix smoke if

* fix smoke if - arm32

* review fix
2023-04-21 14:45:22 +04:00
HARI CHAND BALASUBRAMANIAM
6b44902bf2 Update bug.md (#16880)
Update the OpenVINO GitHub issue submission template.  To allow the submitter to provide more information when submitting an issue.
2023-04-21 02:27:39 -07:00
Sun Xiaoxia
b22d0641cb fix streams is not correct by latency mode (#17101) 2023-04-21 09:21:14 +01:00
Yury Gaydaychuk
4ae7e1ff61 [CPU] Commit slider: safe file opening (#16755) 2023-04-21 11:42:42 +04:00
Vladislav Golubev
31efdfd00d [Transformations] BroadcastTransition transformation (#16861) 2023-04-21 11:35:04 +04:00
Chen Xu
70d80a750f [CPU] Reduce node asymmetrical precision optimization (#16829) 2023-04-21 11:00:16 +04:00
Mingyu Kim
ba23e2290e [GPU] Choose onednn impl for reorder (#17077)
* [GPU] Choose onednn impl for reorder
* [GPU] Add unit test
2023-04-21 13:56:58 +09:00
yanlan song
359b444558 Clean up unused code (#17006)
* clean up multi code path

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

* potential locking issue

Signed-off-by: fishbell <bell.song@intel.com>

* remove unnecessary variable

Signed-off-by: fishbell <bell.song@intel.com>

* clear redundant return syntax

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-21 04:23:55 +00:00
Sun Xiaoxia
c186ffdf0d Xiaoxia/stream process refactor (#16692)
* add _streams_info_table in Executor config

* change useHyperThreading init value

* restore cmake

* fix comments

* add calling enableCpuPinning property

* fix judgment about number of sockets in init_stream

* fix test case compile issue

* fix ci test case fail issue

* modify GetPerformanceStreams calling position

* add affinity in get_cpu_pinning

* modify ecore judgement

* add no binding core on ADL

* fix ci issue, add get_num_numa_nodes()

* fix code style

* fix StreamsHasHigherPriority issue

* fix according to comments

* merge master

* fix build issue

* fix template plugin test case failed issue

* fix build issue

* fix cpu test failed

* Update plugin.cpp

---------

Co-authored-by: Wanglei Shen <wanglei.shen@intel.com>
2023-04-21 01:38:32 +00:00
hyunback kim
344db564fc [GPU] Fix dump graph failure issue in levit-128s model. (#17055)
* [GPU] Fix dump_graph failure issue in levit-128s model.

1. to_string() in strided_slice always accesses the begin/end/stride param ids from dependencies
    regardless of max dependencies.
2. Add an exception in dump_full_node(). It helps as follows:
   - Avoids a dump failure. Graph dumps are usually used during debugging,
      so this reduces unnecessary debugging time caused by dump failures.
   - You can immediately see which node has failed, making it easy to find.

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-04-21 09:14:47 +09:00
Wanglei Shen
14d4fcf827 enable smoke_SetConfigAffinity for ARM (#17092) 2023-04-20 20:25:35 +01:00
Anastasia Kuporosova
a8b5ccc03f [PyOV] Check for glibc version in python test (#17081)
* [PyOV] Check for glibc version in python test

* fix for no glibc
2023-04-20 19:28:55 +04:00
Karol Blaszczak
0c12ee6015 [DOCS] fix for copyright and trademark glyphs (#17021) 2023-04-20 14:11:16 +02:00
Karol Blaszczak
dcfa1f6881 [DOCS] bring back conda guide 23.0 (#17031) 2023-04-20 14:09:07 +02:00
Wanglei Shen
70e0eed075 update default affinity for macOS (#17080) 2023-04-20 11:50:04 +00:00
Mateusz Bencer
77a5d1aa03 [ONNX FE] Fixed handling duplicates during graph extraction (#17071) 2023-04-20 11:10:09 +00:00
Vladislav Golubev
f100c36ac9 [LPT] Revert changes in fold_reshape (#17068) 2023-04-20 11:43:59 +01:00
Yuan Hu
e53fc86988 [CPU] [Coverity] fix Uninitialized issue in node mvn (#16980)
* fix uninit issue in node mvn

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* Revert "fix uninit issue in node mvn"

This reverts commit 45e68725f3.

* fix Uninitialized issue in MVNAttrs ctor

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

---------

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
2023-04-20 12:34:49 +02:00
Yuan Hu
bef25ddf43 [CPU] resubmit PR to optimize shape infer of Reshape (#16942)
* Revert "Revert "[CPU] optimize shape infer of Reshape (#16537)" (#16703)"

This reverts commit 06cacfe2a7.

* fix issue with reshape connected to nonzero

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* add test case for nonzero connected to reshape

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* add debug code

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* fix test case issue

fix shape_nonzero test case issue
fix a bug in original test case

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

* Revert "add debug code"

This reverts commit c305464c8c.

* fix other review comments except test case

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>

---------

Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
2023-04-20 12:34:21 +02:00
Maksim Kutakov
70c3979602 [CPU] Execute constants in order with the create primitives calls (#16795) 2023-04-20 14:22:57 +04:00
Mikhail Ryzhov
0f7e6de346 [GNA] WA to fix config parsing of scale factor map (#17060)
* WA to fix config parsing

* clang fix

* excluded json
2023-04-20 10:51:23 +01:00
Maciej Smyk
7d574e3114 DOCS shift to rst - Opsets (#17059) 2023-04-20 10:59:35 +02:00
Maxim Vafin
552143c9cd [MO] Fix Interpolate-11 in MO (#17002)
* Fix Interpolate-11 in MO

* Add forgotten file

* Fix output type of TopK-11

* Do not force precision on port 1 for mode scales

* Update tools/mo/openvino/tools/mo/ops/interpolate.py

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Andrei Kochin <andrei.kochin@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-04-20 09:51:38 +02:00
Anastasiia Pnevskaia
5026aa044a Removed naming of inputs in MO Python API PyTorch tests. (#17070)
* Removed naming of inputs in MO Python API PyTorch tests.

* Fixed coping of data.

* Small correction.

* Small correction.

* Small fix.
2023-04-20 11:49:45 +04:00
Marcin Kusmierski
4e6a129672 [GNA] Fix tests configuration to ensure that 3_5 target is tested too (#17046) 2023-04-20 09:18:36 +02:00
Nesterov Alexander
d00731c0ab [ARM CPU] Fix tests for eltwise layer (#16917) 2023-04-20 09:57:29 +04:00
Taylor Yeonbok Lee
5bded05ae6 [GPU] Improve shape infer performance (#17039)
* [Dynamic shape] Improve shape infer performance for iGPU by preventing copy from usm_device to usm_host in lock()

* Fixed is_shape_infer_dep to use pointer instead of unique_id because unique_id may not be set
2023-04-20 03:23:52 +00:00
Tomasz Dołbniak
1bd9a1e01c Passing tests re-enabled (#17067) 2023-04-20 01:55:42 +01:00
Ekaterina Aidova
f9fbcbe419 update omz submodule (#16986) 2023-04-20 03:53:39 +04:00
Ilya Churaev
71880aadd3 Deprecate set batch method (#17057)
* Deprecate set batch method

* Fixed some errors

* Suppress warning in tests

* Fixed warning in GPU

* Deprecate python
2023-04-19 20:21:18 +00:00
Ilya Lavrenov
1ec22a3180 32 bits support in Intel CPU plugin (#16900) 2023-04-19 22:10:20 +04:00
Eddy Kim
fab8236af3 [GPU] Fixed OneDNN fc+sum fusion serialization (#16988)
* fixed onednn fc+sum fusion serialization

* removed the white list for sum post op fusion

* added deconv fusing caching tests
2023-04-19 09:43:27 -07:00
Pawel Raasz
4c3a4a8992 Correct inf bound check for 32-bit in shape infer (#17047) 2023-04-19 19:33:01 +04:00
Nesterov Alexander
3d33cb2b43 [ARM CPU] Fix eltwise op tests (Divide) (#17029)
* update skip list

* skip change

* fix divide

* review fixes

* review fixes #2
2023-04-19 18:52:09 +04:00
Egor Duplenskii
39f843fb78 [CPU] Move to oneDNN 3.1 release version (#16721) 2023-04-19 18:26:30 +04:00
Tomasz Dołbniak
d230ad9313 Interpolate op cleanup (#17026) 2023-04-19 15:47:29 +02:00
Evgenya Stepyreva
497a19edf6 CVS-102308 Read tflite model to vector (#17048) 2023-04-19 13:27:41 +00:00
Pawel Raasz
d7083fb4db Improve slice and strided slice shape inference (#16940)
when start, stop are interval values
2023-04-19 16:20:29 +04:00
Vitaliy Urusovskij
a611104b12 FQ tests leftovers (#17009)
* Try to return skipped test after FQ fix

* Copy FQ broadcast case from CPU to TEMPL tests

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-04-19 12:32:44 +01:00
Tatiana Savina
921bebc1ec change ov version (#17056) 2023-04-19 11:28:41 +00:00
Mateusz Tabaka
7338257e00 Fix transformations tests on 32 bit build (#17043)
Ticket: 104593
2023-04-19 11:28:00 +00:00
Artyom Anokhov
bb6a3251a8 README.md: Added Conda Badge (#17025)
* README.md: Added Conda Badge

* README: Moved Conda badge after PyPI status
2023-04-19 12:35:31 +02:00
Egor Duplenskii
4ce5548c9a [GNA] fix compilation warning (#17027)
Which becomes error with '-Werror'
2023-04-19 10:00:24 +00:00
Marcin Kusmierski
90b485715a [GNA] Fix tests failing due to dependency to CI environment state (#17007) 2023-04-19 11:42:15 +02:00
Vladislav Golubev
00a4fc514c Review comments applied (#16856) 2023-04-19 10:11:47 +01:00
Szymon Irzabek
a8c7c19cb9 [GNA] Fix channel multiplier calculation (#17010) 2023-04-19 11:01:27 +02:00
Xuejun Zhai
63c0089128 Fix C API unit test case error (#17012)
* Fix C API unit test case error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix test error with relative path

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
Co-authored-by: River Li <river.li@intel.com>
2023-04-19 11:26:12 +04:00
Chenhu Wang
34b3abc0e2 [CPU][Snippets]fix candidate merged node's subgraph inputs have common subgraph input (#16249) 2023-04-19 11:12:52 +04:00
Tingqian Li
1525f6cc16 [CPU] WA: Stop fusing per-OC eltwise into Matmul with input rank >4 (#16824) 2023-04-19 11:11:04 +04:00
Vladimir Paramuzov
dbd20ec799 [GPU] Added try/catch for device detection loop to skip platforms which throw an exception (#17011) 2023-04-19 11:05:24 +04:00
Chenhu Wang
498486588e [CPU]interpolate-11 support (#16698) 2023-04-19 11:05:09 +04:00
Ilya Churaev
ca0b30c082 Added components relationships on architecture page (#17037) 2023-04-19 10:51:23 +04:00
Shen, Wanglei
626caf7f2a update file location for 2023.0 release (#17034) 2023-04-19 10:38:23 +04:00
Wilson Seok
2401b0aa3c [GPU] Skip reorder_node_to_split to avoid change of input data type for onednn kernel support (#16827)
* skip reorder_node_to_split when new input data type of onednn kernel is not supported
* update layout_optimizer and add unit test
2023-04-19 15:00:55 +09:00
Marcin Kusmierski
1281074e15 [GNA] Fix for GNA 3_5 fixing tests after review (#16954)
* [GNA] Fix review comments for Convolution2DLayer tests

* [GNA] fix review comments for smoke_ConvolutionPoolingStrideNotEqualWindowTest_Above

* [GNA] Fix review comments to GNAPWLExtraSegmentsTestFixture

* [GNA] Fix review comments to smoke_LSTMCellBasicCommon
2023-04-19 07:31:34 +02:00
Kelvin Choi
bd8ca523b9 [GPU] Fix proposal sort condition (#16981) 2023-04-18 21:05:32 -07:00
Ilya Lavrenov
3ad3a90e98 Enabled several arm64 tests (#17032) 2023-04-19 02:35:32 +04:00
Anastasia Kuporosova
9f250edc7f [PyOV] use generator in multi config (#17004)
* [PyOV] use generator in multi config

* use ov

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-04-18 22:04:22 +00:00
Maksim Kutakov
38d97709d1 [CPU] Remove allocation by the upper bound (#16666) 2023-04-19 00:25:58 +04:00
Maksim Kutakov
531b5a3657 [CPU] Optimize TBB usage in the parallel dynamic shapes processing (#16517) 2023-04-19 00:25:03 +04:00
Aleksandr Voron
d4ac0b0e79 MultipleLSTMCellTest fix (#17015) 2023-04-18 23:27:45 +04:00
Anastasiia Pnevskaia
078f28911b Fixed parsing of 'layout' param (#16999)
* Fixed layout parsing.

* Small correction.

* Removed wrong change.
2023-04-18 22:43:38 +04:00
Roman Kazantsev
e93c8e1b1c [TF FE] Skip one Keras ConvLSTM2D test (#17028)
* [TF FE] Mark one Keras ConvLSTM2D test with xfail

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Change to skip

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-18 22:28:30 +04:00
Ilya Lavrenov
d5cc696e00 Removed contrib repo usage from Linux ARM64 Azure Pipeline (#17016)
* Removed contrib repo usage from Linux ARM64

* Removed contrib repo usage from Linux ARM64
2023-04-18 21:33:49 +04:00
Ilya Churaev
566ef01a3f Remove constructors for ov Exceptions (#16938)
* Remove constructors for ov Exceptions

* Fixed linux build

* Fixed ONNX Frontend

* Fixed paddle

* Fixed exceptions in tests

* Deprecate constructors for ov::Exception

* Suppress some warnings

* Merge several exceptions

* Some small changes

* Suppress more warnings

* More warnings

* mode warnings

* Suppress more warnings

* More warnings
2023-04-18 21:02:26 +04:00
Mateusz Mikolajczyk
441dad2eea Fix bug with reshape on empty tensor (#17014)
* Fix empty tensor reshape

* Add test
2023-04-18 20:56:03 +04:00
Vladislav Golubev
e6341917cd [LPT] PullReshapeThroughDequantization transformation fix (#16395)
* PullReshapeThroughDequantization fix

* Added a test-case
2023-04-18 15:22:31 +01:00
Katarzyna Mitrus
2a5c69abc6 [ONNX FE] Fix ONNX DequantizeLinear-13 import dynamic shape (#16966) 2023-04-18 13:03:55 +02:00
Marcin Kusmierski
d5123056bb [GNA] Fix issues with GNA 3.5 - Fix pooling for Convolution1D and Convolution2D (#16734)
* [GNA] Fix 1D Pooling realized as part of 2D Convolution

* [GNA] Fix pooling for GNA_SW_FP32 mode when fused with Convolution2d

* [GNA] Fix ConvolutionPoolingStrideNotEqualWindowTest tests for 3_5
2023-04-18 11:41:04 +02:00
Tatiana Savina
e3fdfc4e09 DOCS shift to rst Plugin updated (#17000)
* shift to rst

* test snippets

* test build fixes

* change code block

* test new path

* change path

* add cancel

* change note format

* add docs

* change path to snippet

* change path to snippet

* change list format

* fix list

* fix snippets path

* fix format

* fix lists

* fix snippet

* compiled model doc fix

* change indentation

* small fixes to format
2023-04-18 10:59:15 +02:00
Mikhail Ryzhov
f97eeb59d5 [GNA] Fixed cases when FQ is not the 1st layer (#16602)
* Fixed cases when FQ is not the 1st layer

* clang formatted

* Added support of Gather
2023-04-18 10:43:31 +02:00
Pavel Esir
d70d8509c3 [FP16][IE] exclude MVN and NormalizeL2 from precision sensitive marking (#16953)
* exclude MVN from mixed infer

* fix align_mixed_fp32_fp16_types_test.cpp

* fix unit-tests for convert_precision.cpp

* code style fix
2023-04-18 16:20:49 +09:00
Pawel Raasz
3494edeed2 Fix Cast util functor when cast from floating point to integer (#16959)
* Fix cast to helper from floating point to integer
when floating value is out-of-range of integer

* Fix negative float cast if outside integer range
2023-04-18 07:29:31 +04:00
Min, Byungil
bf2870a63b [GPU] Resolved failed unit-tests (#16618)
+ Resolved issues related to deconv
+ Modified test-cases for conv, fc.
+ In fc unit-tests, tiny tensors showed unexpected behavior. Modified tensor size a little
+ Bugfix in get_test_stream

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-04-18 11:22:43 +09:00
Ilya Lavrenov
d15cdc81cd Fixed multi-config generators (#17003) 2023-04-18 02:44:38 +04:00
Shen, Wanglei
3f9cc0112a Hot Fix: using all small cores as Ecore (#16978)
* using all small cores as Ecore

* add test case
2023-04-18 00:06:36 +04:00
Ilya Lavrenov
adc733f1e9 Enabled several ARM CPU tests (#16995)
* Enabled several ARM CPU tests

* Removed not-valid tests

* Fixed several template plugin tests

* Removed non-working suppressions

* Disabled 2 tests on ARM CPU
2023-04-17 22:44:43 +04:00
Egor Duplenskii
e52445dda4 [CPU] Clean up temporary debug toggles (#16972) 2023-04-17 22:00:37 +04:00
Sofya Balandina
c14d0d7389 [conformance] Fix iteration of ops_list when recalculating test counters (#16993)
Roman Kazantsev
ae06322cb7 [TF FE] Correct layer test for ConvLSTM2D and add to the pre-commit (#16996) 2023-04-17 17:54:19 +00:00
Mikhail Ryzhov
14f38bfde8 [GNA] Reverted internal overload correction (#16962)
* reverted overload correction

* added comment

* Enabled tests

* Revert merge error

This reverts commit daed290452.
2023-04-17 17:39:58 +00:00
Ivan Tikhonov
930441b223 TransposeSinking: Gather and ReverseSequence (#16532)
* Resolve the performance issues in TransposeSinking transformation

* codestyle

* fix warning as error, fix tests failures

* fix ts for Concat and Reduce

* Fix TransposeReduceBackward

* fix the issue in TransposeFuse transformation

* fix TransposeReduce transformations

* Fix TransposeReduction, fix TransposeSinkingSplit, add unsqueeze support

* delete debug print

* Add additional validations

* fix node validation

* Fix validate for split, revert changes for concat, add BatchToSpace/SpaceToBatch

* Add SpaceToBatch/BatchToSpace

* fix TS for Interpolate + codestyle

* fix gna build

* Support TS for Interpolate, VariadicSplit, IsInf, IsNan, IsFinite + refactoring

* add the missed line

* add include

* TransposeSinking tests refactoring: part1

* TransposeSinking tests refactoring: part2

* Add limited support for StridedSlice op

* codestyle

* TransposeReduction: skip the case when 2nd input for Squeeze is not provided

* Transpose sinking tests refactoring: part 3. + Revert changes in MOC.

* fix build

* codestyle

* Add tests for TS backward transformations, update TransposeSinkingFuse transformation, delete StridedSlice transformation prototype + tests refactoring

* fix unary tests

* Fix warning as error on Windows

* Add new tests for Unsqueeze/Squeeze; refactoring; remove debug code

* TransposeSinking: add support for Slice op

* Add descriptions to the transformations, add additional checks

* fix a warning

* TransposeSinking Refactoring part2: move the transformations to a separate folder, align namespaces

* TransposeSinking refactoring: class names, namespaces

* codestyle

* resolve merge conflicts

* codestyle

* TSReduction refactoring, move Unsqueeze/Squeeze transformations to separate files, added limited support for Reshape op + tests

* fix minor mistakes

* fix warnings

* Added TSSlice transformation to TSGeneral, created TransposeSinkingGeneral alias in ov::pass namespace

* refactoring

* codestyle

* fix TSSqueeze/TSUnsqueeze transformations

* delete debug serialize

* remove TransposeSinking from MOC

* fix TSSqueeze/TSUnsqueeze transformations in case of Reshape op

* delete debug code

* fix unit tests, revert changes for TSSlice transformation

* TransposeSinking: Add gather support

* TransposeSinking: add support for Gather, ReverseSequence ops; Fix TSReduction, TSSqueeze, TSUnsqueeze transformations

* fix new constants shape

* fix TSReduction, TSSqueeze, TSUnsqueeze transformations; codestyle

* fix TSGather

* Fix TSGather transformation, add tests

* Updated TSGather transformation, updated the tests

* fix TSGather, codestyle

* Add missing files for TS testing

* fix TS for ReverseSequence op; codestyle

* revert local changes

* fix warnings

* delete const folding passes

* disable constant folding for shapeOf subgraph only

* correct thirdparty versions

* codestyle
2023-04-17 16:38:48 +00:00
Ilya Lavrenov
f4fe8400a7 Generic ARM fixes (#16994) 2023-04-17 20:37:10 +04:00
Anastasia Kuporosova
f9098cd67c [PyOV] Mark add_openvnio_libs as internal (#16971)
* [PyOV] Mark add_openvnio_libs as internal

* fix flake8
2023-04-17 17:34:13 +01:00
Vitaliy Urusovskij
47f0d72f02 Fix broadcasting issue in FQ ref implementation (#16812) 2023-04-17 20:33:07 +04:00
Aleksandr Voron
496a608a28 [CPU] ReduceMean fix for ACL Executor (#16987)
* reduce fix

* enable gru, rnn and lstm tests
2023-04-17 19:17:50 +04:00
Karol Blaszczak
1471a6e8de [DOCS] benchmarks new page (#16620) 2023-04-17 16:43:57 +02:00
Ilya Churaev
25826bfe7d Added deprecation of nv12 legacy API (#16982)
* Added deprecation of nv12 legacy API

* Added new files

* Change macros

* Suppress warnings for preprocessing

* Suppress warnings in tests

* Suppress warnings for Windows
2023-04-17 14:13:43 +00:00
Anastasiia Pnevskaia
dc2fa65224 Support of unnamed saved_model_dir in MO Python API (#16542)
* Added support of unnamed saved_model_dir.

* Switch TF2 layer tests for unnamed saved_model_dir.

* Added test.

* Correction of comment.

* Removed unnecessary pytest mark.

* Code correction, added comment.
2023-04-17 17:20:27 +04:00
Ilya Lavrenov
4a997de4a3 Disabled failed ARM CPU tests (#16989) 2023-04-17 15:34:56 +04:00
Shen, Wanglei
98393c0da1 update number of threads per stream on Ecore to 2 for aggressive model on hybrid platform (#16857)
* update number of threads per stream on Ecore to 2 when an aggressive model runs on a hybrid platform

* update for corner case and add test case
2023-04-17 18:42:55 +08:00
Jan Iwaszkiewicz
816c0f76e2 [PyOV] Deprecate PerformanceMode.UNDEFINED and refactor deprecation (#16965) 2023-04-17 12:38:28 +02:00
bstankix
7c41d78b5d Add OVMS benchmarks (#16984)
* Add ovms support for Graph Builder

* Add new OVMS dataset
2023-04-17 12:27:58 +02:00
Katarzyna Mitrus
834e611bde [Interpolate-11] Additional tests for Interpolate-11 reference implementation (#16956)
* bf precision tests

* i32 prec tests

* Default axes test

* Add f16 prec tests

* i8 prec tests

* Update eval types in the new file
2023-04-17 11:39:09 +02:00
Przemyslaw Wysocki
9ca85eb363 [PyOV] Update docs with Python 3.11 (#16366) 2023-04-17 11:33:15 +02:00
Przemyslaw Wysocki
d72d833a96 [PyOV] Enable Python 3.11 (#15144)
* Bump ONNX version

* Bump protobuf

* Add xfails and skips

* Add tickets

* Skip ONNX Serialization tests

* Compile ONNX with C++17

* Force cpp17 - 2

* Use MSVC check

* Relax python reqs, enable 311 in azure

* Fix setupvars error

* Ignore watchdog error

* Update tensorflow

* Minor change

* Bump onnx to 1.13.1

* Bump protobuf to 3.20.3

* Debug test tf

* Xfail tests in comp

* Update comp tests

* Update tf reqs

* Remove deprecated ONNX function

* Align PDPD FE protobuf req with 2.4.1

* Satisfy dependency review

* Attempt to fix dependency review

* Revert pdpd protobuf

* Skip pdpd tests

* Fix MO-TF-PB test

* Skip TF test case

* Enable py311 on rest of jobs

* Try disabling pdpd req

* Exclude pdpd form cmake

* Update .ci/azure/linux.yml

Fixed unmerged merge-conflict

* CR

* Fix reqs

* Skip pdpd tests

* Disable pdpd tests building in cmake

* Skip another pdpd cmake

* Add file

* Add paddle constraint to tests

* Disable paddle reqs

* Debug prints

* Skip TF test if Python ver is 3.11

* Apply Mish cr comments

* Debug

* Debug

* Constrain tensorflow_addons

* Fix pdpd skipping

* Add debug prints

* Update skips

* Remove prints

* Minor change

* Update OMZ commit

* Fix some tests

* Minor change

* Disable pdpd at all

* Disable pdpd at all

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-04-17 13:30:17 +04:00
Wang Wangwang
589bd6d076 Add the implementation to GetExecGraphInfo API in AUTO plugin (#16979) 2023-04-17 17:24:07 +08:00
Ilya Churaev
aa1f26a2b7 Enable more tests for Template plugin (#16874)
* Enable more tests for Template plugin

* Removed deprecated API

* Fixed typo

* Added internal properties

* Removed incorrect tests

* Fixed code style

* Enabled some tests
2023-04-17 07:07:09 +00:00
Andrew Kwangwoong Park
7282728cec [GPU] Fix incomplete condition for NMS shape inference (#16960)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-04-16 22:41:57 -07:00
Eddy Kim
9b9c31d46b [GPU] Updated to allocate memory in order of size while deserializing (#16867)
* updated to allocate memory in order of size while deserializing

* fix windows build error

* updated to check dependencies between unconnected nodes
2023-04-16 22:33:57 -07:00
Egor Duplenskii
175db3523a [CPU] Add few tests to smoke scope (#16963) 2023-04-17 09:04:57 +04:00
Taylor Yeonbok Lee
c96a5c4b70 Fix prepare padding which was not handling group size properly (#16977) 2023-04-16 21:42:03 -07:00
Ilya Lavrenov
31398bb3eb Fixed deprecated API warnings (#16949) 2023-04-17 07:19:53 +04:00
Roman Kazantsev
18da874c57 [MO] Remove use of mapping file and its generation (#16944)
* [MO] Remove use of mapping file and its generation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix pylinter findings

* Remove usage of mapping file in the layer tests

* Fixing layer tests for legacy frontend

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-15 10:38:33 +00:00
Andrew Kwangwoong Park
507b3251ef [GPU] Fix to skip reorder optimization during post_optimize_graph phase (#16908)
* [GPU] Fix to skip reorder optimization during post_optimize_graph phase

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Apply comment

Signed-off-by: Andrew Park <andrew.park@intel.com>

* update condition to check empty padding

Signed-off-by: Andrew Park <andrew.park@intel.com>

* add condition to check batch size

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-04-15 02:24:06 +00:00
Taylor Yeonbok Lee
824a5aa7fb [GPU] Fix nonzero issue in constant propagate (#16933)
* Fix gather_nonzero not to be marked as constant.
Even though count_nonzero is to be turned into a constant, gather_nonzero still cannot infer its shape at the moment of constant propagation.

* Apply the fix only for gather_non_zero
2023-04-14 23:16:34 +00:00
Sofya Balandina
9f3bc22e7a [apiConformance] Refactor core_integration.cpp (#16416)
Roman Kazantsev
4ba0ac5476 [MO][TF FE] Support delayed batch setting (#16937)
* [TF FE] Support delayed batch setting

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Cover BOM list

* Add unit-tests for batch setting with layout

* Apply code-review: check batch size

* Apply code-review: default index for any dimension

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-14 22:35:43 +00:00
Edward Shogulin
8bdc5bc85f [LPT] Support ONNX quantized models coming from ORT PTQ (#14811)
* [LPT] FakeQuantize fuse

* GPU & CPU tests alignment

* refactoring & comments

* doc quick fix

* quick fix
2023-04-14 21:22:55 +00:00
Ilya Lavrenov
de2e9faa58 Corrected pattern for Linux ARm64 tests disablement 2023-04-14 22:28:58 +04:00
Gorokhov Dmitriy
cc6fd80d0a [CPU] Fixed Softmax and TopK nodes initilization for ARM devices (#16950) 2023-04-14 22:13:42 +04:00
Oleg Pipikin
7ce40996e5 Fix copy constructor and assignment for ov::Any (#16757)
* Fix copy constructor and assignment for ov::Any

* Fix1

* Apply comments

* Add test

* Fix code style

* Fix2

* Fix3
2023-04-14 22:12:18 +04:00
Aleksandr Voron
dc941f69ae fix (#16969) 2023-04-14 22:09:50 +04:00
Aleksandr Voron
fe98b8ee13 reduce 6d+ fix (#16931) 2023-04-14 22:09:22 +04:00
Ilya Lavrenov
df5ada8b19 Skipped failed tests on Linux ARM64 (#16970) 2023-04-14 21:56:28 +04:00
Liubov Talamanova
0c0aa5c997 [POT] Fix POT CI (#16955) 2023-04-14 17:21:01 +00:00
Tomasz Jankowski
129670ab1e [Transformations] Fix Parameter name override while removing Select node (#16934)
Details:
Applies a valid node replacement method that avoids Parameter name override

Tickets: 101209

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
2023-04-14 18:36:25 +02:00
Sun Xiaoxia
25058da48f Hot Fix: threading is disabled with "threading=omp" (#16923)
* fix omp threading disable

* the divisor and dividend are reversed
2023-04-14 20:24:28 +04:00
Ilya Lavrenov
f5c2db73d5 Moved heavy OVInferConsistencyTest tests to nightly (#16967) 2023-04-14 20:02:06 +04:00
Anastasiia Pnevskaia
24c9d95779 Support of unnamed input for MO Python API. (#16373)
* Support of unnamed input for MO Python API.

* Code correction, tests fix.

* Small fix.

* Added tests for unnamed input, code fixes.

* Small code correction.

* Removed code comment.

* Added tests, fixed bugs.

* Minor corrections, added comments.

* Code refactoring.

* Added defaults for InputCutInfo.

* Fixed error.

* Small fixes.

* Removed wrong change.

* Fixed error.

* Corrected input description.
2023-04-14 19:37:46 +04:00
Irina Efode
ae34720818 [CONFORMANCE] Add check devices in parallelization over devices (#16964)
* [CONFORMANCE] Add check devices in parallelization over devices

* Remove extra
2023-04-14 15:50:35 +01:00
Vladimir Paramuzov
231569db16 [GPU] Fix group axis value for blocking desc (#16936) 2023-04-14 14:42:21 +00:00
Tatiana Savina
cf12f92fae DOCS shift to rst - IR articles (#16437)
* add IR documentation
2023-04-14 14:11:00 +00:00
Tatiana Savina
9e5be9ad24 DOCS shift to rst Advanced topics (#16454) 2023-04-14 16:06:59 +02:00
Ilya Lavrenov
9b38e5168f Updated oneDNN to fix crash on aarch64 Linux (#16961) 2023-04-14 17:49:21 +04:00
Irina Efode
d07fa6f80e [CONFORMANCE] Fix Opset filters (#16928)
* [CONFORMANCE] Fix filters related to opsets

* fix

* Fix op_summary

* Update op_summary.cpp

* fix

* fix
2023-04-14 17:42:42 +04:00
Irina Efode
fd824cf036 [CONFORMANCE] Correct pass rate when skipped tests are added (#16844)
* init

* Refactor

* Static and dynamic approach

* next

* fix

* small fixes

* fix
2023-04-14 17:00:19 +04:00
Ilya Lavrenov
b9f82e37b9 Removed WAs from packaging scripts related to old ARM plugin (#16952) 2023-04-14 16:17:12 +04:00
Vitaliy Urusovskij
04a4971481 Small docs fixes (#16945) 2023-04-14 15:14:48 +04:00
Aleksandr Voron
2c7cbdb293 [TEMPLATE] Skip TopK tests for ARM (#16946)
* skip topk tests for arm

* changed macros

* added include
2023-04-14 14:50:10 +04:00
Marcin Kusmierski
d6f7e5e84d [GNA] Fix UT for adding extra segments to PWL-s after convolution (#16732) 2023-04-14 11:25:10 +02:00
Maciej Kwapulinski
435a79a2a3 Fix stride height setting in input_conv test (#16813) 2023-04-14 08:53:24 +02:00
Marcin Kusmierski
67aa807892 Fix smoke_LSTMCellBasicCommon for GNA 3.5 (#16924) 2023-04-14 08:43:30 +02:00
Egor Duplenskii
e98bd0dae4 [CPU] Correct crop in FQ optimized formula (#16887) 2023-04-14 10:43:05 +04:00
Michael Frank Hansen
a7228534af DOCS Adding results for RPL-S (#16862)
* Adding results for RPL-S
* Create OVMS-benchmark-data.csv
2023-04-14 08:01:59 +02:00
Aleksandr Voron
55fa8da5e4 [CPU] MVN 1D fix in ACL Executor (#16930) 2023-04-14 10:00:20 +04:00
Xuejun Zhai
802742e59f split evaluate_map.cpp to small files (#16216)
* Split evaluate_map.cpp

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix compiler error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI build error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI build error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI build error

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI format issues

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI format issues

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI format issues

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Fix CI format issues

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Add op v7::Gelu

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-04-14 06:57:19 +04:00
Pavel Esir
68f46ff9a1 [MO] compress_to_fp16=False by default (#16854)
* compress_to_fp16=False by default

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* note about RAM consumption for FP16-compressed models

* detailed note about RAM usage

* update 'get_compression_message()'

* corrected get_compression_message: remove info about RAM

* fix pytorch convert layer tests

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-04-14 01:16:41 +00:00
Ilya Lavrenov
de8f34c8f0 Fixed plugin name in tests for ARM CPU (#16932) 2023-04-13 22:35:21 +04:00
Ilya Lavrenov
85f9d1392c Used cmake interface in ARM compute (#16929) 2023-04-13 22:35:03 +04:00
Luwei Zhou
6aeb054e48 [CPU] Use ONEDNN3.x weight/dest scale API to optimize perf (#16805)
* [LPT][CPU] Added callback for AddTransformation

* [WIP] Convolution scales fusion

* Force to use weight scale to test performance.

* Update on interface.

* Use weight scale to adapt to ONEDNN 3.x API changes.

* Update the code.

* Update ONEDNN fix for gemm_x8s8s32x_conv kernel

* Fix the bug in ONEDNN and deconvFusingScale.

* Fuse FC Bias when having DQscale.

* WR to perf regression on

* Update onednn version.

* Fix bug and clean code.

* FC fusing dq scale bug fix.

* Add more comments and debug information.

* Fix CI issues.

* Merge ONEDNN changes.

* Fix CI issues and bugs.

* Apply review comments.

* Update comments.

* Apply review comments.

* Avoid using LPT BiasAttribute RTInfo.

* Applied review comments.

---------

Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
2023-04-13 19:02:48 +02:00
Maxim Vafin
25015f9790 [PT FE] Support prim::DictConstruct on the output (#16894)
* Support dict on the output

* Preserve output order
2023-04-13 16:42:17 +00:00
Aleksandr Voron
0426c645eb Fix for ambiguous overloaded function call (#16927)
* fix

* change type to unsigned long long
2023-04-13 19:28:30 +04:00
Shen, Wanglei
3461064507 update benchmark_app to remove setting UNDEFINED with -hint none (#16695)
* Remove setting ov::hint::PerformanceMode::UNDEFINED from benchmark_app

* update benchmark_app

* update python code and description

* update python code

* fix code style issue

* update python code

* update c++ app
2023-04-13 14:29:13 +00:00
Maxim Vafin
c592ecd44e [MO] Fix legacy If (#16613)
* Fix legacy If

* Add test for If op

* Small fix
2023-04-13 18:10:40 +04:00
bstankix
5795a50a22 [docs] Update switchers 5 (#16925) 2023-04-13 16:07:53 +02:00
Egor Duplenskii
a016e4e6bb [IE_TESTS] Avoid any extra work for the skipped tests (#16915)
i.e. do not clone the function if it is unnecessary
2023-04-13 13:23:38 +00:00
Vladimir Paramuzov
5299f26168 [GPU] Handle unsupported eltwise fusion for onednn gemm in dynamic cases (#16875)
* [GPU] Handle unsupported eltwise fusion for onednn gemm in dynamic cases

* Update src/plugins/intel_gpu/tests/fusions/gemm_fusion_test.cpp

Co-authored-by: Sergey Shlyapnikov <Sergeishlyapnikov@gmail.com>

---------

Co-authored-by: Sergey Shlyapnikov <Sergeishlyapnikov@gmail.com>
2023-04-13 15:55:44 +04:00
Roman Lyamin
656428bc4f [GPU] Skip kernel logic for Concat fix (#16885) 2023-04-13 15:55:05 +04:00
Min, Byungil
da7ee613a3 [GPU] Disable oneDNN failed TCs on dGPU (#16853)
Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-04-13 20:41:29 +09:00
Taylor Yeonbok Lee
df6557cfad [GPU] Fixed not to allocate internal buffer with size 0 (#16899)
* Fixed not to allocate internal buffer with size 0

* Fixed unittest failure
2023-04-13 10:32:12 +00:00
Ilya Churaev
f70954bda9 Fixed build for macOS with LLVM from brew (#16907) 2023-04-13 10:20:30 +00:00
Ilya Churaev
ad2dc4d479 Fixed ARM CPU tests. (#16910)
* Use name from OUTPUT_NAME property

* Fixed plugins without OUTPUT_NAME
2023-04-13 13:29:42 +04:00
Karol Blaszczak
7782d85b26 [DOCS] model caching update to GPU (#16909)
Update GPU.md
Update Model_caching_overview.md

Co-authored-by: Eddy Kim <eddy.kim@intel.com>
2023-04-13 11:09:16 +02:00
Mateusz Tabaka
5d80bca16e [TF frontend] Add test for Split->Conv->Concat scenario (#16816) 2023-04-13 10:42:17 +02:00
Jan Iwaszkiewicz
63c5be3ed2 [PyOV] Fix models checking and ensure correct destructor calls in tests (#16814) 2023-04-13 10:37:05 +02:00
Gorokhov Dmitriy
ae350c7107 [CPU] Fixed unused-private-field compilation errors (#16905) 2023-04-13 12:20:18 +04:00
Anastasiia Pnevskaia
4921d1ad28 Fix for slowdown of convert_model() after multiple runs (#16751)
* Used singleton class for version check.

* Moved VersionChecker to utils/version.py, added tests.

* Minor corrections.

* Sort imports.

* Small correction.

* Small correction.
2023-04-13 11:59:11 +04:00
Nikolay Shchegolev
061ba1d773 [CPU] Convert i64->i32 for Reference node. (#16797) 2023-04-13 11:55:53 +04:00
Xuejun Zhai
e238bfc1d0 Fix C API test failed with debug version on Windows & MacOS (#16903)
Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-04-13 10:59:42 +04:00
Wang Wangwang
1037f24c46 [AUTO] Remove exclusive_async_requests property from AUTO plugin (#16840)
* Remove exclusive_async_requests property from AUTO plugin

* Update test case

* Add test case to test incorrect config

* Remove the test case related to exclusive_async_requests property of AUTO plugin
2023-04-13 06:35:42 +00:00
Vladimir Paramuzov
67c07ccebe [GPU] Support 7D and 8D tensors (#16810) 2023-04-13 09:04:14 +04:00
Tomasz Dołbniak
dcf6fb1e1a Allow stable sort in TopK when sorting by indices (#16811)
* Allow stable sort in TopK when sorting by indices

* Clarification of stable sorting by index and unblocked test

* XFAIL the test again

* Clarification of sorting by indices

* Revert of changes in previous versions of TopK (spec)
2023-04-13 05:26:01 +02:00
Vladislav Golubev
9c6d287a58 [LPT] GroupConvolution plugin tests: test class corrected to restore behavior in arm plugin instances (#16883)
* [LPT] GroupConvolution plugin tests: restored test params default values

* return FQOnData shape automatic generation
2023-04-13 01:41:44 +01:00
Min, Byungil
1ba87971d1 [GPU] fix unit-test seg fault error on dGPU (#16879)
Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-04-13 09:20:06 +09:00
Aleksandr Voron
73be9d31b6 Skip CPU tests on ARM platform (#16891)
* [CPU] ARM architecture support

This patch extends existing CPU plugin capabilities with ARM CPUs optimized support

* Fixed undefined reference in unit tests

* refactoring

* Fixed Eltwise node behavior for ARM

* init commit

* tests passed

* fix skip failures

* Apply suggestions from code review

---------

Co-authored-by: dmitrygo <dmitry.gorokhov@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-04-13 02:34:36 +04:00
Ilya Lavrenov
86142b0f4b Fixed compilation with gcc-12 (#16895) 2023-04-13 02:24:19 +04:00
Maciej Kwapulinski
0e975ffbb6 [GNA] smoke_MemoryTest suite enabled for transformation=NONE (#16481)
* relative threshold for smoke_MemoryTest suite adjusted for GNA

* smoke_MemoryTest suite enabled

* GNA MemoryTest. TransformNone: input changed to [5-10]. TransformLatency is disabled.

* RR comments applied

* RR2 comments applied

* RR3 comments applied

* clang-format-9 fix

* RR4 comments applied
2023-04-12 21:16:08 +00:00
Karol Blaszczak
65a49e903c Update prerelease_information.md (#16898) 2023-04-12 20:04:25 +00:00
Ilya Lavrenov
418f70abb0 Improvements related to arm support (#16892) 2023-04-12 23:02:57 +04:00
Taylor Yeonbok Lee
bee357bcf8 Fix softmax perf of stable diffusion (#16869) 2023-04-12 12:01:31 -07:00
Ilya Lavrenov
298bf15a1b Debian / RPM changes for ARM CPU plugin (#16871) 2023-04-12 23:00:07 +04:00
Aleksandr Voron
9b5ca2bb6a Add ACL license (#16889) 2023-04-12 19:53:47 +04:00
Wang, Yang
86d7c97fa9 Update the logic of benchmark app property setting (#16427)
* 1. Refine the logic of the ov::device::properties setting.
2. The config overrides will be performed if the same config setting comes from the CMD line.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update configuration sample file within README.md.

* Update.

* Update.

* 1. Update configuration example file within README.md for Python version.
2. Implement the config DEVICE_PROPERTIES value conversion between the string type and a Python dictionary.
3. Update the configuration file loading and dumping logic.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update.

* Update.

* Update.

* Update.

* Update.

* 1. Enable configs to be interchangeable between C++ and Python.
2. Update perf_count showing logic.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Revert the logic of showing performance counters.

* Update help msg for loading config option.

---------

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2023-04-12 15:32:54 +00:00
Gorokhov Dmitriy
c283d21215 [CPU] ARM architecture support (#15256)
* [CPU] ARM architecture support

This patch extends existing CPU plugin capabilities with ARM CPUs optimized support
2023-04-12 18:42:05 +04:00
Sofya Balandina
a368e10fff [apiConformance] Stop work after crash and save report (#16539) 2023-04-12 18:31:39 +04:00
Irina Efode
c2a90f4c01 [CONFORMANCE] Fix error with import sigkill (#16884) 2023-04-12 18:21:57 +04:00
Wang Wangwang
c2c2143f45 clean AB property from virtual plugin and core global config (#16877)
* Benchmark_app set ov::hint::allow_auto_batching through compile_model

* Remove the process about allow_auto_batching in set_property of core

* Remove allow_auto_batching and auto_batch_timeout property from AUTO plugin

* Reserve the info logs and add API to check auto_batching

* Update test case, rm AB property test from core config tests

* Update some API in AUTO plugin config
2023-04-12 17:37:57 +04:00
Tomasz Dołbniak
fb49228fec Pillow modes in the preprocessor's resize mechanism (#16601) 2023-04-12 15:30:42 +02:00
Sofya Balandina
ed5148b75f [apiConformance] Refactor io_tensor tests (#16348) 2023-04-12 17:22:01 +04:00
Vladimir Paramuzov
7d4496bb12 [GPU] Remove unused constants from the graph (#16873) 2023-04-12 16:52:26 +04:00
Mateusz Bencer
e737e18b02 [ONNX FE] Fix Squeeze v1 (#16865) 2023-04-12 14:33:49 +02:00
Sergey Shlyapnikov
997f60f1c3 [GPU] Fix shape_of shape inference optimization (#16863) 2023-04-12 15:44:34 +04:00
Mateusz Tabaka
bdd79fe931 CompressQuantizeWeights - use f32 precision when computing scale and zero point (#16794)
Ticket: 101825
2023-04-12 12:42:39 +02:00
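The CompressQuantizeWeights fix above (compute scale and zero point in f32) can be illustrated with a minimal sketch. This is not the actual transformation code; the function name, the 0..255 quantization range, and the sample min/max values are assumptions chosen only to show why a lower float precision drifts:

```python
import numpy as np

def scale_zero_point(lo, hi, qmin=0, qmax=255, dtype=np.float32):
    # Affine quantization parameters, computed in the given float precision
    # (illustrative only; not the OpenVINO transformation itself).
    lo, hi = dtype(lo), dtype(hi)
    scale = (hi - lo) / dtype(qmax - qmin)
    zero_point = np.round(qmin - lo / scale)
    return scale, zero_point

# Computing in float32 keeps scale/zero_point accurate for small weight
# ranges; repeating the same arithmetic in float16 drifts slightly.
s32, z32 = scale_zero_point(-0.003, 0.004, dtype=np.float32)
s16, z16 = scale_zero_point(-0.003, 0.004, dtype=np.float16)
```

Here `s16` differs from `s32` in the low digits because every intermediate value is rounded to half precision, which is the kind of error the fix avoids.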
Szymon Irzabek
496fe7a7db [GNA] Extend unsupported concat detection to include cascaded concat with convolution (#16756) 2023-04-12 12:19:42 +02:00
Przemyslaw Wysocki
69d6ef33fc [PyOV] Align and bump numpy, further tidy up requirements (#16652)
* Align numpy

* Simplify the rest

* Minor change

* Minor change

* Restart CI

* Update paddle reqs
2023-04-12 13:14:38 +04:00
Marcin Kusmierski
b755d17090 [GNA] Fix plugin crash when infinite loop discovered. (#16770) 2023-04-12 10:00:52 +02:00
Maxim Vafin
23c90aecea Add support for opset10 and opset11 in MO IR Reader (#16742)
* Add support for opset10 and opset11 in MO IR Reader

* Fix unique

* Refactor tests

* Fix Unique shape infer

* Update tests

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Apply review feedback

* Fix BOM tests

* Update tools/mo/unit_tests/mo/utils/ir_reader/ops_test.py

* Improve error log

* Fix test fails when using pytest

* Add changes forgotten in last commit

* Fix error message

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-04-12 11:35:52 +04:00
Ivan Tikhonov
132dceb146 Delete redundant node copies in TSSqueeze, TSUnsqueeze and TSReduction transformations (#16753)
* Delete redundant node copies in TSSqueeze, TSUnsqueeze and TSReduction transformations, add new tests

* codestyle

* codestyle
2023-04-12 11:30:48 +04:00
Bo Liu
4bb9222c6e fix Paddle unit tests unexpected exceptions and seg fault issue (#16808)
* fix Paddle unit tests unexpected exceptions and seg fault issue

* parse confine from reqfile to keep aligned with other requirements

* Apply suggestions from code review

* Apply suggestions from code review
2023-04-12 11:13:25 +04:00
Karol Blaszczak
4b16c7554e [DOCS] minor fixes for front_ext and pre-notes (#16866) 2023-04-12 07:55:34 +02:00
Roman Lyamin
f8aacf3b19 [GPU] Small fix for gather_nonzero (#16858) 2023-04-12 09:15:49 +04:00
Steve Yoo
0312d8cf1b Skip asymmetric compensation if its type is not data, and add its unittests (#16494) 2023-04-11 20:16:25 -07:00
Roman Lyamin
2312ec79a2 [GPU] Skip failing lstm tests (#16868) 2023-04-12 02:10:42 +02:00
Edward Shogulin
586dd4fb0a [Snippets] BF16 enforce in snippets (#16587) 2023-04-12 01:12:17 +02:00
Anastasia Kuporosova
31aa35b646 [PyOv] remove commented functions without implementation (#16864) 2023-04-12 01:07:29 +04:00
Przemyslaw Wysocki
ea213f687a Fix regex (#16850) 2023-04-12 01:06:54 +04:00
Wang, Yang
3740ba9226 [IE Sample] incorrect nstreams retrieved from plugin (#16849)
* Retrieve the ov::num_streams through compiledModel rather than through plugin.

* Update python version.
2023-04-12 01:06:20 +04:00
Ivan Tikhonov
920900fbda Delete the redundant check in convert method of TF FrontEnd class (#16846)
* remove a check in convert method

* delete unused variables and comment

* leave only one pass::Manager in normalize method
2023-04-12 01:05:16 +04:00
Ilya Churaev
4a43753e02 Enable some tests for Template plugin (#16832)
* Remove the skip of template plugin tests

* Enable some skipped tests for template plugin

* Added cancel callback, collect per-layer statistic, fixed tests

* Fixed template tests

* Rename internal API terminate to cancel

* Fixed windows tests

* Fixed logic with performance counters
2023-04-12 01:02:28 +04:00
Ian Hunter
209db8a29b Update ie_common.h (#16860) 2023-04-12 00:52:02 +04:00
Andrew Kwangwoong Park
63b16baa7e [GPU] Fix strided slice clamped negative begin with negative stride (#16843)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-04-11 11:52:22 -07:00
Roman Kazantsev
9e89b6c5f6 [TF FE] Support NonMaxSuppression with named outputs (#16835)
* [TF FE] Support NonMaxSuppression with named outputs

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Simplify the test for NMS named outputs

* Share a script for test model generation

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-11 19:14:59 +02:00
Taylor Yeonbok Lee
7513e9dee1 [GPU] Applied w/a to resolve softmax accuracy issue (#16818)
* Applied w/a to resolve softmax accuracy issue
The original impl resulted in accuracy issue if leftover is not aligned with subgroup size.
(e.g., for shape [1024, 306] where the lws = 32, itemsNum = 9, leftover = 18, subgroup size = 16)
In such a case, the result got wrong if subgroup block read/write is used.
As a w/a, do not use subgroup block read/write if leftover is not aligned with the subgroup size.
However we can come up with better itemsNum size / leftover handling in the follow-up work.

* Fix build error & minor revise

* Fix condition
2023-04-11 10:01:22 -07:00
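The leftover arithmetic in the commit message above can be reproduced directly. A hedged sketch (this mirrors only the alignment check described in the message, not the actual OpenCL kernel; the function name is invented for illustration):

```python
def softmax_block_io_is_safe(axis_len, lws, subgroup_size):
    # Each work-item in the local work-group handles items_num elements;
    # whatever does not divide evenly is the leftover.
    items_num = axis_len // lws
    leftover = axis_len - items_num * lws
    # Per the w/a, subgroup block read/write is only used when the
    # leftover is a multiple of the subgroup size.
    return leftover % subgroup_size == 0, items_num, leftover

# Shape [1024, 306] with lws = 32 and subgroup size 16:
# items_num = 9, leftover = 18, and 18 % 16 != 0 -> block I/O is skipped.
safe, items_num, leftover = softmax_block_io_is_safe(306, lws=32, subgroup_size=16)
```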
Mateusz Tabaka
4fbd094cba BroadcastConstRangeReplacement - skip unsqueeze if Broadcast input is 1D (#16851)
Ticket: 106636
2023-04-11 17:59:03 +02:00
Vladislav Golubev
98afdc848a [LPT] ConvolutionTransformation: support for a new per channel dequantization representation (#16687)
* [LPT][TESTS] GrConv: added test cases with per channel dq on weights and without reshape

* FoldFQ: don't transform FQ with quantization by several dimensions

* ConvolutionTransformation: supported GrConv with per channel dq on weights and without reshape

* fold_reshape: refactoring
2023-04-11 14:07:23 +02:00
Vladislav Golubev
296c2d6603 [Transformations] NonZero horizontal fusion: review leftovers (#16639)
* Review comments applied

* codestyle

* review comments applied
2023-04-11 15:42:43 +04:00
Ekaterina Aidova
ca2265395d [PT FE]: fix aten::mean behaviour for provided dtype (#16790) 2023-04-11 14:29:29 +04:00
Ekaterina Aidova
d41663694c [PT FE]: aten::gather (#16784)
* [PT FE]: aten::gather

* add detach and sign
2023-04-11 14:28:05 +04:00
Ekaterina Aidova
d407bc1b3b [PT FE] fix invalid reshape shape after aten::index (#16821)
* [PT FE] fix invalid reshape shape after aten::index

* support aten::index_select
2023-04-11 12:41:59 +03:00
Eddy Kim
f6ee6e92f8 [GPU] fixed loop serialization logic for multi-stream execution (#16838)
* fixed loop serialization logic for multi-stream execution

* fixed the multistream unit test
2023-04-11 12:40:37 +04:00
Roman Kazantsev
f991f92f8c [TF FE] Test ResourceGather operation and fix debug caps (#16819)
* [TF FE] Test ResourceGather operation and fix debug caps

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix test generation script

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-11 11:33:32 +04:00
yanlan song
527c2dad2a query capacity before popping (#16828)
* query capacity before popping

Signed-off-by: fishbell <bell.song@intel.com>

* refine

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-11 11:25:29 +04:00
Mingyu Kim
615177ae09 [GPU] Update onednn version to latest v3.1 (#16848) 2023-04-11 15:05:35 +09:00
Roman Lyamin
234fe92931 [GPU] MVN 1d dynamic batch case fix (#16826) 2023-04-11 09:42:51 +04:00
Oleg Pipikin
efc647a512 [Snippets][CPU] Fix cycle dependency check in snippets tokenizer (#16760) 2023-04-10 22:36:29 +04:00
Ilya Churaev
81821f3dbb Remove vopset typo (#16833)
* Remove vopset typo

* remove ::
2023-04-10 19:50:06 +04:00
Ilya Lavrenov
f1d6725477 Removed legacy src files from inference library (#16839) 2023-04-10 19:26:09 +04:00
dependabot[bot]
81af7f52cb Bump pytest from 7.2.0 to 7.3.0 in /src/bindings/python (#16830)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 7.2.0 to 7.3.0.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/7.2.0...7.3.0)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-10 16:50:12 +04:00
Ilya Lavrenov
feb08c408f Return benchmark_tool to openvino-dev wheel (#16834) 2023-04-10 16:34:51 +04:00
Ilya Lavrenov
023dc1fa3d Remove warnings during cython call (#16831) 2023-04-10 16:28:15 +04:00
Roman Kazantsev
f36ee94b4b [TF FE] Correct SpaceToBatch layer test (#16823)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-04-10 14:41:02 +04:00
Evgenya Stepyreva
bc7a121a20 Removes legacy transformations from CNNNetworkNGraphImpl::reshape (#15853)
* Removes legacy transformations from CNNNetworkNGraphImpl::reshape

* Removes legacy transformations from CNNNetworkNGraphImpl::reshape

* 6 more models propagate shapes more precisely

* Removes legacy includes

* Fix invalidation

* Test change

* win fix

* Ilya's suggestion

* Unary ops -- removed shape relying on the output of the op, used shapes from the input tensor instead

* Code clean up

* Equal: bounds evaluation

* Equal: bounds evaluation

* Restrict TypeRelaxed from partial_value propagation

* TypeRelaxed: propagate lower/upper bounds

* Remove debug prints

* fix build

* GPU shape inference problem fixed

* Generate Proposals: better dynamic shape propagation

* Style
2023-04-10 12:36:56 +02:00
Ilya Churaev
b921bf2e29 Remove redundant copy of ov::Any in has_rt_info method (#16802)
* Remove redundant copy

* Fixed Python segfault and avoid a copy of ov::Any
2023-04-10 13:56:35 +04:00
Sergey Shlyapnikov
2075dcb7c3 [GPU] Fix Interpolate assert (#16806) 2023-04-10 12:29:01 +04:00
Sofya Balandina
ed50d3782c [apiConformance] Define mandatory scope for infer requiest tests (#16418) 2023-04-10 12:27:20 +04:00
Irina Efode
b7bf760516 [CONFORMANCE] Add re-run of interrupted tests to avoid unreported tests (#16782)
* [CONFORMANCE] Add re-run of interrupted tests to avoid unreported tests

* Fix mistake with interrupted

* test

* Remove extra prints
2023-04-10 12:23:59 +04:00
Wang Wangwang
57684e28ff [AUTO] Remove cache_dir property from AUTO plugin (#16775)
* Remove cache_dir property from AUTO plugin

* Pass the secondary property to hardware plugin

* Update test case

* Update test case, meta plugin will pass the properties to device without checking
2023-04-10 11:42:24 +04:00
Sergey Shlyapnikov
48dee7c30a [GPU] Fix missed weights params update (#16815) 2023-04-10 10:28:06 +04:00
Kelvin Choi
c7fe5ca73b [Coverity] Resource leak in primitive_inst.cpp (#16771) 2023-04-10 10:27:09 +04:00
Egor Duplenskii
b5a0497c19 [CPU][TESTS] Fix cmake subset target (#16710)
cmake iterates over a list and cannot iterate over space separated string
2023-04-10 10:00:35 +04:00
hyunback kim
f4179e8ee4 [GPU] Add to check FC bias data-type logic in issued kernel selection. (#16628)
* Fix unit test failure with broadcast primitive
* After introduce shape canonicalization, static broadcast unit test failed.
* Guilty commit is https://github.com/openvinotoolkit/openvino/pull/16166

Signed-off-by: hyunback <hyunback.kim@intel.com>
2023-04-10 14:54:15 +09:00
Chenhu Wang
d1a23e964e [CPU] Store emitter keep source vec values intact (#16313) 2023-04-10 09:51:50 +04:00
Chen Peter
13874b31e9 [AUTO] Initialize variable / reduce variable copy (#16743)
* [AUTO] Initialize variable / reduce variable copy

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Be compatible with C++11

https://stackoverflow.com/questions/18184096

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Fix C7555

C7555 “use of designated initializers requires at least ‘/std:c++latest’” in extern “C” code.

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Init in constructor and use auto const &

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Fix cpplint issue

common.hpp:72:  You don't need a ; after a }

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Support all possible parameter numbers (0-6)

Signed-off-by: Peter Chen <peter.chen@intel.com>

* Fix cpplint issues

Signed-off-by: Peter Chen <peter.chen@intel.com>

---------

Signed-off-by: Peter Chen <peter.chen@intel.com>
2023-04-10 10:21:40 +08:00
Szymon Irzabek
8c69100439 [GNA] Fix tests which create convolution with stride > kernel size on height dimension (#16804) 2023-04-07 15:42:51 +02:00
yanlan song
769353df00 Support dynamic output models with all possible devices instead of CPU only (#15594)
* with dynamic output models, do not use intermediate IE blobs

Signed-off-by: fishbell <bell.song@intel.com>

* enable tests

Signed-off-by: fishbell <bell.song@intel.com>

* add some log/comment

Signed-off-by: fishbell <bell.song@intel.com>

* refine and enable tests

Signed-off-by: fishbell <bell.song@intel.com>

* change implementation

Signed-off-by: fishbell <bell.song@intel.com>

* fix issue with 1.0API

Signed-off-by: fishbell <bell.song@intel.com>

* enable unit test

Signed-off-by: fishbell <bell.song@intel.com>

* integrate test with folder change

Signed-off-by: fishbell <bell.song@intel.com>

* clean up cmake

Signed-off-by: fishbell <bell.song@intel.com>

* fix warnings

Signed-off-by: fishbell <bell.song@intel.com>

* fix conflict with master

Signed-off-by: fishbell <bell.song@intel.com>

* optimize common mock infer request

Signed-off-by: fishbell <bell.song@intel.com>

* rebase with master

Signed-off-by: fishbell <bell.song@intel.com>

* resolve merge conflict

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-04-07 20:44:36 +08:00
Daria Mityagina
8c40bfd9c7 detected vulnerability with shared_ptr (#16791) 2023-04-07 16:25:05 +04:00
Sofya Balandina
c6fc8e5adc [apiConformance] Exec_network_base refactor and define mandatory scope (#16413) 2023-04-07 16:17:50 +04:00
Ivan Tikhonov
72952bdc45 Disable ConstantFolding for ShapeOf subgraph in TS transformation (#16765)
* Disable ConstantFolding for ShapeOf expressions in TS transformation

* update ModelWithEmptyTensorListAndPushBack: add ShapeOf subgraph
2023-04-07 14:50:59 +04:00
Maxim Vafin
8b7e6878e8 [TF FE] Better support for named ports in tensorflow frontend (#16697)
* Fix in create_same_type_const_scalar; accurate updating type for parameter when inlining function call body

* Added Unique to the list of operations with named output ports (another MUSE fix)

* Draft: working version of extension with named ports in TF

* Merge fixes

* Refactor and productize POC

* Clean up

* Fix build

* Fix code style

* Fix lib so extension test

* Fix namespaces

* Remove usage of Any from CreatorFunction

* Fix build

* Fix arm build

* Apply review feedback

* Fix build after merge

* Apply suggestions from code review

---------

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
2023-04-07 14:16:23 +04:00
Zlobin Vladimir
1eb6ad20c3 Update open_model_zoo submodule (#16779)
Fix model serialize

Ticket 107646
2023-04-07 12:45:49 +04:00
1866 changed files with 64084 additions and 36676 deletions


@@ -31,14 +31,6 @@ pr:
- 'tools/*'
- 'tests/layer_tests/*'
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
variables:
- group: github
@@ -56,7 +48,6 @@ jobs:
VSTS_HTTP_TIMEOUT: 200
BUILD_TYPE: Release
OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(OPENVINO_REPO_DIR)/../openvino_contrib
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
ANDROID_TOOLS: $(WORK_DIR)/android_tools
@@ -66,7 +57,7 @@ jobs:
SHARE_DIR: /mount/cinfsshare/onnxtestdata
CCACHE_DIR: $(SHARE_DIR)/ccache/master/android_arm64
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -76,7 +67,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -121,11 +112,6 @@ jobs:
submodules: 'true'
path: openvino
- checkout: openvino_contrib
clean: 'true'
submodules: 'true'
path: openvino_contrib
- script: |
set -e
sudo -E $(OPENVINO_REPO_DIR)/install_build_dependencies.sh
@@ -147,20 +133,15 @@ jobs:
- task: CMake@1
inputs:
cmakeArgs: >
-GNinja
-G "Ninja Multi-Config"
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DCMAKE_TOOLCHAIN_FILE=$(ANDROID_TOOLS)/ndk-bundle/build/cmake/android.toolchain.cmake
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DANDROID_ABI=$(ANDROID_ABI_CONFIG)
-DANDROID_STL=c++_shared
-DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
-DENABLE_TESTS=ON
-DBUILD_java_api=ON
-DBUILD_nvidia_plugin=OFF
-DBUILD_custom_operations=OFF
-DENABLE_INTEL_GPU=ON
-DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache


@@ -100,7 +100,7 @@ jobs:
BUILD_PYTHON: $(WORK_DIR)/build_python
INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -110,7 +110,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -172,7 +172,8 @@ jobs:
# For running Python API tests
python3 -m pip install -r $(REPO_DIR)/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# For running Paddle frontend unit tests
python3 -m pip install -r $(REPO_DIR)/src/frontends/paddle/tests/requirements.txt
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
#python3 -m pip install -r $(REPO_DIR)/src/frontends/paddle/tests/requirements.txt
# For running ONNX frontend unit tests
python3 -m pip install -r $(REPO_DIR)/src/frontends/onnx/tests/requirements.txt
# For running TensorFlow frontend unit tests


@@ -31,14 +31,6 @@ pr:
- 'tools/*'
- 'tests/layer_tests/*'
resources:
repositories:
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
ref: master
variables:
- group: github
@@ -54,34 +46,18 @@ jobs:
system.debug: true
VSTS_HTTP_RETRY: 5
VSTS_HTTP_TIMEOUT: 200
PYTHON_ARM_VERSION: "3.10.6"
PYTHON_EXEC: "python3.10"
OPENVINO_ARCH: 'aarch64'
NUM_PROC: 1
BUILD_TYPE: Release
OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(OPENVINO_REPO_DIR)/../openvino_contrib
OPENCV_REPO_DIR: $(OPENVINO_REPO_DIR)/../opencv
ONETBB_REPO_DIR: $(OPENVINO_CONTRIB_REPO_DIR)/../oneTBB
BUILD_PYTHON: $(WORK_DIR)/build_python
BUILD_OPENCV: $(WORK_DIR)/build_opencv
BUILD_ONETBB: $(WORK_DIR)/build_onetbb
BUILD_OPENVINO: $(WORK_DIR)/build
BUILD_OPENVINO_PYTHON: $(WORK_DIR)/build_python
CROSSENV_DIR: $(WORK_DIR)/cross_env
INSTALL_OPENVINO: $(WORK_DIR)/install_openvino
INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
INSTALL_ONETBB: $(WORK_DIR)/build/extras/oneTBB
INSTALL_ONETBB_PACKAGE: $(INSTALL_OPENVINO)/extras/oneTBB
INSTALL_OPENCV: $(INSTALL_OPENVINO)/extras/opencv
WORK_DIR: $(Pipeline.Workspace)/_w
SHARE_DIR: /mount/cinfsshare/onnxtestdata
TMP_DIR: /mnt/tmp
OPENVINO_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64
OPENCV_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64_opencv
ONETBB_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64_onetbb
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -91,7 +67,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -121,15 +97,13 @@ jobs:
- script: |
rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
mkdir -p $(BUILD_ONETBB) $(BUILD_OPENCV) $(BUILD_OPENVINO) $(BUILD_OPENVINO_PYTHON) $(BUILD_PYTHON)
mkdir -p $(INSTALL_ONETBB) $(INSTALL_ONETBB_PACKAGE) $(INSTALL_OPENVINO) $(INSTALL_PYTHON) $(INSTALL_OPENCV)
mkdir -p $(BUILD_OPENVINO)
mkdir -p $(INSTALL_OPENVINO)
sudo rm -rf $(TMP_DIR) ; sudo mkdir $(TMP_DIR) ; sudo chmod 777 -R $(TMP_DIR)
sudo mkdir -p $(SHARE_DIR)
sudo apt --assume-yes update && sudo apt --assume-yes install nfs-common
sudo mount -vvv -t nfs cinfsshare.file.core.windows.net:/cinfsshare/onnxtestdata $(SHARE_DIR) -o vers=4,minorversion=1,sec=sys
mkdir -p $(OPENVINO_CCACHE_DIR)
mkdir -p $(OPENCV_CCACHE_DIR)
mkdir -p $(ONETBB_CCACHE_DIR)
displayName: 'Make directories'
- checkout: self
@@ -137,56 +111,25 @@ jobs:
submodules: 'true'
path: openvino
- checkout: openvino_contrib
clean: 'true'
submodules: 'true'
path: openvino_contrib
- script: |
set -e
sudo -E $(OPENVINO_REPO_DIR)/install_build_dependencies.sh
$(OPENVINO_CONTRIB_REPO_DIR)/modules/arm_plugin/scripts/install_build_dependencies.sh
python3 -m pip install --upgrade pip
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/requirements.txt
python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
env:
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
CCACHE_MAXSIZE: 50G
USE_CCACHE: 1
OPENCV_CCACHE_DIR: $(OPENCV_CCACHE_DIR)
ONETBB_CCACHE_DIR: $(ONETBB_CCACHE_DIR)
PYTHON_ARM_VERSION: $(PYTHON_ARM_VERSION)
NUM_PROC: $(NUM_PROC)
BUILD_PYTHON: $(BUILD_PYTHON)
WORK_DIR: $(WORK_DIR)
INSTALL_PYTHON: $(INSTALL_PYTHON)
BUILD_TYPE: $(BUILD_TYPE)
OPENVINO_REPO_DIR: $(OPENVINO_REPO_DIR)
BUILD_ONETBB: $(BUILD_ONETBB)
INSTALL_ONETBB: $(INSTALL_ONETBB)
INSTALL_OPENCV: $(INSTALL_OPENCV)
PYTHON_EXEC: $(PYTHON_EXEC)
ONETBB_REPO_DIR: $(ONETBB_REPO_DIR)
OPENCV_REPO_DIR: $(OPENCV_REPO_DIR)
BUILD_OPENCV: $(BUILD_OPENCV)
INSTALL_OPENVINO: $(INSTALL_OPENVINO)
# install dependencies needed to build CPU plugin for ARM
sudo -E apt --assume-yes install scons crossbuild-essential-arm64
# Speed up build
sudo -E apt -y --no-install-recommends install unzip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
displayName: 'Install dependencies'
- script: |
set -e
/usr/local/bin/$(PYTHON_EXEC) -m pip install -U pip
/usr/local/bin/$(PYTHON_EXEC) -m pip install crossenv
/usr/local/bin/$(PYTHON_EXEC) -m crossenv $(INSTALL_PYTHON)/bin/$(PYTHON_EXEC) $(CROSSENV_DIR)
source $(CROSSENV_DIR)/bin/activate
build-pip3 install -U pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
cross-pip3 install -U pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
displayName: 'Create crossenv'
- task: CMake@1
inputs:
cmakeArgs: >
-GNinja
-G "Ninja Multi-Config"
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
@@ -194,15 +137,9 @@ jobs:
-DENABLE_TESTS=ON
-DENABLE_DATA=OFF
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DTHREADING=TBB
-DTBB_DIR=$(INSTALL_ONETBB)/lib/cmake/TBB
-DCMAKE_VERBOSE_MAKEFILE=ON
-DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules/arm_plugin
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC)
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO)
-S $(OPENVINO_REPO_DIR)
@@ -220,31 +157,6 @@ jobs:
- script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE) --target install
displayName: 'Install OpenVINO ARM plugin'
- script: |
source $(CROSSENV_DIR)/bin/activate
cmake \
-GNinja \
-DENABLE_PYTHON=ON \
-DENABLE_WHEEL=ON \
-DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake \
-DOpenVINODeveloperPackage_DIR=$(BUILD_OPENVINO) \
-DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO) \
-S $(OPENVINO_REPO_DIR)/src/bindings/python \
-B $(BUILD_OPENVINO_PYTHON)
deactivate
displayName: 'CMake OpenVINO python binding'
- script: cmake --build $(BUILD_OPENVINO_PYTHON) --parallel --config $(BUILD_TYPE)
env:
CCACHE_DIR: $(OPENVINO_CCACHE_DIR)
CCACHE_TEMPDIR: $(TMP_DIR)/ccache
CCACHE_BASEDIR: $(Pipeline.Workspace)
CCACHE_MAXSIZE: 50G
displayName: 'Build OpenVINO python binding'
- script: cmake --build $(BUILD_OPENVINO_PYTHON) --parallel --target install
displayName: 'Install OpenVINO python binding'
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: $(Build.ArtifactStagingDirectory)


@@ -59,7 +59,7 @@ jobs:
INSTALL_DIR: $(WORK_DIR)/install_pkg
SETUPVARS: $(INSTALL_DIR)/setupvars.sh
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -69,7 +69,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -123,12 +123,11 @@ jobs:
- task: CMake@1
inputs:
cmakeArgs: >
-GNinja
-G "Ninja Multi-Config"
-DENABLE_CPPLINT=OFF
-DENABLE_GAPI_PREPROCESSING=OFF
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_FASTER_BUILD=ON
-DENABLE_PROFILING_ITT=ON
-DSELECTIVE_BUILD=COLLECT
@@ -152,11 +151,10 @@ jobs:
- task: CMake@1
inputs:
cmakeArgs: >
-GNinja
-DSELECTIVE_BUILD=ON
-DSELECTIVE_BUILD_STAT=$(BUILD_DIR)/*.csv
-S $(REPO_DIR)
-B $(BUILD_DIR)
-S $(REPO_DIR)
displayName: 'CMake CC ON'
- script: cmake --build $(BUILD_DIR) --parallel --config $(BUILD_TYPE) --target openvino_intel_cpu_plugin openvino_ir_frontend


@@ -33,7 +33,7 @@ jobs:
SHARE_DIR: /mount/cinfsshare/onnxtestdata
CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_coverity
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -43,7 +43,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash
@@ -106,10 +106,9 @@ jobs:
inputs:
# Coverity has too many PARSE_ERROR errors with ENABLE_FASTER_BUILD=ON. Disabling FASTER_BUILD.
cmakeArgs: >
-GNinja
-G "Ninja Multi-Config"
-DENABLE_CPPLINT=OFF
-DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-DENABLE_FASTER_BUILD=OFF
-DENABLE_STRICT_DEPENDENCIES=OFF
-DBUILD_nvidia_plugin=OFF


@@ -127,7 +127,7 @@ jobs:
python3 -m pip install -r /root/repos/openvino/src/bindings/python/requirements.txt &&
cmake -GNinja \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DDENABLE_CPPLINT=OFF \
-DENABLE_CPPLINT=OFF \
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
-DOPENVINO_EXTRA_MODULES=/root/repos/openvino_contrib/modules/nvidia_plugin \
-DENABLE_INTEL_CPU=OFF \


@@ -56,7 +56,7 @@ jobs:
ONNXRUNTIME_UTILS: $(REPO_DIR)/.ci/azure/ci_utils/onnxruntime
ONNXRUNTIME_BUILD_DIR: $(ONNXRUNTIME_REPO_DIR)/build
LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
OV_PYTHON_VERSION: 3.10.10 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
OV_PYTHON_VERSION: 3.11.2 # Full version of Python its required for LD_LIBRARY_PATH. More details https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
steps:
- task: UsePythonVersion@0
@@ -66,7 +66,7 @@ jobs:
disableDownloadFromRegistry: false
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- bash: |
#!/bin/bash


@@ -73,11 +73,11 @@ jobs:
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.10'
versionSpec: '3.11.2'
addToPath: true
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- script: |
@@ -113,10 +113,6 @@ jobs:
lfs: 'true'
path: testdata
- task: UsePythonVersion@0
inputs:
versionSpec: '3.10'
- script: |
brew install cython
brew install automake
@@ -127,7 +123,8 @@ jobs:
- script: |
export PATH="/usr/local/opt/cython/bin:$PATH"
cmake -GNinja \
cmake \
-G Ninja \
-DENABLE_CPPLINT=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \


@@ -73,7 +73,7 @@ jobs:
INSTALL_DIR: $(WORK_DIR)\install_pkg
INSTALL_TEST_DIR: $(INSTALL_DIR)\tests
SETUPVARS: $(INSTALL_DIR)\setupvars.bat
PYTHON_DIR: C:\hostedtoolcache\windows\Python\3.10.7\x64
PYTHON_DIR: C:\hostedtoolcache\windows\Python\3.11.2\x64
CMAKE_VERSION: 3.24.0
CMAKE_CMD: $(WORK_DIR)\cmake-$(CMAKE_VERSION)-windows-x86_64\cmake-$(CMAKE_VERSION)-windows-x86_64\bin\cmake.exe
OV_CMAKE_TOOLCHAIN_FILE: $(REPO_DIR)\cmake\toolchains\mt.runtime.win32.toolchain.cmake
@@ -84,26 +84,26 @@ jobs:
- script: |
rd /Q /S $(WORK_DIR) & mkdir $(WORK_DIR)
rd /Q /S $(BUILD_DIR) & mkdir $(BUILD_DIR)
rd /Q /S $(WORK_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.10.7
rd /Q /S $(BUILD_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.10.7\x64
rd /Q /S $(WORK_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.11.2
rd /Q /S $(BUILD_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.11.2\x64
rd /Q /S $(BUILD_SAMPLES_DIR) & mkdir $(BUILD_SAMPLES_DIR)
rd /Q /S $(BUILD_SAMPLES_TESTS_DIR) & mkdir $(BUILD_SAMPLES_TESTS_DIR)
displayName: 'Make dir'
- script: curl -O https://www.python.org/ftp/python/3.10.7/python-3.10.7-amd64.exe
- script: curl -O https://www.python.org/ftp/python/3.11.2/python-3.11.2-amd64.exe
displayName: 'Download Python'
workingDirectory: $(WORK_DIR)
- script: |
python-3.10.7-amd64.exe /passive InstallAllUsers=0 Include_launcher=0 TargetDir=C:\hostedtoolcache\windows\Python\3.10.7\x64
cp C:\hostedtoolcache\windows\Python\3.8.2\x64.complete C:\hostedtoolcache\windows\Python\3.10.7\x64.complete
python-3.11.2-amd64.exe /passive InstallAllUsers=0 Include_launcher=0 TargetDir=C:\hostedtoolcache\windows\Python\3.11.2\x64
cp C:\hostedtoolcache\windows\Python\3.8.2\x64.complete C:\hostedtoolcache\windows\Python\3.11.2\x64.complete
displayName: 'Install Python'
workingDirectory: $(WORK_DIR)
- task: UsePythonVersion@0
displayName: 'Use Python'
inputs:
versionSpec: '3.10'
versionSpec: '3.11.2'
disableDownloadFromRegistry: true
- script: |
@@ -142,7 +142,8 @@ jobs:
python -m pip install -r $(REPO_DIR)\src\bindings\python\wheel\requirements-dev.txt
python -m pip install -r $(REPO_DIR)\src\bindings\python\requirements.txt
rem For running Paddle frontend unit tests
python -m pip install -r $(REPO_DIR)\src\frontends\paddle\tests\requirements.txt
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
#python -m pip install -r $(REPO_DIR)\src\frontends\paddle\tests\requirements.txt
rem For running ONNX frontend unit tests
python -m pip install -r $(REPO_DIR)\src\frontends\onnx\tests\requirements.txt
rem For running TensorFlow frontend unit tests
@@ -165,21 +166,21 @@ jobs:
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" ^
call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) ^
-G "Ninja Multi-Config" ^
-DENABLE_CPPLINT=OFF ^
-DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) ^
-DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) ^
-DENABLE_FASTER_BUILD=ON ^
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
-DENABLE_TESTS=ON ^
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^
-DENABLE_STRICT_DEPENDENCIES=OFF ^
-DENABLE_PYTHON=ON ^
-DBUILD_nvidia_plugin=OFF ^
-DCUSTOM_OPERATIONS="calculate_grid;complex_mul;fft;grid_sample;sparse_conv;sparse_conv_transpose" ^
-DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.10.7\x64\python.exe" ^
-DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.10.7\x64\include" ^
-DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.10.7\x64\libs\python310.lib" ^
-DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.11.2\x64\python.exe" ^
-DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.11.2\x64\include" ^
-DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.11.2\x64\libs\python311.lib" ^
-DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules ^
-DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^
-DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^


@@ -65,11 +65,11 @@ jobs:
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.10'
versionSpec: '3.11.2'
addToPath: true
architecture: 'x64'
githubToken: $(auth_token)
displayName: Setup Python 3.10
displayName: Setup Python 3.11
name: setupPython
- script: |
@@ -78,6 +78,8 @@ jobs:
python --version
where java
java -version
where cmake
cmake --version
wmic computersystem get TotalPhysicalMemory
wmic cpu list
wmic logicaldisk get description,name
@@ -110,7 +112,8 @@ jobs:
- script: |
set PATH=$(WORK_DIR)\ninja-win;%PATH%
call "$(MSVS_VARS_PATH)" && cmake -GNinja ^
call "$(MSVS_VARS_PATH)" && cmake ^
-G Ninja ^
-DENABLE_CPPLINT=OFF ^
-DENABLE_GAPI_PREPROCESSING=OFF ^
-DENABLE_FASTER_BUILD=ON ^
@@ -145,12 +148,12 @@ jobs:
displayName: 'List csv files'
- script: |
call "$(MSVS_VARS_PATH)" && cmake -G"Visual Studio 16 2019" ^
call "$(MSVS_VARS_PATH)" && cmake ^
-G "Visual Studio 16 2019" ^
-DVERBOSE_BUILD=ON ^
-DENABLE_CPPLINT=OFF ^
-DENABLE_GAPI_PREPROCESSING=OFF ^
-DENABLE_FASTER_BUILD=ON ^
-DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
-DENABLE_PROFILING_ITT=OFF ^
-DSELECTIVE_BUILD=ON ^
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^


@@ -1,5 +1,5 @@
---
name: Bug
name: Bug
about: Create a report to help us improve
title: "[Bug]"
labels: bug, support_request
@@ -8,19 +8,28 @@ assignees: ''
---
##### System information (version)
<!-- Example
- OpenVINO => 2020.4
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2017
- Problem classification: Model Conversion
<!-- Please use this template to submit a new issue and provide all the necessary information to expedite the response.
Example
- OpenVINO Source => Runtime /pip install / GitHub
- OpenVINO Version => Version 2022.3 / Github Master Branch / tag 2023.0
- Operating System / Platform => Windows 64 Bit / Ubuntu 20
- Compiler => Visual Studio 2017 / Cmake
- Problem classification: Model Conversion /Accuracy/TensorFlow FE
- Device use: CPU / GPU / HDDL
- Framework: TensorFlow (if applicable)
- Model name: ResNet50 (if applicable)
- Model name: ResNet50 and the link to pre-train modal (if applicable)
Please provide us with the link to your model or attach .zip file.
-->
- OpenVINO=> :grey_question:
- OpenVINO Source=> :grey_question:
- OpenVINO Version=> :grey_question:
- Operating System / Platform => :grey_question:
- Compiler => :grey_question:
- Problem classification => :grey_question:
- Device use: => :grey_question:
- Framework => :grey_question:
- Model name => :grey_question:
##### Detailed description
<!-- your description -->

.gitmodules

@@ -69,3 +69,6 @@
[submodule "thirdparty/snappy"]
path = thirdparty/snappy
url = https://github.com/google/snappy.git
[submodule "ARMComputeLibrary"]
path = src/plugins/intel_cpu/thirdparty/ComputeLibrary
url = https://github.com/ARM-software/ComputeLibrary.git


@@ -2,11 +2,12 @@
<img src="docs/img/openvino-logo-purple-black.png" width="400px">
[![Stable release](https://img.shields.io/badge/version-2022.2-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.2.0)
[![Stable release](https://img.shields.io/badge/version-2022.3-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2022.3.0)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
![Azure DevOps builds (branch)](https://img.shields.io/azure-devops/build/openvinoci/b2bab62f-ab2f-4871-a538-86ea1be7d20f/13?label=Public%20CI)
[![PyPI Status](https://badge.fury.io/py/openvino.svg)](https://badge.fury.io/py/openvino)
[![Anaconda Status](https://anaconda.org/conda-forge/openvino/badges/version.svg)](https://anaconda.org/conda-forge/openvino/badges/version.svg)
[![PyPI Downloads](https://pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
</div>


@@ -53,7 +53,7 @@ if(THREADING STREQUAL "OMP")
update_deps_cache(OMP "${OMP}" "Path to OMP root folder")
debug_message(STATUS "intel_omp=" ${OMP})
ie_cpack_add_component(omp HIDDEN)
ov_cpack_add_component(omp HIDDEN)
file(GLOB_RECURSE source_list "${OMP}/*${CMAKE_SHARED_LIBRARY_SUFFIX}*")
install(FILES ${source_list}
DESTINATION ${OV_CPACK_RUNTIMEDIR}
@@ -96,6 +96,7 @@ function(ov_download_tbb)
if(WIN32 AND X86_64)
# TODO: add target_path to be platform specific as well, to avoid following if
# build oneTBB 2021.2.1 with Visual Studio 2019 (MSVC 14.21)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "oneapi-tbb-2021.2.1-win.zip"
TARGET_PATH "${TEMP}/tbb"
@@ -108,7 +109,8 @@ function(ov_download_tbb)
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "f42d084224cc2d643314bd483ad180b081774608844000f132859fca3e9bf0ce")
elseif(LINUX AND X86_64)
elseif(LINUX AND X86_64 AND OV_GLIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin.tgz"
TARGET_PATH "${TEMP}/tbb"
@@ -122,12 +124,37 @@ function(ov_download_tbb)
ENVIRONMENT "TBBROOT"
SHA256 "321261ff2eda6d4568a473cb883262bce77a93dac599f7bd65d2918bdee4d75b")
elseif(APPLE AND X86_64)
# build oneTBB 2021.2.1 with OS version 11.4
RESOLVE_DEPENDENCY(TBB
ARCHIVE_MAC "oneapi-tbb-2021.2.1-mac.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "c57ce4b97116cd3093c33e6dcc147fb1bbb9678d0ee6c61a506b2bfe773232cb"
USE_NEW_LOCATION TRUE)
elseif(WIN32 AND AARCH64)
# build oneTBB 2021.2.1 with Visual Studio 2022 (MSVC 14.35)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "oneapi-tbb-2021.2.1-win-arm64.zip"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "09fe7f5e7be589aa34ccd20fdfd7cad9e0afa89d1e74ecdb008a75d0af71d6e1"
USE_NEW_LOCATION TRUE)
elseif(LINUX AND AARCH64 AND OV_GLIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin-arm64.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "6b87194a845aa9314f3785d842e250d934e545eccc4636655c7b27c98c302c0c"
USE_NEW_LOCATION TRUE)
elseif(APPLE AND AARCH64)
# build oneTBB 2021.2.1 with export MACOSX_DEPLOYMENT_TARGET=11.0
RESOLVE_DEPENDENCY(TBB
ARCHIVE_MAC "oneapi-tbb-2021.2.1-mac-arm64.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "15d46ef19501e4315a5498af59af873dbf8180e9a3ea55253ccf7f0c0bb6f940"
USE_NEW_LOCATION TRUE)
else()
message(WARNING "Prebuilt TBB is not available on current platform")
endif()


@@ -201,7 +201,7 @@ macro(ov_add_frontend)
${frontend_root_dir}/src
${CMAKE_CURRENT_BINARY_DIR})
ie_add_vs_version_file(NAME ${TARGET_NAME}
ov_add_vs_version_file(NAME ${TARGET_NAME}
FILEDESCRIPTION ${OV_FRONTEND_FILEDESCRIPTION})
target_link_libraries(${TARGET_NAME} PUBLIC openvino::runtime)
@@ -273,7 +273,7 @@ macro(ov_add_frontend)
set(dev_component "${OV_CPACK_COMP_CORE_DEV}")
# TODO: whether we need to do it configuralbe on Windows installer?
ie_cpack_add_component(${lib_component} HIDDEN)
ov_cpack_add_component(${lib_component} HIDDEN)
if(OV_FRONTEND_LINKABLE_FRONTEND)
set(export_set EXPORT OpenVINOTargets)


@@ -18,7 +18,7 @@ function(ov_native_compile_external_project)
set(multiValueArgs CMAKE_ARGS NATIVE_TARGETS)
cmake_parse_arguments(ARG "" "${oneValueRequiredArgs};${oneValueOptionalArgs}" "${multiValueArgs}" ${ARGN})
if(YOCTO_AARCH64)
if(YOCTO_AARCH64 OR EMSCRIPTEN)
# need to unset several variables which can set env to cross-environment
foreach(var SDKTARGETSYSROOT CONFIG_SITE OECORE_NATIVE_SYSROOT OECORE_TARGET_SYSROOT
OECORE_ACLOCAL_OPTS OECORE_BASELIB OECORE_TARGET_ARCH OECORE_TARGET_OS CC CXX
@@ -31,10 +31,17 @@ function(ov_native_compile_external_project)
endif()
endforeach()
# set root path
if(YOCTO_AARCH64)
set(root_path "$ENV{OECORE_NATIVE_SYSROOT}")
elseif(EMSCRIPTEN)
set(root_path "$ENV{EMSDK}")
endif()
# filter out PATH from yocto locations
string(REPLACE ":" ";" custom_path "$ENV{PATH}")
foreach(path IN LISTS custom_path)
if(NOT path MATCHES "^$ENV{OECORE_NATIVE_SYSROOT}")
if(DEFINED root_path AND NOT path MATCHES "^${root_path}")
list(APPEND clean_path "${path}")
endif()
endforeach()
@@ -81,6 +88,17 @@ function(ov_native_compile_external_project)
endif()
endif()
if(compile_flags)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CXX_FLAGS=${compile_flags}" "-DCMAKE_C_FLAGS=${compile_flags}")
endif()
if(DEFINED CMAKE_CXX_COMPILER_LAUNCHER)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CXX_COMPILER_LAUNCHER=${CMAKE_CXX_COMPILER_LAUNCHER}")
endif()
if(DEFINED CMAKE_C_COMPILER_LAUNCHER)
list(APPEND ARG_CMAKE_ARGS "-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}")
endif()
ExternalProject_Add(${ARG_TARGET_NAME}
# Directory Options
SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}"
@@ -89,12 +107,9 @@ function(ov_native_compile_external_project)
INSTALL_DIR "${ARG_NATIVE_INSTALL_DIR}"
# Configure Step Options:
CMAKE_COMMAND
${NATIVE_CMAKE_COMMAND}
"${NATIVE_CMAKE_COMMAND}" -E env ${cmake_env}
"${NATIVE_CMAKE_COMMAND}"
CMAKE_ARGS
"-DCMAKE_CXX_COMPILER_LAUNCHER=${CMAKE_CXX_COMPILER_LAUNCHER}"
"-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}"
"-DCMAKE_CXX_FLAGS=${compile_flags}"
"-DCMAKE_C_FLAGS=${compile_flags}"
"-DCMAKE_POLICY_DEFAULT_CMP0069=NEW"
"-DCMAKE_INSTALL_PREFIX=${ARG_NATIVE_INSTALL_DIR}"
${ARG_CMAKE_ARGS}
@@ -102,7 +117,7 @@ function(ov_native_compile_external_project)
${ARG_NATIVE_SOURCE_SUBDIR}
# Build Step Options:
BUILD_COMMAND
${NATIVE_CMAKE_COMMAND}
"${NATIVE_CMAKE_COMMAND}"
--build "${CMAKE_CURRENT_BINARY_DIR}/build"
--config Release
--parallel


@@ -27,6 +27,8 @@ elseif(PYTHON_VERSION_MINOR EQUAL 9)
set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 10)
set(clang_version 14)
elseif(PYTHON_VERSION_MINOR EQUAL 11)
set(clang_version 14)
else()
message(WARNING "Cannot suggest clang package for python ${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR}")
endif()


@@ -66,11 +66,11 @@ endmacro()
ov_cpack_set_dirs()
#
# ie_cpack_add_component(NAME ...)
# ov_cpack_add_component(NAME ...)
#
# Wraps original `cpack_add_component` and adds component to internal IE list
#
function(ie_cpack_add_component name)
function(ov_cpack_add_component name)
if(NOT ${name} IN_LIST IE_CPACK_COMPONENTS_ALL)
cpack_add_component(${name} ${ARGN})


@@ -4,13 +4,13 @@
cmake_policy(SET CMP0007 NEW)
set(newContent " <plugin name=\"${IE_DEVICE_NAME}\" location=\"${IE_PLUGIN_LIBRARY_NAME}\">")
set(newContent " <plugin name=\"${OV_DEVICE_NAME}\" location=\"${OV_PLUGIN_LIBRARY_NAME}\">")
if(IE_PLUGIN_PROPERTIES)
if(OV_PLUGIN_PROPERTIES)
set(newContent "${newContent}
<properties>")
foreach(props IN LISTS IE_PLUGIN_PROPERTIES)
foreach(props IN LISTS OV_PLUGIN_PROPERTIES)
string(REPLACE ":" ";" props "${props}")
list(GET props 0 key)
@@ -27,4 +27,4 @@ endif()
set(newContent "${newContent}
</plugin>")
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")
file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -6,11 +6,15 @@ include(CMakeParseArguments)
set(PLUGIN_FILES "" CACHE INTERNAL "")
function(ie_plugin_get_file_name target_name library_name)
function(ov_plugin_get_file_name target_name library_name)
set(LIB_PREFIX "${CMAKE_SHARED_MODULE_PREFIX}")
set(LIB_SUFFIX "${IE_BUILD_POSTFIX}${CMAKE_SHARED_MODULE_SUFFIX}")
set("${library_name}" "${LIB_PREFIX}${target_name}${LIB_SUFFIX}" PARENT_SCOPE)
get_target_property(LIB_NAME ${target_name} OUTPUT_NAME)
if (LIB_NAME STREQUAL "LIB_NAME-NOTFOUND")
set(LIB_NAME ${target_name})
endif()
set("${library_name}" "${LIB_PREFIX}${LIB_NAME}${LIB_SUFFIX}" PARENT_SCOPE)
endfunction()
if(NOT TARGET ov_plugins)
@@ -18,7 +22,7 @@ if(NOT TARGET ov_plugins)
endif()
#
# ie_add_plugin(NAME <targetName>
# ov_add_plugin(NAME <targetName>
# DEVICE_NAME <deviceName>
# [PSEUDO_DEVICE]
# [PSEUDO_PLUGIN_FOR <actual_device>]
@@ -32,25 +36,25 @@ endif()
# [ADD_CLANG_FORMAT]
# )
#
function(ie_add_plugin)
function(ov_add_plugin)
set(options SKIP_INSTALL PSEUDO_DEVICE ADD_CLANG_FORMAT AS_EXTENSION SKIP_REGISTRATION)
set(oneValueArgs NAME DEVICE_NAME VERSION_DEFINES_FOR PSEUDO_PLUGIN_FOR)
set(multiValueArgs DEFAULT_CONFIG SOURCES OBJECT_LIBRARIES CPPLINT_FILTERS)
cmake_parse_arguments(IE_PLUGIN "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
cmake_parse_arguments(OV_PLUGIN "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT IE_PLUGIN_NAME)
if(NOT OV_PLUGIN_NAME)
message(FATAL_ERROR "Please, specify plugin target name")
endif()
if(NOT IE_PLUGIN_DEVICE_NAME)
message(FATAL_ERROR "Please, specify device name for ${IE_PLUGIN_NAME}")
if(NOT OV_PLUGIN_DEVICE_NAME)
message(FATAL_ERROR "Please, specify device name for ${OV_PLUGIN_NAME}")
endif()
# create and configure target
if(NOT IE_PLUGIN_PSEUDO_PLUGIN_FOR)
set(input_files ${IE_PLUGIN_SOURCES})
foreach(obj_lib IN LISTS IE_PLUGIN_OBJECT_LIBRARIES)
if(NOT OV_PLUGIN_PSEUDO_PLUGIN_FOR)
set(input_files ${OV_PLUGIN_SOURCES})
foreach(obj_lib IN LISTS OV_PLUGIN_OBJECT_LIBRARIES)
list(APPEND input_files $<TARGET_OBJECTS:${obj_lib}>)
add_cpplint_target(${obj_lib}_cpplint FOR_TARGETS ${obj_lib})
endforeach()
@@ -61,120 +65,122 @@ function(ie_add_plugin)
set(library_type STATIC)
endif()
add_library(${IE_PLUGIN_NAME} ${library_type} ${input_files})
add_library(${OV_PLUGIN_NAME} ${library_type} ${input_files})
if(IE_PLUGIN_VERSION_DEFINES_FOR)
ov_add_version_defines(${IE_PLUGIN_VERSION_DEFINES_FOR} ${IE_PLUGIN_NAME})
if(OV_PLUGIN_VERSION_DEFINES_FOR)
ov_add_version_defines(${OV_PLUGIN_VERSION_DEFINES_FOR} ${OV_PLUGIN_NAME})
endif()
target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE IMPLEMENT_INFERENCE_ENGINE_PLUGIN)
target_compile_definitions(${OV_PLUGIN_NAME} PRIVATE IMPLEMENT_INFERENCE_ENGINE_PLUGIN)
if(NOT BUILD_SHARED_LIBS)
# to distinguish functions creating plugin objects
target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE
IE_CREATE_PLUGIN=CreatePluginEngine${IE_PLUGIN_DEVICE_NAME}
OV_CREATE_PLUGIN=CreatePluginEngine${IE_PLUGIN_DEVICE_NAME})
if(IE_PLUGIN_AS_EXTENSION)
target_compile_definitions(${OV_PLUGIN_NAME} PRIVATE
IE_CREATE_PLUGIN=CreatePluginEngine${OV_PLUGIN_DEVICE_NAME}
OV_CREATE_PLUGIN=CreatePluginEngine${OV_PLUGIN_DEVICE_NAME})
if(OV_PLUGIN_AS_EXTENSION)
# to distinguish functions creating extensions objects
target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE
IE_CREATE_EXTENSION=CreateExtensionShared${IE_PLUGIN_DEVICE_NAME})
target_compile_definitions(${OV_PLUGIN_NAME} PRIVATE
IE_CREATE_EXTENSION=CreateExtensionShared${OV_PLUGIN_DEVICE_NAME})
endif()
endif()
ie_add_vs_version_file(NAME ${IE_PLUGIN_NAME}
FILEDESCRIPTION "OpenVINO Runtime ${IE_PLUGIN_DEVICE_NAME} device plugin library")
ov_add_vs_version_file(NAME ${OV_PLUGIN_NAME}
FILEDESCRIPTION "OpenVINO Runtime ${OV_PLUGIN_DEVICE_NAME} device plugin library")
target_link_libraries(${IE_PLUGIN_NAME} PRIVATE openvino::runtime openvino::runtime::dev)
target_link_libraries(${OV_PLUGIN_NAME} PRIVATE openvino::runtime openvino::runtime::dev)
if(WIN32)
set_target_properties(${IE_PLUGIN_NAME} PROPERTIES COMPILE_PDB_NAME ${IE_PLUGIN_NAME})
set_target_properties(${OV_PLUGIN_NAME} PROPERTIES COMPILE_PDB_NAME ${OV_PLUGIN_NAME})
endif()
if(CMAKE_COMPILER_IS_GNUCXX AND NOT CMAKE_CROSSCOMPILING)
target_link_options(${IE_PLUGIN_NAME} PRIVATE -Wl,--unresolved-symbols=ignore-in-shared-libs)
target_link_options(${OV_PLUGIN_NAME} PRIVATE -Wl,--unresolved-symbols=ignore-in-shared-libs)
endif()
set(custom_filter "")
foreach(filter IN LISTS IE_PLUGIN_CPPLINT_FILTERS)
foreach(filter IN LISTS OV_PLUGIN_CPPLINT_FILTERS)
string(CONCAT custom_filter "${custom_filter}" "," "${filter}")
endforeach()
if (IE_PLUGIN_ADD_CLANG_FORMAT)
add_clang_format_target(${IE_PLUGIN_NAME}_clang FOR_TARGETS ${IE_PLUGIN_NAME})
if (OV_PLUGIN_ADD_CLANG_FORMAT)
add_clang_format_target(${OV_PLUGIN_NAME}_clang FOR_TARGETS ${OV_PLUGIN_NAME})
else()
add_cpplint_target(${IE_PLUGIN_NAME}_cpplint FOR_TARGETS ${IE_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
add_cpplint_target(${OV_PLUGIN_NAME}_cpplint FOR_TARGETS ${OV_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
endif()
add_dependencies(ov_plugins ${IE_PLUGIN_NAME})
add_dependencies(ov_plugins ${OV_PLUGIN_NAME})
# install rules
if(NOT IE_PLUGIN_SKIP_INSTALL OR NOT BUILD_SHARED_LIBS)
string(TOLOWER "${IE_PLUGIN_DEVICE_NAME}" install_component)
if(NOT OV_PLUGIN_SKIP_INSTALL OR NOT BUILD_SHARED_LIBS)
string(TOLOWER "${OV_PLUGIN_DEVICE_NAME}" install_component)
if(IE_PLUGIN_PSEUDO_DEVICE)
if(OV_PLUGIN_PSEUDO_DEVICE)
set(plugin_hidden HIDDEN)
endif()
ie_cpack_add_component(${install_component}
DISPLAY_NAME "${IE_PLUGIN_DEVICE_NAME} runtime"
DESCRIPTION "${IE_PLUGIN_DEVICE_NAME} runtime"
ov_cpack_add_component(${install_component}
DISPLAY_NAME "${OV_PLUGIN_DEVICE_NAME} runtime"
DESCRIPTION "${OV_PLUGIN_DEVICE_NAME} runtime"
${plugin_hidden}
DEPENDS ${OV_CPACK_COMP_CORE})
if(BUILD_SHARED_LIBS)
install(TARGETS ${IE_PLUGIN_NAME}
install(TARGETS ${OV_PLUGIN_NAME}
LIBRARY DESTINATION ${OV_CPACK_PLUGINSDIR}
COMPONENT ${install_component})
install(TARGETS ${IE_PLUGIN_NAME}
install(TARGETS ${OV_PLUGIN_NAME}
LIBRARY DESTINATION ${OV_CPACK_PLUGINSDIR}
COMPONENT ${install_component})
else()
ov_install_static_lib(${IE_PLUGIN_NAME} ${install_component})
ov_install_static_lib(${OV_PLUGIN_NAME} ${install_component})
endif()
endif()
endif()
# Enable for static build to generate correct plugins.hpp
if(NOT IE_PLUGIN_SKIP_REGISTRATION OR NOT BUILD_SHARED_LIBS)
if(NOT OV_PLUGIN_SKIP_REGISTRATION OR NOT BUILD_SHARED_LIBS)
# check that plugin with such name is not registered
foreach(plugin_entry IN LISTS PLUGIN_FILES)
string(REPLACE ":" ";" plugin_entry "${plugin_entry}")
list(GET plugin_entry -1 library_name)
list(GET plugin_entry 0 plugin_name)
if(plugin_name STREQUAL "${IE_PLUGIN_DEVICE_NAME}" AND
NOT library_name STREQUAL ${IE_PLUGIN_NAME})
message(FATAL_ERROR "${IE_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
if(plugin_name STREQUAL "${OV_PLUGIN_DEVICE_NAME}" AND
NOT library_name STREQUAL ${OV_PLUGIN_NAME})
message(FATAL_ERROR "${OV_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
endif()
endforeach()
# append plugin to the list to register
list(APPEND PLUGIN_FILES "${IE_PLUGIN_DEVICE_NAME}:${IE_PLUGIN_NAME}")
list(APPEND PLUGIN_FILES "${OV_PLUGIN_DEVICE_NAME}:${OV_PLUGIN_NAME}")
set(PLUGIN_FILES "${PLUGIN_FILES}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_CONFIG "${IE_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${IE_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
set(${IE_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${IE_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
set(${OV_PLUGIN_DEVICE_NAME}_CONFIG "${OV_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
set(${OV_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${OV_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
set(${OV_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${OV_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
endif()
endfunction()
function(ov_add_plugin)
ie_add_plugin(${ARGN})
function(ie_add_plugin)
ov_add_plugin(${ARGN})
endfunction()
#
# ie_register_plugins_dynamic(MAIN_TARGET <main target name>)
# ov_register_in_plugins_xml(MAIN_TARGET <main target name>)
#
macro(ie_register_plugins_dynamic)
# Registers plugins in plugins.xml files for dynamic plugins build
#
macro(ov_register_in_plugins_xml)
set(options)
set(oneValueArgs MAIN_TARGET)
set(multiValueArgs)
cmake_parse_arguments(IE_REGISTER "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
cmake_parse_arguments(OV_REGISTER "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT IE_REGISTER_MAIN_TARGET)
if(NOT OV_REGISTER_MAIN_TARGET)
message(FATAL_ERROR "Please, define MAIN_TARGET")
endif()
# Unregister <device_name>.xml files for plugins from current build tree
set(config_output_file "$<TARGET_FILE_DIR:${IE_REGISTER_MAIN_TARGET}>/plugins.xml")
set(config_output_file "$<TARGET_FILE_DIR:${OV_REGISTER_MAIN_TARGET}>/plugins.xml")
foreach(name IN LISTS PLUGIN_FILES)
string(REPLACE ":" ";" name "${name}")
@@ -183,12 +189,12 @@ macro(ie_register_plugins_dynamic)
message(FATAL_ERROR "Unexpected error, please, contact developer of this script")
endif()
list(GET name 0 device_name)
add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
add_custom_command(TARGET ${OV_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "IE_PLUGIN_NAME=${device_name}"
-D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-D "OV_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "OV_PLUGIN_NAME=${device_name}"
-D "OV_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-P "${IEDevScripts_DIR}/plugins/unregister_plugin_cmake.cmake"
COMMENT
"Remove ${device_name} from the plugins.xml file"
@@ -209,15 +215,15 @@ macro(ie_register_plugins_dynamic)
# create plugin file
set(config_file_name "${CMAKE_BINARY_DIR}/plugins/${device_name}.xml")
ie_plugin_get_file_name(${name} library_name)
ov_plugin_get_file_name(${name} library_name)
add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
add_custom_command(TARGET ${OV_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "IE_CONFIG_OUTPUT_FILE=${config_file_name}"
-D "IE_DEVICE_NAME=${device_name}"
-D "IE_PLUGIN_PROPERTIES=${${device_name}_CONFIG}"
-D "IE_PLUGIN_LIBRARY_NAME=${library_name}"
-D "OV_CONFIG_OUTPUT_FILE=${config_file_name}"
-D "OV_DEVICE_NAME=${device_name}"
-D "OV_PLUGIN_PROPERTIES=${${device_name}_CONFIG}"
-D "OV_PLUGIN_LIBRARY_NAME=${library_name}"
-P "${IEDevScripts_DIR}/plugins/create_plugin_file.cmake"
COMMENT "Register ${device_name} device as ${library_name}"
VERBATIM)
@@ -227,17 +233,24 @@ macro(ie_register_plugins_dynamic)
# Combine all <device_name>.xml files into plugins.xml
if(ENABLE_PLUGINS_XML)
add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "CMAKE_SHARED_MODULE_PREFIX=${CMAKE_SHARED_MODULE_PREFIX}"
-D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-P "${IEDevScripts_DIR}/plugins/register_plugin_cmake.cmake"
COMMENT
"Registering plugins to plugins.xml config file"
VERBATIM)
add_custom_command(TARGET ${OV_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "CMAKE_SHARED_MODULE_PREFIX=${CMAKE_SHARED_MODULE_PREFIX}"
-D "OV_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "OV_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-P "${IEDevScripts_DIR}/plugins/register_plugin_cmake.cmake"
COMMENT
"Registering plugins to plugins.xml config file"
VERBATIM)
endmacro()
#
# ov_register_plugins()
#
macro(ov_register_plugins)
if(BUILD_SHARED_LIBS AND ENABLE_PLUGINS_XML)
ov_register_in_plugins_xml(${ARGN})
endif()
endmacro()
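For illustration, here is a minimal Python sketch (not part of the build scripts) of the bookkeeping that the registration step performs on a `plugins.xml` file: add a `<plugin>` entry for a device unless one with the same name is already present. The `register_plugin` helper and the sample file content are hypothetical; the real `register_plugin_cmake.cmake` script manipulates the file as CMake strings rather than through an XML parser.

```python
import xml.etree.ElementTree as ET

def register_plugin(xml_text: str, name: str, location: str) -> str:
    """Add a <plugin name=... location=.../> entry unless a plugin with
    the same device name is already registered."""
    root = ET.fromstring(xml_text)
    plugins = root.find("plugins")
    if any(p.get("name") == name for p in plugins.findall("plugin")):
        return xml_text  # device already registered, keep the file as-is
    ET.SubElement(plugins, "plugin", {"name": name, "location": location})
    return ET.tostring(root, encoding="unicode")

empty = "<ie><plugins></plugins></ie>"
updated = register_plugin(empty, "CPU", "libopenvino_intel_cpu_plugin.so")
```

Calling `register_plugin` a second time for the same device name is a no-op, matching the "check if config file already has this plugin" guard in the real script.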
@@ -245,24 +258,13 @@ endmacro()
# ie_register_plugins()
#
macro(ie_register_plugins)
if(BUILD_SHARED_LIBS)
ie_register_plugins_dynamic(${ARGN})
endif()
ov_register_plugins(${ARGN})
endmacro()
#
# ov_register_plugins()
# ov_target_link_plugins(<TARGET_NAME>)
#
macro(ov_register_plugins)
if(BUILD_SHARED_LIBS)
ie_register_plugins_dynamic(${ARGN})
endif()
endmacro()
#
# ie_target_link_plugins(<TARGET_NAME>)
#
function(ie_target_link_plugins TARGET_NAME)
function(ov_target_link_plugins TARGET_NAME)
if(BUILD_SHARED_LIBS)
return()
endif()
@@ -283,6 +285,10 @@ endfunction()
#
# ov_generate_plugins_hpp()
#
# Generates plugins.hpp file for:
# - static plugins build
# - cases when plugins.xml file is disabled
#
function(ov_generate_plugins_hpp)
set(device_mapping)
set(device_configs)
@@ -298,7 +304,7 @@ function(ov_generate_plugins_hpp)
list(GET name 0 device_name)
if(BUILD_SHARED_LIBS)
list(GET name 1 library_name)
ie_plugin_get_file_name(${library_name} library_name)
ov_plugin_get_file_name(${library_name} library_name)
list(APPEND device_mapping "${device_name}:${library_name}")
else()
if(${device_name}_PSEUDO_PLUGIN_FOR)
@@ -322,12 +328,16 @@ function(ov_generate_plugins_hpp)
endforeach()
# add plugins to libraries including ov_plugins.hpp
ie_target_link_plugins(openvino)
ov_target_link_plugins(openvino)
if(TARGET inference_engine_s)
ie_target_link_plugins(inference_engine_s)
ov_target_link_plugins(inference_engine_s)
endif()
set(ov_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/ov_plugins.hpp")
if(OV_GENERATOR_MULTI_CONFIG AND CMAKE_VERSION VERSION_GREATER_EQUAL 3.20)
set(ov_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/$<CONFIG>/ov_plugins.hpp")
else()
set(ov_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/ov_plugins.hpp")
endif()
set(plugins_hpp_in "${IEDevScripts_DIR}/plugins/plugins.hpp.in")
add_custom_command(OUTPUT "${ov_plugins_hpp}"
@@ -348,7 +358,7 @@ function(ov_generate_plugins_hpp)
VERBATIM)
# for some reason dependency on source files does not work
# so, we have to use explicit target and make it dependency for inference_engine
# so, we have to use explicit target and make it dependency for inference_engine_obj
add_custom_target(_ov_plugins_hpp DEPENDS ${ov_plugins_hpp})
add_dependencies(inference_engine_obj _ov_plugins_hpp)
endfunction()


@@ -8,18 +8,18 @@ set(file_content
</plugins>
</ie>")
if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${file_content}")
if(NOT EXISTS "${OV_CONFIG_OUTPUT_FILE}")
file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${file_content}")
endif()
# get list of plugin files
file(GLOB plugin_files "${IE_CONFIGS_DIR}/*.xml")
file(GLOB plugin_files "${OV_CONFIGS_DIR}/*.xml")
function(check_plugin_exists plugin_name outvar)
set(${outvar} OFF PARENT_SCOPE)
# check if config file already has this plugin
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content REGEX "plugin .*=\"")
file(STRINGS "${OV_CONFIG_OUTPUT_FILE}" content REGEX "plugin .*=\"")
foreach(line IN LISTS content)
string(REGEX MATCH "location=\"([^\"]*)\"" location "${line}")
@@ -44,7 +44,7 @@ endforeach()
# add plugin
set(newContent "")
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
file(STRINGS "${OV_CONFIG_OUTPUT_FILE}" content)
set(already_exists_in_xml OFF)
foreach(line IN LISTS content)
@@ -77,4 +77,4 @@ ${content}")
endif()
endforeach()
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")
file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -2,16 +2,16 @@
# SPDX-License-Identifier: Apache-2.0
#
if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
if(NOT EXISTS "${OV_CONFIG_OUTPUT_FILE}")
return()
endif()
# remove plugin file
file(REMOVE "${IE_CONFIGS_DIR}/${IE_PLUGIN_NAME}.xml")
file(REMOVE "${OV_CONFIGS_DIR}/${OV_PLUGIN_NAME}.xml")
# remove plugin
set(newContent "")
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
file(STRINGS "${OV_CONFIG_OUTPUT_FILE}" content)
set(skip_plugin OFF)
foreach(line IN LISTS content)
@@ -32,4 +32,4 @@ foreach(line IN LISTS content)
endif()
endforeach()
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")
file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${newContent}")
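The unregister step can be sketched the same way. This hypothetical Python helper drops a device's `<plugin>` entry, which is roughly what `unregister_plugin_cmake.cmake` does with its line-by-line string processing.

```python
import xml.etree.ElementTree as ET

def unregister_plugin(xml_text: str, name: str) -> str:
    """Drop the <plugin> entry for the given device name, if present."""
    root = ET.fromstring(xml_text)
    plugins = root.find("plugins")
    for plugin in list(plugins.findall("plugin")):
        if plugin.get("name") == name:
            plugins.remove(plugin)
    return ET.tostring(root, encoding="unicode")

xml = '<ie><plugins><plugin name="CPU" location="a.so" /></plugins></ie>'
```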


@@ -2,18 +2,18 @@
# SPDX-License-Identifier: Apache-2.0
#
set(IE_VS_VER_FILEVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_PRODUCTVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_FILEVERSION_STR "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}.${OpenVINO_VERSION_BUILD}")
set(OV_VS_VER_FILEVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(OV_VS_VER_PRODUCTVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
set(OV_VS_VER_FILEVERSION_STR "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}.${OpenVINO_VERSION_BUILD}")
set(IE_VS_VER_COMPANY_NAME_STR "Intel Corporation")
set(IE_VS_VER_PRODUCTVERSION_STR "${CI_BUILD_NUMBER}")
set(IE_VS_VER_PRODUCTNAME_STR "OpenVINO toolkit")
set(IE_VS_VER_COPYRIGHT_STR "Copyright (C) 2018-2021, Intel Corporation")
set(IE_VS_VER_COMMENTS_STR "https://docs.openvino.ai/")
set(OV_VS_VER_COMPANY_NAME_STR "Intel Corporation")
set(OV_VS_VER_PRODUCTVERSION_STR "${CI_BUILD_NUMBER}")
set(OV_VS_VER_PRODUCTNAME_STR "OpenVINO toolkit")
set(OV_VS_VER_COPYRIGHT_STR "Copyright (C) 2018-2021, Intel Corporation")
set(OV_VS_VER_COMMENTS_STR "https://docs.openvino.ai/")
#
# ie_add_vs_version_file(NAME <name>
# ov_add_vs_version_file(NAME <name>
# FILEDESCRIPTION <file description>
# [COMPANY_NAME <company name>]
# [FILEVERSION <file version>]
@@ -25,7 +25,7 @@ set(IE_VS_VER_COMMENTS_STR "https://docs.openvino.ai/")
# [FILEVERSION_QUAD <name>]
# [PRODUCTVERSION_QUAD <name>])
#
function(ie_add_vs_version_file)
function(ov_add_vs_version_file)
if(NOT WIN32 OR NOT BUILD_SHARED_LIBS)
return()
endif()
@@ -38,14 +38,14 @@ function(ie_add_vs_version_file)
get_target_property(target_type ${VS_VER_NAME} TYPE)
if(NOT target_type MATCHES "^(SHARED|MODULE)_LIBRARY$")
message(FATAL_ERROR "ie_add_vs_version_file can work only with dynamic libraries")
message(FATAL_ERROR "ov_add_vs_version_file can work only with dynamic libraries")
endif()
macro(_vs_ver_update_variable name)
if(VS_VER_NAME AND DEFINED IE_${VS_VER_NAME}_VS_VER_${name})
set(IE_VS_VER_${name} "${IE_${VS_VER_NAME}_VS_VER_${name}}")
if(VS_VER_NAME AND DEFINED OV_${VS_VER_NAME}_VS_VER_${name})
set(OV_VS_VER_${name} "${OV_${VS_VER_NAME}_VS_VER_${name}}")
elseif(VS_VER_${name})
set(IE_VS_VER_${name} "${VS_VER_${name}}")
set(OV_VS_VER_${name} "${VS_VER_${name}}")
endif()
endmacro()
@@ -53,10 +53,10 @@ function(ie_add_vs_version_file)
_vs_ver_update_variable(PRODUCTVERSION_QUAD)
macro(_vs_ver_update_str_variable name)
if(VS_VER_NAME AND DEFINED IE_${VS_VER_NAME}_VS_VER_${name})
set(IE_VS_VER_${name}_STR "${IE_${VS_VER_NAME}_VS_VER_${name}}")
if(VS_VER_NAME AND DEFINED OV_${VS_VER_NAME}_VS_VER_${name})
set(OV_VS_VER_${name}_STR "${OV_${VS_VER_NAME}_VS_VER_${name}}")
elseif(VS_VER_${name})
set(IE_VS_VER_${name}_STR "${VS_VER_${name}}")
set(OV_VS_VER_${name}_STR "${VS_VER_${name}}")
endif()
endmacro()
@@ -69,8 +69,8 @@ function(ie_add_vs_version_file)
_vs_ver_update_str_variable(PRODUCTVERSION)
_vs_ver_update_str_variable(COMMENTS)
set(IE_VS_VER_ORIGINALFILENAME_STR "${CMAKE_SHARED_LIBRARY_PREFIX}${VS_VER_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(IE_VS_VER_INTERNALNAME_STR ${VS_VER_NAME})
set(OV_VS_VER_ORIGINALFILENAME_STR "${CMAKE_SHARED_LIBRARY_PREFIX}${VS_VER_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(OV_VS_VER_INTERNALNAME_STR ${VS_VER_NAME})
set(vs_version_output "${CMAKE_CURRENT_BINARY_DIR}/vs_version.rc")
configure_file("${IEDevScripts_DIR}/vs_version/vs_version.rc.in" "${vs_version_output}" @ONLY)
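The final `configure_file(... @ONLY)` call expands `@VAR@` placeholders in `vs_version.rc.in` with the `OV_VS_VER_*` values set above. A rough Python equivalent of that substitution (the `substitute_at_vars` helper is hypothetical) looks like:

```python
import re

def substitute_at_vars(template: str, variables: dict) -> str:
    """Replace @VAR@ placeholders the way configure_file(... @ONLY) does;
    undefined variables expand to an empty string."""
    return re.sub(r"@([A-Za-z_][A-Za-z0-9_]*)@",
                  lambda m: str(variables.get(m.group(1), "")),
                  template)

line = 'VALUE "ProductName", "@OV_VS_VER_PRODUCTNAME_STR@"'
rendered = substitute_at_vars(line, {"OV_VS_VER_PRODUCTNAME_STR": "OpenVINO toolkit"})
```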


@@ -1,8 +1,8 @@
#include <winver.h>
VS_VERSION_INFO VERSIONINFO
FILEVERSION @IE_VS_VER_FILEVERSION_QUAD@
PRODUCTVERSION @IE_VS_VER_PRODUCTVERSION_QUAD@
FILEVERSION @OV_VS_VER_FILEVERSION_QUAD@
PRODUCTVERSION @OV_VS_VER_PRODUCTVERSION_QUAD@
FILEFLAGSMASK VS_FFI_FILEFLAGSMASK
#ifdef _DEBUG
FILEFLAGS 1
@@ -17,15 +17,15 @@ BEGIN
BEGIN
BLOCK "040904E4"
BEGIN
VALUE "CompanyName", "@IE_VS_VER_COMPANY_NAME_STR@\0"
VALUE "FileDescription", "@IE_VS_VER_FILEDESCRIPTION_STR@\0"
VALUE "FileVersion", "@IE_VS_VER_FILEVERSION_STR@\0"
VALUE "InternalName", "@IE_VS_VER_INTERNALNAME_STR@\0"
VALUE "LegalCopyright", "@IE_VS_VER_COPYRIGHT_STR@\0"
VALUE "OriginalFilename", "@IE_VS_VER_ORIGINALFILENAME_STR@\0"
VALUE "ProductName", "@IE_VS_VER_PRODUCTNAME_STR@\0"
VALUE "ProductVersion", "@IE_VS_VER_PRODUCTVERSION_STR@\0"
VALUE "Comments", "@IE_VS_VER_COMMENTS_STR@\0"
VALUE "CompanyName", "@OV_VS_VER_COMPANY_NAME_STR@\0"
VALUE "FileDescription", "@OV_VS_VER_FILEDESCRIPTION_STR@\0"
VALUE "FileVersion", "@OV_VS_VER_FILEVERSION_STR@\0"
VALUE "InternalName", "@OV_VS_VER_INTERNALNAME_STR@\0"
VALUE "LegalCopyright", "@OV_VS_VER_COPYRIGHT_STR@\0"
VALUE "OriginalFilename", "@OV_VS_VER_ORIGINALFILENAME_STR@\0"
VALUE "ProductName", "@OV_VS_VER_PRODUCTNAME_STR@\0"
VALUE "ProductVersion", "@OV_VS_VER_PRODUCTVERSION_STR@\0"
VALUE "Comments", "@OV_VS_VER_COMMENTS_STR@\0"
END
END
BLOCK "VarFileInfo"


@@ -6,7 +6,9 @@
# Common cmake options
#
ie_dependent_option (ENABLE_INTEL_CPU "CPU plugin for OpenVINO Runtime" ON "RISCV64 OR X86 OR X86_64" OFF)
ie_dependent_option (ENABLE_INTEL_CPU "CPU plugin for OpenVINO Runtime" ON "RISCV64 OR X86 OR X86_64 OR AARCH64 OR ARM" OFF)
ie_dependent_option (ENABLE_ARM_COMPUTE_CMAKE "Enable ARM Compute build via cmake" OFF "ENABLE_INTEL_CPU" OFF)
ie_option (ENABLE_TESTS "unit, behavior and functional tests" OFF)


@@ -156,17 +156,20 @@ macro(ov_cpack_settings)
set(auto_copyright "generic")
endif()
# intel-cpu
if(ENABLE_INTEL_CPU OR DEFINED openvino_arm_cpu_plugin_SOURCE_DIR)
if(ENABLE_INTEL_CPU)
# cpu
if(ENABLE_INTEL_CPU)
if(ARM OR AARCH64)
set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-arm-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU plugin")
set(cpu_copyright "arm_cpu")
elseif(X86 OR X86_64)
set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU plugin")
set(cpu_copyright "generic")
else()
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM CPU")
set(cpu_copyright "arm_cpu")
message(FATAL_ERROR "Unsupported CPU architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif()
set(CPACK_COMPONENT_CPU_DEPENDS "${OV_CPACK_COMP_CORE}")
set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
set(CPACK_DEBIAN_CPU_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm}")
_ov_add_plugin(cpu OFF)
endif()
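The architecture branch introduced here can be summarized as a small, hypothetical Python function; it mirrors only the package-name selection, not the full cpack configuration.

```python
def cpu_package_name(arch: str, ver: str) -> str:
    """Pick the Debian package name for the CPU plugin by architecture,
    mirroring the branch added in ov_cpack_settings: ARM builds get the
    arm-cpu name, x86 builds the intel-cpu name, anything else is fatal."""
    if arch in ("ARM", "AARCH64"):
        return f"libopenvino-arm-cpu-plugin-{ver}"
    if arch in ("X86", "X86_64"):
        return f"libopenvino-intel-cpu-plugin-{ver}"
    raise ValueError(f"Unsupported CPU architecture: {arch}")
```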


@@ -156,17 +156,20 @@ macro(ov_cpack_settings)
set(auto_copyright "generic")
endif()
# intel-cpu
if(ENABLE_INTEL_CPU OR DEFINED openvino_arm_cpu_plugin_SOURCE_DIR)
if(ENABLE_INTEL_CPU)
# cpu
if(ENABLE_INTEL_CPU)
if(ARM OR AARCH64)
set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-arm-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU plugin")
set(cpu_copyright "arm_cpu")
elseif(X86 OR X86_64)
set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU")
set(cpu_copyright "generic")
else()
set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM CPU")
set(cpu_copyright "arm_cpu")
message(FATAL_ERROR "Unsupported CPU architecture: ${CMAKE_SYSTEM_PROCESSOR}")
endif()
set(CPACK_RPM_CPU_PACKAGE_REQUIRES "${core_package}")
set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
_ov_add_package(plugin_packages cpu)
endif()


@@ -42,11 +42,12 @@ function(ov_model_convert SRC DST OUT)
endif()
set(full_out_name "${DST}/${rel_out_name}")
file(MAKE_DIRECTORY "${DST}/${rel_dir}")
if(ext STREQUAL ".prototxt")
# convert .prototxt models to .onnx binary
add_custom_command(OUTPUT ${full_out_name}
COMMAND ${CMAKE_COMMAND} -E make_directory
"${DST}/${rel_dir}"
COMMAND ${PYTHON_EXECUTABLE} ${onnx_gen_script}
"${SRC}/${in_file}" ${full_out_name}
DEPENDS ${onnx_gen_script} "${SRC}/${in_file}"
@@ -55,6 +56,8 @@ function(ov_model_convert SRC DST OUT)
WORKING_DIRECTORY "${model_source_dir}")
else()
add_custom_command(OUTPUT ${full_out_name}
COMMAND ${CMAKE_COMMAND} -E make_directory
"${DST}/${rel_dir}"
COMMAND "${CMAKE_COMMAND}" -E copy_if_different
"${SRC}/${in_file}" ${full_out_name}
DEPENDS ${onnx_gen_script} "${SRC}/${in_file}"
@@ -68,18 +71,24 @@ function(ov_model_convert SRC DST OUT)
set(${OUT} ${files} PARENT_SCOPE)
endfunction()
if(OV_GENERATOR_MULTI_CONFIG AND CMAKE_VERSION VERSION_GREATER_EQUAL 3.20)
set(test_model_zoo_output_dir "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/$<CONFIG>/test_model_zoo")
else()
set(test_model_zoo_output_dir "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo")
endif()
ov_model_convert("${CMAKE_CURRENT_SOURCE_DIR}/src/core/tests"
"${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo/core"
"${test_model_zoo_output_dir}/core"
core_tests_out_files)
set(rel_path "src/tests/functional/plugin/shared/models")
ov_model_convert("${OpenVINO_SOURCE_DIR}/${rel_path}"
"${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo/func_tests/models"
"${test_model_zoo_output_dir}/func_tests/models"
ft_out_files)
set(rel_path "src/frontends/onnx/tests/models")
ov_model_convert("${OpenVINO_SOURCE_DIR}/${rel_path}"
"${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo/onnx"
"${test_model_zoo_output_dir}/onnx"
onnx_fe_out_files)
if(ENABLE_TESTS)
@@ -87,11 +96,12 @@ if(ENABLE_TESTS)
${ft_out_files}
${onnx_fe_out_files})
if (ENABLE_OV_PADDLE_FRONTEND)
add_dependencies(test_model_zoo paddle_test_models)
endif()
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
#if (ENABLE_OV_PADDLE_FRONTEND)
# add_dependencies(test_model_zoo paddle_test_models)
#endif()
install(DIRECTORY "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo"
install(DIRECTORY "${test_model_zoo_output_dir}"
DESTINATION tests COMPONENT tests EXCLUDE_FROM_ALL)
set(TEST_MODEL_ZOO "./test_model_zoo" CACHE PATH "Path to test model zoo")
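A hedged sketch of the per-file decision `ov_model_convert` makes: `.prototxt` text models are regenerated as `.onnx` binaries via the generator script, everything else is copied only if different. The helper name is hypothetical.

```python
from pathlib import Path

def plan_model_conversion(src_files):
    """Decide, per source file, what ov_model_convert would do:
    generate an .onnx binary from a .prototxt text model, or copy
    the file unchanged into the test_model_zoo output directory."""
    return {
        f: ("generate_onnx" if Path(f).suffix == ".prototxt"
            else "copy_if_different")
        for f in src_files
    }
```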


@@ -18,7 +18,7 @@ Every deep learning workflow begins with obtaining a model. You can choose to pr
The approach to fully convert a model is considered the default choice, as it allows the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
Conversion is not required for ONNX, PaddlePaddle, and TensorFlow models (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
Conversion is not required for ONNX, PaddlePaddle, TensorFlow Lite and TensorFlow models (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:


@@ -19,7 +19,7 @@
OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for
TensorFlow, PyTorch, ONNX, TensorFlow Lite, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for
each of the supported frameworks. To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>`.
Custom operations, which are not included in the list, are not recognized by OpenVINO out-of-the-box. The need for custom operation may appear in two cases:
@@ -52,13 +52,13 @@ Mapping from Framework Operation
Mapping of custom operation is implemented differently, depending on model format used for import. You may choose one of the following:
1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX), PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
1. If a model is represented in the ONNX (including models exported from Pytorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
2. If a model is represented in the Caffe, Kaldi or MXNet formats, then :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.
Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle and TensorFlow) and legacy frontends (Caffe, Kaldi and Apache MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with ``read_model`` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
The existence of two approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle, TensorFlow Lite and TensorFlow) and legacy frontends (Caffe, Kaldi and Apache MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the ``read_model`` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on the framework frontend.
If you are implementing extensions for new ONNX, PaddlePaddle or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:
If you are implementing extensions for new ONNX, PaddlePaddle, TensorFlow Lite or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:
1. Implemented in C++ only.


@@ -2,32 +2,47 @@
@sphinxdirective
The goal of this chapter is to explain how to use Frontend extension classes to facilitate mapping of custom operations from framework model representation to OpenVINO representation. Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to understand entire flow.
The goal of this chapter is to explain how to use Frontend extension classes to facilitate
mapping of custom operations from framework model representation to OpenVINO representation.
Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to
understand the entire flow.
This API is applicable for new frontends only, which exist for ONNX, PaddlePaddle and TensorFlow. If a different model format is used, follow legacy :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` guide.
This API is applicable to new frontends only, which exist for ONNX, TensorFlow Lite, PaddlePaddle, and TensorFlow.
If a different model format is used, follow legacy
:doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`
guide.
.. note::
This documentation is written based on the `Template extension <https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new>`__, which demonstrates extension development details based on minimalistic ``Identity`` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compliable, to see how it works.
This documentation is written based on the `Template extension <https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new>`__,
which demonstrates extension development details based on minimalistic ``Identity``
operation that is a placeholder for your real custom operation. You can review the complete code,
which is fully compilable, to see how it works.
Single Operation Mapping with OpExtension
#########################################
#########################################
This section covers the case when a single operation in framework representation is mapped to a single operation in OpenVINO representation. This is called *one-to-one mapping*. There is ``OpExtension`` class that works well if all the following conditions are satisfied:
This section covers the case when a single operation in the framework representation is mapped to a single
operation in the OpenVINO representation. This is called *one-to-one mapping*. The ``OpExtension``
class works well if all the following conditions are satisfied:
1. Number of inputs to operation in the Framework representation is the same as in the OpenVINO representation.
2. Number of outputs is also the same in both representations.
3. Inputs can be indexed and are mapped in order correspondingly, e.g. input with index 0 in framework representation maps to input with index 0 in OpenVINO representation and so on.
3. Inputs can be indexed and are mapped in order correspondingly, e.g.
input with index 0 in framework representation maps to input with index 0 in OpenVINO representation and so on.
4. The same for outputs.
5. Each attribute in OpenVINO operation can be initialized from one of the attributes of original operation or by some predefined constant value. Value of copied attributes cannot contain expressions, value is accepted as-is, so type of a value should be compatible.
5. Each attribute in OpenVINO operation can be initialized from one of the attributes of original operation or by
some predefined constant value. Value of copied attributes cannot contain expressions, value is accepted as-is,
so type of a value should be compatible.
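As a rough illustration of conditions 1-5, a toy Python mapper (not the actual ``OpExtension`` implementation, all names hypothetical) that copies attributes as-is, applies an optional rename map, and lets predefined constants override copied values could look like:

```python
def map_attributes(fw_attrs, renames=None, constants=None):
    """Toy model of OpExtension attribute handling: copy attributes
    unchanged, optionally renaming framework attribute names to
    OpenVINO names, then override with predefined constant values."""
    renames = renames or {}
    constants = constants or {}
    ov_attrs = {renames.get(name, name): value
                for name, value in fw_attrs.items()}
    ov_attrs.update(constants)
    return ov_attrs
```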
.. note::
``OpExtension`` class is currently available for ONNX and TensorFlow frontends. PaddlePaddle frontend has named inputs and outputs for operation (not indexed) therefore OpExtension mapping is not applicable for this case.
The next example maps ONNX operation with type `Identity <https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity>`__ to OpenVINO template extension ``Identity`` class.
``OpExtension`` class is currently available for ONNX and TensorFlow frontends.
PaddlePaddle frontend has named inputs and outputs for operation (not indexed)
therefore OpExtension mapping is not applicable for this case.
The following example maps ONNX operation with the type of `Identity <https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity>`__
to OpenVINO template extension ``Identity`` class.
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
@@ -39,22 +54,32 @@ The next example maps ONNX operation with type `Identity <https://github.com/onn
The mapping doesn't involve any attributes, as operation ``Identity`` doesn't have them.
Extension objects, like just constructed ``extension`` can be used to add to the OpenVINO runtime just before the loading a model that contains custom operations:
Extension objects, such as the just-constructed ``extension``, can be added to the
OpenVINO runtime just before loading a model that contains custom operations:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_read_model]
Or extensions can be constructed in a separately compiled shared library. Separately compiled library can be used in Model Optimizer or ``benchmark_app``. Read about how to build and load such library in chapter “Create library with extensions” in :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>`.
Alternatively, extensions can be built into a separately compiled shared library,
which can then be used in Model Optimizer or ``benchmark_app``.
Read about how to build and load such a library in the "Create library with extensions" chapter of
:doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>`.
If operation have multiple inputs and/or outputs they will be mapped in order. The type of elements in input/output tensors should match expected types in the surrounding operations. For example, if custom operation produces ``f32`` data type then operation that consumes this output should also support ``f32``. Otherwise, model conversion fails with an error, there are no automatic type conversion happens.
If an operation has multiple inputs and/or outputs, they are mapped in order.
The type of elements in input/output tensors should match the expected types of the surrounding operations.
For example, if a custom operation produces the ``f32`` data type, the operation that consumes this output
should also support ``f32``. Otherwise, model conversion fails with an error, as no automatic type conversion is performed.
Converting to Standard OpenVINO Operation
+++++++++++++++++++++++++++++++++++++++++
``OpExtension`` class can be used when mapping to one of the operations from standard OpenVINO operation set is what you need and there is no class like ``TemplateExtension::Identity`` implemented.
``OpExtension`` class can be used when mapping to one of the operations from standard OpenVINO
operation set is what you need and there is no class like ``TemplateExtension::Identity`` implemented.
Here is an example for a custom framework operation "MyRelu". Suppose it is mathematically equivalent to standard ``Relu`` that exists in OpenVINO operation set, but for some reason has type name "MyRelu". In this case you can directly say that "MyRelu" -> ``Relu`` mapping should be used:
Here is an example of a custom framework operation 'MyRelu'. Assume it is mathematically equivalent
to standard ``Relu`` that exists in the OpenVINO operation set, but for some reason has the type name of 'MyRelu'.
In this case, you can directly say that 'MyRelu' -> ``Relu`` mapping should be used:
.. tab-set::
@@ -73,26 +98,37 @@ Here is an example for a custom framework operation "MyRelu". Suppose it is
:fragment: [py_frontend_extension_MyRelu]
In the resulting converted OpenVINO model, “MyRelu” operation will be replaced by the standard operation ``Relu`` from the latest available OpenVINO operation set. Notice that when standard operation is used, it can be specified using just a type string (“Relu”) instead of using a ``ov::opset8::Relu`` class name as a template parameter for ``OpExtension``. This method is available for operations from the standard operation set only. For a user custom OpenVINO operation the corresponding class should be always specified as a template parameter as it was demonstrated with ``TemplateExtension::Identity``.
In the resulting converted OpenVINO model, “MyRelu” operation will be replaced by the standard operation
``Relu`` from the latest available OpenVINO operation set. Notice that when standard operation is used,
it can be specified using just a type string (“Relu”) instead of using a ``ov::opset8::Relu`` class name as a
template parameter for ``OpExtension``. This method is available for operations from the standard operation set only.
For a user custom OpenVINO operation the corresponding class should be always specified as a template parameter
as it was demonstrated with ``TemplateExtension::Identity``.
Attributes Mapping
Attribute Mapping
++++++++++++++++++
As described above, ``OpExtension`` is useful when attributes can be mapped one by one or initialized by a constant. If the set of attributes in framework representation and OpenVINO representation completely match by their names and types, nothing should be specified in OpExtension constructor parameters. The attributes are discovered and mapped automatically based on ``visit_attributes`` method that should be defined for any OpenVINO operation.
As described above, ``OpExtension`` is useful when attributes can be mapped one by one or initialized by a constant.
If the set of attributes in framework representation and OpenVINO representation completely match by their names and types,
nothing should be specified in OpExtension constructor parameters. The attributes are discovered and mapped
automatically based on ``visit_attributes`` method that should be defined for any OpenVINO operation.
Imagine you have CustomOperation class implementation that has two attributes with names ``attr1`` and ``attr2``:
Imagine you have a ``CustomOperation`` class implementation that has two attributes, named ``attr1`` and ``attr2``:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation]
And the original model in the framework representation also has an operation named “CustomOperation” with the same
``attr1`` and ``attr2`` attributes. Then with the following code:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_CustomOperation_as_is]
Both ``attr1`` and ``attr2`` are copied from the framework representation to the OpenVINO representation automatically.
If for some reason the attribute names are different but the values can still be copied “as-is”, you can pass an attribute
name mapping in the ``OpExtension`` constructor:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
@@ -100,7 +136,9 @@ both ``attr1`` and ``attr2`` are copied from framework representation to OpenVIN
Where ``fw_attr1`` and ``fw_attr2`` are names for corresponding attributes in framework operation representation.
If copying of an attribute is not what you need, ``OpExtension`` can also set an attribute to a predefined constant value.
For the same ``CustomOperation``, imagine you want to set ``attr2`` to the value 5 instead of copying it from ``fw_attr2``.
To achieve that, do the following:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
@@ -109,36 +147,60 @@ If copying of an attribute is not what you need, ``OpExtension`` also can set at
So the conclusion is that each attribute of the target OpenVINO operation should be initialized in one of three ways:

1. Automatically, by attribute name matching
2. By an explicit attribute name mapping
3. By a constant value

This is achieved by specifying maps as arguments for the ``OpExtension`` constructor.
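Outside OpenVINO, the three initialization strategies can be modeled with a small standalone sketch. This is a toy illustration only; the function and names below are hypothetical and not part of the OpenVINO API:

```cpp
#include <map>
#include <string>

// Toy model of attribute resolution: a target attribute is initialized either
// via an explicit name mapping, from a constant default, or - failing both -
// copied as-is when a framework attribute with the same name exists.
std::map<std::string, int> resolve_attributes(
    const std::map<std::string, int>& fw_attrs,
    const std::map<std::string, std::string>& name_map,  // target name -> framework name
    const std::map<std::string, int>& defaults) {        // target name -> constant value
    std::map<std::string, int> result;
    for (const auto& entry : name_map)
        result[entry.first] = fw_attrs.at(entry.second);  // strategy 2: renamed copy
    for (const auto& entry : defaults)
        result[entry.first] = entry.second;               // strategy 3: constant value
    for (const auto& entry : fw_attrs)                    // strategy 1: name matching
        if (!result.count(entry.first))
            result[entry.first] = entry.second;
    return result;
}
```

For example, mapping ``attr1`` from ``fw_attr1`` while fixing ``attr2`` to 5 mirrors the two ``OpExtension`` constructor arguments described above.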
Mapping custom operations to frontends with OPENVINO_FRAMEWORK_MAP macro
---------------------------------------------------------------------------
.. note::
The solution below works only for the ONNX and TensorFlow frontends.
``OPENVINO_FRAMEWORK_MAP`` is a macro that should be used inside an OpenVINO operation's class definition
and that lets you specify the mapping between this operation and a frontend operation.
Let's consider the following example. Imagine you have an ONNX model with a ``CustomOp`` operation
(this operation has a ``mode`` attribute) and a TensorFlow model with a ``CustomOpV3`` operation
(this operation has an ``axis`` attribute), and both of them can be implemented with a single OpenVINO
operation ``CustomOp``, as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_headers]
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_CustomOp]
Let's take a closer look at the parameters this macro takes:
.. code-block:: cpp
OPENVINO_FRAMEWORK_MAP(framework, name, attributes_map, attributes_values)
- ``framework`` - framework name.
- ``name`` - the framework operation name. It's optional if the OpenVINO custom operation name
(that is the name that is passed as the first parameter to the ``OPENVINO_OP`` macro) is the
same as the framework operation name and both ``attributes_map`` and ``attributes_values`` are not provided.
- ``attributes_map`` - used to provide a mapping between OpenVINO operation attribute and
framework operation attribute. Contains key-value pairs, where key is an OpenVINO operation
attribute name and value is its corresponding framework operation attribute name.
This parameter is optional if the number of OpenVINO operation attributes and their names
match one-to-one with framework operation attributes.
- ``attributes_values`` - used to provide default values for OpenVINO operation attributes
that are not specified in ``attributes_map``. Contains key-value pairs, where key is an OpenVINO
operation attribute name and the value is this attribute value. This parameter cannot be provided
if ``attributes_map`` contains all of OpenVINO operation attributes or if ``attributes_map`` is not provided.
In the example above, ``OPENVINO_FRAMEWORK_MAP`` is used twice.
First, OpenVINO ``CustomOp`` is mapped to the ONNX ``CustomOp`` operation: the ``m_mode`` attribute is mapped to the ``mode``
attribute, while the ``m_axis`` attribute gets the default value ``-1``. Secondly, OpenVINO ``CustomOp`` is mapped
to the TensorFlow ``CustomOpV3`` operation: the ``m_axis`` attribute is mapped to the ``axis`` attribute, while the ``m_mode``
attribute gets the default value ``"linear"``.
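The effect of the two mappings can be sketched as plain C++, independent of the macro's real expansion. The function and struct names below are illustrative assumptions, not OpenVINO code:

```cpp
#include <map>
#include <string>

// Toy view of what the two OPENVINO_FRAMEWORK_MAP entries resolve to:
// each entry fixes, per framework, where every OpenVINO attribute comes from.
struct Resolved {
    std::string mode;
    int axis;
};

Resolved resolve(const std::string& framework,
                 const std::map<std::string, std::string>& str_attrs,
                 const std::map<std::string, int>& int_attrs) {
    if (framework == "onnx")  // m_mode <- "mode" attribute, m_axis defaults to -1
        return {str_attrs.at("mode"), -1};
    if (framework == "tf")    // m_axis <- "axis" attribute, m_mode defaults to "linear"
        return {"linear", int_attrs.at("axis")};
    return {"", 0};           // unknown framework: nothing mapped
}
```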
The last step is to register this custom operation as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_add_extension]
@@ -146,16 +208,28 @@ The last step is to register this custom operation by following:
Mapping to Multiple Operations with ConversionExtension
#######################################################
Previous sections cover the case when a single operation is mapped to a single operation with optional
adjustment in names and attribute values. That is likely enough for your own custom operation with existing
C++ kernel implementation. In this case your framework representation and OpenVINO representation for the
operation are under your control and inputs/outputs/attributes can be aligned to make ``OpExtension`` usable.
If one-to-one mapping is not possible, *decomposition to multiple operations* should be considered.
It is achieved by using the more verbose and less automated ``ConversionExtension`` class.
It enables writing arbitrary code to replace a single framework operation with multiple connected OpenVINO
operations, constructing a dependency graph of any complexity.
``ConversionExtension`` maps a single operation to a function which builds a graph using OpenVINO
operation classes. Follow chapter :ref:`Build a Model in OpenVINO Runtime <ov_ug_build_model>` to
learn how to use OpenVINO operation classes to build a fragment of model for replacement.
The next example illustrates using ``ConversionExtension`` for conversion of “ThresholdedRelu”
from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))``.
.. note::
``ThresholdedRelu`` is one of the standard ONNX operators, supported by the ONNX frontend
natively out-of-the-box. Here we are re-implementing it to illustrate how you can add
similar support for your custom operation instead of ``ThresholdedRelu``.
.. tab-set::
@@ -173,7 +247,6 @@ The next example illustrates using ``ConversionExtension`` for conversion of “
:language: python
:fragment: [py_frontend_extension_ThresholdedReLU_header]
.. tab-set::
.. tab-item:: C++
@@ -191,13 +264,15 @@ The next example illustrates using ``ConversionExtension`` for conversion of “
:fragment: [py_frontend_extension_ThresholdedReLU]
To access an original framework operation attribute value and connect to its inputs, a
``node`` object of type ``NodeContext`` is used. It has two main methods:
* ``NodeContext::get_input`` to get input with a given index,
* ``NodeContext::get_attribute`` to get attribute value with a given name.
The conversion function should return a vector of node outputs that are mapped to
corresponding outputs of the original framework operation in the same order.
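The decomposition formula itself can be checked numerically with a small standalone function. This is plain C++ independent of the OpenVINO graph API, shown only to make the ``Multiply(x, Convert(Greater(x, alpha), type=float))`` formula concrete:

```cpp
#include <vector>

// ThresholdedRelu(x, alpha) = Multiply(x, Convert(Greater(x, alpha), float)):
// elements not strictly greater than alpha are multiplied by 0.0f, the rest by 1.0f.
std::vector<float> thresholded_relu(const std::vector<float>& x, float alpha) {
    std::vector<float> y;
    y.reserve(x.size());
    for (float v : x)
        y.push_back(v * static_cast<float>(v > alpha));
    return y;
}
```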
@endsphinxdirective

View File

@@ -1,45 +1,74 @@
# Asynchronous Inference Request {#openvino_docs_ov_plugin_dg_async_infer_request}
@sphinxdirective
Asynchronous Inference Request runs an inference pipeline asynchronously in one or several task executors depending on a device pipeline structure.
OpenVINO Runtime Plugin API provides the base ov::IAsyncInferRequest class:
* The class has the ``m_pipeline`` field of ``std::vector<std::pair<std::shared_ptr<ov::threading::ITaskExecutor>, ov::threading::Task> >``, which contains pairs of an executor and executed task.
* All executors are passed as arguments to a class constructor and they are in the running state and ready to run tasks.
* The class has the ov::IAsyncInferRequest::stop_and_wait method, which waits for ``m_pipeline`` to finish in a class destructor. The method does not stop task executors and they are still in the running state, because they belong to the compiled model instance and are not destroyed.
AsyncInferRequest Class
#######################
OpenVINO Runtime Plugin API provides the base ov::IAsyncInferRequest class for a custom asynchronous inference request implementation:
.. doxygensnippet:: src/plugins/template/src/async_infer_request.hpp
:language: cpp
:fragment: [async_infer_request:header]
Class Fields
++++++++++++
* ``m_cancel_callback`` - a callback which allows to interrupt the execution
* ``m_wait_executor`` - a task executor that waits for a response from a device about device tasks completion
.. note::
If a plugin can work with several instances of a device, ``m_wait_executor`` must be device-specific. Otherwise, having a single task executor for several devices does not allow them to work in parallel.
AsyncInferRequest()
+++++++++++++++++++
The main goal of the ``AsyncInferRequest`` constructor is to define a device pipeline ``m_pipeline``. The example below demonstrates ``m_pipeline`` creation with the following stages:
* ``infer_preprocess_and_start_pipeline`` is a lightweight CPU task to submit tasks to a remote device.
* ``wait_pipeline`` is a CPU non-compute task that waits for a response from a remote device.
* ``infer_postprocess`` is a CPU compute task.
.. doxygensnippet:: src/plugins/template/src/async_infer_request.cpp
:language: cpp
:fragment: [async_infer_request:ctor]
The stages are distributed among two task executors in the following way:
* ``infer_preprocess_and_start_pipeline`` prepares input tensors and runs on ``m_request_executor``, which computes CPU tasks.
* You need at least two executors to overlap compute tasks of a CPU and a remote device the plugin works with. Otherwise, CPU and device tasks are executed serially one by one.
* ``wait_pipeline`` is sent to ``m_wait_executor``, which works with the device.
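The executor/task pairing behind ``m_pipeline`` can be modeled outside OpenVINO with a toy pipeline where each stage records which executor owns it. Executor names and the ``run_pipeline`` helper below are illustrative assumptions, not the plugin API:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy m_pipeline: pairs of (executor name, task). A real plugin submits each
// task to its executor's queue; here tasks run inline, in pipeline order, and
// the trace records which executor each stage was assigned to.
std::vector<std::string> run_pipeline(
    const std::vector<std::pair<std::string, std::function<void()>>>& pipeline) {
    std::vector<std::string> trace;
    for (const auto& stage : pipeline) {
        trace.push_back(stage.first);  // record the executor that owns this stage
        stage.second();                // execute the stage's task
    }
    return trace;
}
```

With two executor names, the preprocess and postprocess stages land on the request executor while the wait stage lands on the device-specific wait executor, which is what lets CPU and device work overlap across requests.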
.. note::
``m_callback_executor`` is also passed to the constructor and it is used in the base ov::IAsyncInferRequest class, which adds a pair of ``callback_executor`` and a callback function set by the user to the end of the pipeline.
~AsyncInferRequest()
++++++++++++++++++++
In the asynchronous request destructor, it is necessary to wait for a pipeline to finish. It can be done using the ov::IAsyncInferRequest::stop_and_wait method of the base class.
.. doxygensnippet:: src/plugins/template/src/async_infer_request.cpp
:language: cpp
:fragment: [async_infer_request:dtor]
cancel()
++++++++
The method allows you to cancel the infer request execution:
.. doxygensnippet:: src/plugins/template/src/async_infer_request.cpp
:language: cpp
:fragment: [async_infer_request:cancel]
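A minimal model of cancellation, assuming a flag that is set by the cancel callback and checked between pipeline stages. This is toy code, not the plugin's real mechanism:

```cpp
#include <atomic>
#include <functional>
#include <vector>

// Toy cancellation: a cancel request sets the flag; remaining pipeline stages
// observe it and are skipped. Returns the number of stages that actually ran.
int run_until_cancelled(const std::vector<std::function<void()>>& stages,
                        const std::atomic<bool>& cancelled) {
    int executed = 0;
    for (const auto& stage : stages) {
        if (cancelled.load())
            break;      // request was cancelled: stop the pipeline early
        stage();
        ++executed;
    }
    return executed;
}
```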
@endsphinxdirective

View File

@@ -1,69 +1,101 @@
# Build Plugin Using CMake {#openvino_docs_ov_plugin_dg_plugin_build}
@sphinxdirective
OpenVINO build infrastructure provides the OpenVINO Developer Package for plugin development.
OpenVINO Developer Package
##########################
To automatically generate the OpenVINO Developer Package, run the ``cmake`` tool during an OpenVINO build:
.. code-block:: bash
$ mkdir openvino-release-build
$ cd openvino-release-build
$ cmake -DCMAKE_BUILD_TYPE=Release ../openvino
Once the commands above are executed, the OpenVINO Developer Package is generated in the ``openvino-release-build`` folder. It consists of several files:
* ``OpenVINODeveloperPackageConfig.cmake`` - the main CMake script which imports targets and provides compilation flags and CMake options.
* ``OpenVINODeveloperPackageConfig-version.cmake`` - a file with a package version.
* ``targets_developer.cmake`` - an automatically generated file which contains all targets exported from the OpenVINO build tree. This file is included by ``OpenVINODeveloperPackageConfig.cmake`` to import the following targets:
* Libraries for plugin development:
* ``openvino::runtime`` - shared OpenVINO library
* ``openvino::runtime::dev`` - interface library with OpenVINO Developer API
* ``openvino::pugixml`` - static Pugixml library
* ``openvino::xbyak`` - interface library with Xbyak headers
* ``openvino::itt`` - static library with tools for performance measurement using Intel ITT
* Libraries for tests development:
* ``openvino::gtest``, ``openvino::gtest_main``, ``openvino::gmock`` - Google Tests framework libraries
* ``openvino::commonTestUtils`` - static library with common tests utilities
* ``openvino::funcTestUtils`` - static library with functional tests utilities
* ``openvino::unitTestUtils`` - static library with unit tests utilities
* ``openvino::ngraphFunctions`` - static library with the set of ``ov::Model`` builders
* ``openvino::funcSharedTests`` - static library with common functional tests
* ``openvino::ngraph_reference`` - static library with operation reference implementations.
.. note::
It is enough to run the ``cmake --build . --target ov_dev_targets`` command to build only the targets from the OpenVINO Developer Package.
Build Plugin using OpenVINO Developer Package
#############################################
To build a plugin source tree using the OpenVINO Developer Package, run the commands below:
.. code-block:: bash
$ mkdir template-plugin-release-build
$ cd template-plugin-release-build
$ cmake -DOpenVINODeveloperPackage_DIR=../openvino-release-build ../template-plugin
A common plugin consists of the following components:
1. Plugin code in the ``src`` folder
2. Code of tests in the ``tests`` folder
To build a plugin and its tests, run the following CMake scripts:
* Root ``CMakeLists.txt``, which finds the OpenVINO Developer Package using the ``find_package`` CMake command and adds the ``src`` and ``tests`` subdirectories with plugin sources and their tests respectively:
.. doxygensnippet:: src/plugins/template/CMakeLists.txt
:language: cpp
:fragment: [cmake:main]
.. note::
The default values of the ``ENABLE_TESTS``, ``ENABLE_FUNCTIONAL_TESTS`` options are shared via the OpenVINO Developer Package and they are the same as for the main OpenVINO build tree. You can override them during plugin build using the command below:
.. code-block:: bash
$ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DOpenVINODeveloperPackage_DIR=../openvino-release-build ../template-plugin
* ``src/CMakeLists.txt`` to build a plugin shared library from sources:
.. doxygensnippet:: src/plugins/template/src/CMakeLists.txt
:language: cpp
:fragment: [cmake:plugin]
.. note::
``openvino::...`` targets are imported from the OpenVINO Developer Package.
* ``tests/functional/CMakeLists.txt`` to build a set of functional plugin tests:
.. doxygensnippet:: src/plugins/template/tests/functional/CMakeLists.txt
:language: cpp
:fragment: [cmake:functional_tests]
.. note::
The ``openvino::funcSharedTests`` static library with common functional OpenVINO Plugin tests is imported via the OpenVINO Developer Package.
@endsphinxdirective

View File

@@ -1,89 +1,131 @@
# Compiled Model {#openvino_docs_ov_plugin_dg_compiled_model}
@sphinxdirective
ov::CompiledModel class functionality:
* Compile an ov::Model instance to a backend specific graph representation
* Create an arbitrary number of ov::InferRequest objects
* Hold some common resources shared between different instances of ov::InferRequest. For example:
* ov::ICompiledModel::m_task_executor task executor to implement asynchronous execution
* ov::ICompiledModel::m_callback_executor task executor to run an asynchronous inference request callback in a separate thread
CompiledModel Class
###################
OpenVINO Plugin API provides the interface ov::ICompiledModel which should be used as a base class for a compiled model. Based on that, a declaration of a compiled model class can look as follows:
.. doxygensnippet:: src/plugins/template/src/compiled_model.hpp
:language: cpp
:fragment: [compiled_model:header]
Class Fields
++++++++++++
The example class has several fields:
* ``m_request_id`` - Tracks a number of created inference requests, which is used to distinguish different inference requests during profiling via the Intel® Instrumentation and Tracing Technology (ITT) library.
* ``m_cfg`` - Defines a configuration a compiled model was compiled with.
* ``m_model`` - Keeps a reference to transformed ``ov::Model`` which is used in OpenVINO reference backend computations. Note, in case of other backends with backend specific graph representation ``m_model`` has different type and represents backend specific graph or just a set of computational kernels to perform an inference.
* ``m_loaded_from_cache`` - Indicates whether the model was loaded from the cache.
CompiledModel Constructor
+++++++++++++++++++++++++
This constructor accepts a generic representation of a model as an ov::Model and is compiled into a backend specific device graph:
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
:language: cpp
:fragment: [compiled_model:ctor]
The implementation ``compile_model()`` is fully device-specific.
compile_model()
+++++++++++++++
The function accepts a const shared pointer to an ``ov::Model`` object and applies OpenVINO passes using the ``transform_model()`` function, which defines a plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find out how to use and configure Low Precision Transformations in the :doc:`Low Precision Transformations <openvino_docs_OV_UG_lpt>` guide.
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
:language: cpp
:fragment: [compiled_model:compile_model]
.. note::
After all these steps, the backend specific graph is ready to create inference requests and perform inference.
export_model()
++++++++++++++
The implementation of the method should write all data to the ``model_stream``, which is required to import a backend specific graph later in the ``Plugin::import_model`` method:
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
:language: cpp
:fragment: [compiled_model:export_model]
create_sync_infer_request()
+++++++++++++++++++++++++++
The method creates a synchronous inference request and returns it.
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
:language: cpp
:fragment: [compiled_model:create_sync_infer_request]
While the public OpenVINO API has a single inference request interface, which can be executed in synchronous and asynchronous modes, a plugin library implementation has two separate classes:
* :doc:`Synchronous inference request <openvino_docs_ov_plugin_dg_infer_request>`, which defines pipeline stages and runs them synchronously in the ``infer`` method.
* :doc:`Asynchronous inference request <openvino_docs_ov_plugin_dg_async_infer_request>`, which is a wrapper for a synchronous inference request and can run a pipeline asynchronously. Depending on a device pipeline structure, it can have one or several stages:
* For single-stage pipelines, there is no need to define this method and create a class derived from ov::IAsyncInferRequest. For single stage pipelines, a default implementation of this method creates ov::IAsyncInferRequest wrapping a synchronous inference request and runs it asynchronously in the ``m_request_executor`` executor.
* For pipelines with multiple stages, such as performing some preprocessing on host, uploading input data to a device, running inference on a device, or downloading and postprocessing output data, schedule stages on several task executors to achieve better device use and performance. You can do it by creating a sufficient number of inference requests running in parallel. In this case, device stages of different inference requests are overlapped with preprocessing and postprocessing stages, giving better performance.
.. important::
It is up to you to decide how many task executors you need to optimally execute a device pipeline.
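A minimal sketch of the wrapping idea, using ``std::async`` in place of the plugin task executors (all names below are hypothetical, not the OpenVINO API):

```cpp
#include <cassert>
#include <future>
#include <vector>

// Hypothetical, simplified stand-ins for a plugin's synchronous request and
// its asynchronous wrapper; std::async plays the role of m_request_executor.
struct SyncRequest {
    std::vector<float> input;
    std::vector<float> output;

    void preprocess() {  // host-side stage
        for (auto& v : input) v *= 0.5f;
    }
    void infer() {       // "device" stage
        output = input;
        for (auto& v : output) v += 1.0f;
    }
};

// For a single-stage pipeline, the async wrapper simply runs the whole
// synchronous stage sequence on one task executor.
std::future<std::vector<float>> start_async(SyncRequest req) {
    return std::async(std::launch::async, [req]() mutable {
        req.preprocess();
        req.infer();
        return req.output;
    });
}
```

A multi-stage pipeline would instead schedule each stage on its own executor, so the device stage of one request can overlap with the host stages of another.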
create_infer_request()
++++++++++++++++++++++
The method creates an asynchronous inference request and returns it.
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
:language: cpp
:fragment: [compiled_model:create_infer_request]
get_property()
++++++++++++++
Returns a current value for a property with the name ``name``. The method extracts the configuration values a compiled model was compiled with.
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
:language: cpp
:fragment: [compiled_model:get_property]
This function is the only way to get configuration values when a model is imported and compiled by other developers and tools.
set_property()
++++++++++++++
The method allows setting compiled model specific properties.
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
:language: cpp
:fragment: [compiled_model:set_property]
get_runtime_model()
+++++++++++++++++++
The method returns the runtime model with backend specific information.
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
:language: cpp
:fragment: [compiled_model:get_runtime_model]
The next step in plugin library implementation is the :doc:`Synchronous Inference Request <openvino_docs_ov_plugin_dg_infer_request>` class.
@endsphinxdirective

# Synchronous Inference Request {#openvino_docs_ov_plugin_dg_infer_request}
@sphinxdirective
``InferRequest`` class functionality:
* Allocate input and output tensors needed for a backend-dependent network inference.
* Define functions for inference process stages (for example, ``preprocess``, ``upload``, ``infer``, ``download``, ``postprocess``). These functions can later be used to define an execution pipeline during :doc:`Asynchronous Inference Request <openvino_docs_ov_plugin_dg_async_infer_request>` implementation.
* Call inference stages one by one synchronously.
InferRequest Class
##################
OpenVINO Plugin API provides the interface ov::ISyncInferRequest which should be
used as a base class for a synchronous inference request implementation. Based on that, a declaration
of a synchronous request class can look as follows:
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.hpp
:language: cpp
:fragment: [infer_request:header]
Class Fields
++++++++++++
The example class has several fields:
* ``m_profiling_task`` - array of the ``std::array<openvino::itt::handle_t, numOfStages>`` type. Defines names for pipeline stages. Used to profile an inference pipeline execution with the Intel® instrumentation and tracing technology (ITT).
* ``m_durations`` - array of durations of each pipeline stage.
* backend-specific fields:
* ``m_backend_input_tensors`` - input backend tensors.
* ``m_backend_output_tensors`` - output backend tensors.
* ``m_executable`` - an executable object / backend computational graph.
* ``m_eval_context`` - an evaluation context to save backend states after the inference.
* ``m_variable_states`` - a vector of variable states.
InferRequest Constructor
++++++++++++++++++++++++
The constructor initializes helper fields and calls methods which allocate tensors:
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:ctor]
.. note::
Use inputs/outputs information from the compiled model to understand shape and element type of tensors, which you can set with ov::InferRequest::set_tensor and get with ov::InferRequest::get_tensor. A plugin uses these hints to determine its internal layouts and element types for input and output tensors if needed.
~InferRequest Destructor
++++++++++++++++++++++++
The destructor can contain plugin-specific logic to finish and destroy the infer request.
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:dtor]
set_tensors_impl()
+++++++++++++++++++
The method allows setting batched tensors if the plugin supports it.
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:set_tensors_impl]
query_state()
+++++++++++++
The method returns variable states from the model.
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:query_state]
infer()
+++++++
The method calls the actual pipeline stages synchronously. Inside the method, the plugin should check input/output tensors, move external tensors to the backend, and run the inference.
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:infer]
1. infer_preprocess()
----------------------
Below is the code of the ``infer_preprocess()`` method. The method checks user input/output tensors and demonstrates conversion from user tensor to backend specific representation:
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:infer_preprocess]
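For illustration, one common user-to-backend conversion is a layout repack. The helper below is hypothetical (not the ``Template`` plugin code) and only shows the general idea of copying a user NHWC tensor into an NCHW buffer a backend kernel may expect:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: repack an NHWC user tensor into NCHW layout.
// src index: ((n*H + h)*W + w)*C + c ; dst index: ((n*C + c)*H + h)*W + w
std::vector<float> nhwc_to_nchw(const std::vector<float>& src,
                                std::size_t N, std::size_t H,
                                std::size_t W, std::size_t C) {
    std::vector<float> dst(src.size());
    for (std::size_t n = 0; n < N; ++n)
        for (std::size_t c = 0; c < C; ++c)
            for (std::size_t h = 0; h < H; ++h)
                for (std::size_t w = 0; w < W; ++w)
                    dst[((n * C + c) * H + h) * W + w] =
                        src[((n * H + h) * W + w) * C + c];
    return dst;
}
```

If the user tensor already matches the backend representation, a plugin would wrap it as-is instead of copying.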
2. start_pipeline()
--------------------
Executes a pipeline synchronously using ``m_executable`` object:
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:start_pipeline]
3. wait_pipeline()
--------------------
Waits for a pipeline in case of asynchronous plugin execution:
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:wait_pipeline]
4. infer_postprocess()
----------------------
Converts backend specific tensors to the tensors passed by the user:
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:infer_postprocess]
get_profiling_info()
+++++++++++++++++++++
The method returns the profiling info which was measured during pipeline stages execution:
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:get_profiling_info]
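The bookkeeping behind such profiling can be sketched as follows (hypothetical types; a real plugin would use ITT tasks and the stages named by ``m_profiling_task``):

```cpp
#include <array>
#include <cassert>
#include <chrono>
#include <thread>

// Hypothetical sketch: each pipeline stage records its wall-clock duration
// into an array, which a get_profiling_info()-style query later reports.
enum Stage { Preprocess = 0, StartPipeline, WaitPipeline, Postprocess, NumStages };

struct StageTimer {
    std::array<std::chrono::microseconds, NumStages> m_durations{};  // zeroed

    template <typename F>
    void run_stage(Stage s, F&& f) {
        const auto start = std::chrono::steady_clock::now();
        f();  // execute the stage body
        m_durations[s] = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start);
    }
};
```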
cancel()
+++++++++
This plugin-specific method allows interrupting the synchronous execution from the AsyncInferRequest:
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
:language: cpp
:fragment: [infer_request:cancel]
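One way to implement such interruption, shown here as a sketch with hypothetical names, is an atomic flag set by the asynchronous wrapper and polled between pipeline stages:

```cpp
#include <atomic>
#include <cassert>
#include <vector>

// Hypothetical sketch: cancel() raises a flag; infer() checks it between
// stages and stops early, reporting whether the pipeline completed.
struct CancellableRequest {
    std::atomic<bool> m_cancelled{false};
    int m_completed_stages = 0;

    void cancel() { m_cancelled = true; }

    // Returns true if the whole pipeline ran, false if it was interrupted.
    bool infer(const std::vector<void (*)(CancellableRequest&)>& stages) {
        for (auto stage : stages) {
            if (m_cancelled)
                return false;  // interrupted between stages
            stage(*this);
            ++m_completed_stages;
        }
        return true;
    }
};
```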
The next step in the plugin library implementation is the :doc:`Asynchronous Inference Request <openvino_docs_ov_plugin_dg_async_infer_request>` class.
@endsphinxdirective

openvino_docs_ie_plugin_detailed_guides
openvino_docs_ie_plugin_api_references
@endsphinxdirective
The plugin architecture of OpenVINO allows developing and plugging in independent inference
solutions dedicated to different devices. Physically, a plugin is represented as a dynamic library
exporting the single ``CreatePluginEngine`` function that allows creating a new plugin instance.
OpenVINO Plugin Library
#######################
OpenVINO plugin dynamic library consists of several main components:
1. :doc:`Plugin class <openvino_docs_ov_plugin_dg_plugin>`:
* Provides information about devices of a specific type.
* Can create an :doc:`compiled model <openvino_docs_ov_plugin_dg_compiled_model>` instance which represents a Neural Network backend specific graph structure for a particular device in opposite to the ov::Model which is backend-independent.
* Can import an already compiled graph structure from an input stream to a :doc:`compiled model <openvino_docs_ov_plugin_dg_compiled_model>` object.
2. :doc:`Compiled Model class <openvino_docs_ov_plugin_dg_compiled_model>`:
* Is an execution configuration compiled for a particular device and takes into account its capabilities.
* Holds a reference to a particular device and a task executor for this device.
* Can create several instances of :doc:`Inference Request <openvino_docs_ov_plugin_dg_infer_request>`.
* Can export an internal backend specific graph structure to an output stream.
3. :doc:`Inference Request class <openvino_docs_ov_plugin_dg_infer_request>`:
* Runs an inference pipeline serially.
* Can extract performance counters for an inference pipeline execution profiling.
4. :doc:`Asynchronous Inference Request class <openvino_docs_ov_plugin_dg_async_infer_request>`:
* Wraps the :doc:`Inference Request <openvino_docs_ov_plugin_dg_infer_request>` class and runs pipeline stages in parallel on several task executors based on a device-specific pipeline structure.
5. :doc:`Plugin specific properties <openvino_docs_ov_plugin_dg_properties>`:
* Provides the plugin specific properties.
6. :doc:`Remote Context <openvino_docs_ov_plugin_dg_remote_context>`:
* Provides the device specific remote context. Context allows to create remote tensors.
7. :doc:`Remote Tensor <openvino_docs_ov_plugin_dg_remote_tensor>`
* Provides the device specific remote tensor API and implementation.
.. note::
This documentation is written based on the ``Template`` plugin, which demonstrates plugin development details. Find the complete code of the ``Template``, which is fully compilable and up-to-date, at ``<openvino source dir>/src/plugins/template``.
Detailed Guides
###############
* :doc:`Build <openvino_docs_ov_plugin_dg_plugin_build>` a plugin library using CMake
* Plugin and its components :doc:`testing <openvino_docs_ov_plugin_dg_plugin_testing>`
* :doc:`Quantized networks <openvino_docs_ov_plugin_dg_quantized_models>`
* :doc:`Low precision transformations <openvino_docs_OV_UG_lpt>` guide
* :doc:`Writing OpenVINO™ transformations <openvino_docs_transformations>` guide
API References
##############
* `OpenVINO Plugin API <https://docs.openvino.ai/nightly/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2022.3/groupie_transformation_api.html>`__
@endsphinxdirective

# Plugin {#openvino_docs_ov_plugin_dg_plugin}
@sphinxdirective
OpenVINO Plugin usually represents a wrapper around a backend. Backends can be:
* OpenCL-like backend (e.g. clDNN library) for GPU devices.
* oneDNN backend for Intel CPU devices.
* NVIDIA cuDNN for NVIDIA GPUs.
The responsibilities of an OpenVINO Plugin:
* Initializes a backend and throws an exception in the ``Engine`` constructor if the backend cannot be initialized.
* Provides information about devices enabled by a particular backend, e.g. how many devices, their properties and so on.
* Loads or imports :doc:`compiled model <openvino_docs_ov_plugin_dg_compiled_model>` objects.
In addition to the OpenVINO Public API, OpenVINO provides the Plugin API, which is a set of functions and helper classes that simplify new plugin development:
* header files in the ``src/inference/dev_api/openvino`` directory
* implementations in the ``src/inference/src/dev/`` directory
* symbols in the OpenVINO shared library
To build an OpenVINO plugin with the Plugin API, see the :doc:`OpenVINO Plugin Building <openvino_docs_ov_plugin_dg_plugin_build>` guide.
Plugin Class
############
OpenVINO Plugin API provides the helper ov::IPlugin class recommended to use as a base class for a plugin.
Based on that, declaration of a plugin class can look as follows:
.. doxygensnippet:: src/plugins/template/src/plugin.hpp
:language: cpp
:fragment: [plugin:header]
Class Fields
++++++++++++
The provided plugin class also has several fields:
* ``m_backend`` - a backend engine that is used to perform actual computations for model inference. For ``Template`` plugin ``ov::runtime::Backend`` is used which performs computations using OpenVINO™ reference implementations.
* ``m_waitExecutor`` - a task executor that waits for a response from a device about device tasks completion.
* ``m_cfg`` of type ``Configuration``:
.. doxygensnippet:: src/plugins/template/src/config.hpp
:language: cpp
:fragment: [configuration:header]
As an example, a plugin configuration has the following value parameters:
* ``device_id`` - particular device ID to work with. Applicable if a plugin supports more than one ``Template`` device. In this case, some plugin methods, like ``set_property``, ``query_model``, and ``compile_model``, must support the ov::device::id property.
* ``perf_counts`` - boolean value to identify whether to collect performance counters during :doc:`Inference Request <openvino_docs_ov_plugin_dg_infer_request>` execution.
* ``streams_executor_config`` - configuration of ``ov::threading::IStreamsExecutor`` to handle settings of multi-threaded context.
* ``performance_mode`` - configuration of ``ov::hint::PerformanceMode`` to set the performance mode.
* ``disable_transformations`` - allows to disable transformations which are applied in the process of model compilation.
* ``exclusive_async_requests`` - allows to use exclusive task executor for asynchronous infer requests.
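A stripped-down sketch of such a ``Configuration`` (only two hypothetical keys shown) that updates recognized fields and rejects unknown keys, as the plugin property contract requires:

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical sketch mirroring the parameter list above: each recognized
// property key updates a field; an unknown key causes an exception.
struct Configuration {
    std::string device_id = "0";
    bool perf_counts = false;

    void update(const std::map<std::string, std::string>& props) {
        for (const auto& [key, value] : props) {
            if (key == "DEVICE_ID")
                device_id = value;
            else if (key == "PERF_COUNT")
                perf_counts = (value == "YES");
            else
                throw std::runtime_error("Unsupported property: " + key);
        }
    }
};
```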
Plugin Constructor
++++++++++++++++++
A plugin constructor must contain code that checks the ability to work with a device of the ``Template``
type. For example, if some drivers are required, the code must check
driver availability. If a driver is not available (for example, the OpenCL runtime is not installed in
case of a GPU device, or an improper driver version is on the host machine), an exception
must be thrown from the plugin constructor.
A plugin must define a device name enabled via the ``set_device_name()`` method of a base class:
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:ctor]
Plugin Destructor
+++++++++++++++++
A plugin destructor must stop all plugin activities and clean up all allocated resources.
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:dtor]
compile_model()
+++++++++++++++
The plugin should implement two ``compile_model()`` methods: the first one compiles a model without a remote context, the second one with a remote context, if the plugin supports it.
The most important function of the ``Plugin`` class is creating an instance of ``CompiledModel``,
which holds a backend-dependent compiled model in an internal representation:
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:compile_model]
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:compile_model_with_remote]
Before creating a ``CompiledModel`` instance via a constructor, a plugin may check whether the provided
ov::Model object is supported by a device, if needed.
Actual model compilation is done in the ``CompiledModel`` constructor. Refer to the :doc:`CompiledModel Implementation Guide <openvino_docs_ov_plugin_dg_compiled_model>` for details.
.. note::
Actual configuration map used in ``CompiledModel`` is constructed as a base plugin configuration set via ``Plugin::set_property``, where some values are overwritten with ``config`` passed to ``Plugin::compile_model``. Therefore, the config of ``Plugin::compile_model`` has a higher priority.
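The priority rule can be sketched as a plain map merge (hypothetical helper, assuming string-valued properties):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch: the map passed to compile_model() overwrites matching
// keys of the base configuration set earlier via set_property().
std::map<std::string, std::string> merge_config(
        std::map<std::string, std::string> base,
        const std::map<std::string, std::string>& compile_time) {
    for (const auto& [key, value] : compile_time)
        base[key] = value;  // compile_model() config has higher priority
    return base;
}
```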
transform_model()
+++++++++++++++++
The function accepts a const shared pointer to an ``ov::Model`` object and applies common and device-specific transformations on a copied model to make it more friendly to hardware operations. For details on how to write custom device-specific transformations, refer to the :doc:`Writing OpenVINO™ transformations <openvino_docs_transformations>` guide. See detailed topics about model representation:
* :doc:`Intermediate Representation and Operation Sets <openvino_docs_MO_DG_IR_and_opsets>`
* :doc:`Quantized models <openvino_docs_ov_plugin_dg_quantized_models>`.
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:transform_model]
.. note::
After all these transformations, an ``ov::Model`` object contains operations which can be perfectly mapped to backend kernels. For example, if the backend has a kernel computing ``A + B`` operations at once, the ``transform_model`` function should contain a pass which fuses operations ``A`` and ``B`` into a single custom operation ``A + B`` which fits the backend kernel set.
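As a toy illustration of such a fusion pass (hypothetical graph representation, far simpler than real ``ov::Model`` pattern matching):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch: replace each adjacent "Multiply" + "Add" pair in a
// linear op sequence with a single fused operation the backend has one
// kernel for.
std::vector<std::string> fuse_mul_add(const std::vector<std::string>& ops) {
    std::vector<std::string> fused;
    for (std::size_t i = 0; i < ops.size(); ++i) {
        if (i + 1 < ops.size() && ops[i] == "Multiply" && ops[i + 1] == "Add") {
            fused.push_back("FusedMulAdd");  // one backend kernel instead of two
            ++i;                             // skip the consumed "Add"
        } else {
            fused.push_back(ops[i]);
        }
    }
    return fused;
}
```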
query_model()
+++++++++++++
Use the method with the ``HETERO`` mode, which allows distributing model execution between different
devices based on the ``ov::Node::get_rt_info()`` map, which can contain the ``affinity`` key.
The ``query_model`` method analyzes operations of the provided ``model`` and returns a list of supported
operations via the ov::SupportedOpsMap structure. ``query_model`` first applies the ``transform_model`` passes to the input ``ov::Model`` argument. After this, in the ideal case, the transformed model contains only operations that are 1:1 mapped to kernels in the computational backend. In this case, it is easy to analyze which operations are supported (``m_backend`` has a kernel for the operation, or an extension for the operation is provided) and which are not (the kernel is missing in ``m_backend``):
1. Store original names of all operations in input ``ov::Model``.
2. Apply ``transform_model`` passes. Note that the names of operations in a transformed model can be different, and the mapping needs to be restored in the steps below.
3. Construct ``supported`` map which contains names of original operations. Note that since the inference is performed using OpenVINO™ reference backend, the decision whether the operation is supported or not depends on whether the latest OpenVINO opset contains such operation.
4. ``ov::SupportedOpsMap`` contains only operations which are fully supported by ``m_backend``.
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:query_model]
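The four steps above can be sketched with a toy graph representation (hypothetical types and a made-up lowering pass, not the actual ``Template`` plugin code):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <set>
#include <string>
#include <vector>

struct Op {
    std::string name;
    std::string type;
};

// Hypothetical sketch: record original op names, apply a toy
// "transform_model" pass, then mark each original op as supported only if
// the backend has a kernel for its (possibly transformed) type.
std::map<std::string, bool> query_ops(std::vector<Op> model,
                                      const std::set<std::string>& backend_kernels) {
    // 1. Store original names before transformations may change the graph.
    std::vector<std::string> original_names;
    for (const auto& op : model)
        original_names.push_back(op.name);

    // 2. Apply a toy transformation pass: lower Gelu to an Erf-based form.
    for (auto& op : model)
        if (op.type == "Gelu")
            op.type = "Erf";

    // 3./4. Build the supported map keyed by the original names.
    std::map<std::string, bool> supported;
    for (std::size_t i = 0; i < model.size(); ++i)
        supported[original_names[i]] = backend_kernels.count(model[i].type) > 0;
    return supported;
}
```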
set_property()
++++++++++++++
Sets new values for plugin property keys:
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:set_property]
In the snippet above, the ``Configuration`` class overrides previous configuration values with the new
ones. All these values are used during backend specific model compilation and execution of inference requests.
.. note::
The function must throw an exception if it receives an unsupported configuration key.
get_property()
++++++++++++++
Returns a current value for a specified property key:
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:get_property]
The function is implemented with the ``Configuration::Get`` method, which wraps an actual configuration
key value to the ov::Any and returns it.
.. note::
The function must throw an exception if it receives an unsupported configuration key.
import_model()
++++++++++++++
The compiled model import mechanism allows a previously exported backend-specific model to be imported and wrapped
using an :doc:`CompiledModel <openvino_docs_ov_plugin_dg_compiled_model>` object. This functionality is useful if
backend specific model compilation takes significant time and/or cannot be done on a target host
device due to other reasons.
During export of backend specific model using ``CompiledModel::export_model``, a plugin may export any
type of information it needs to import a compiled model properly and check its correctness.
For example, the export information may include:
* Compilation options (state of ``Plugin::m_cfg`` structure).
* Information about a plugin and a device type to check this information later during the import and throw an exception if the ``model`` stream contains wrong data. For example, if devices have different capabilities and a model compiled for a particular device cannot be used for another, such type of information must be stored and checked during the import.
* Compiled backend specific model itself.
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:import_model]
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:import_model_with_remote]
create_context()
++++++++++++++++
The plugin should implement the ``Plugin::create_context()`` method, which returns an ``ov::RemoteContext`` if the plugin supports remote contexts; otherwise, the method can throw an exception stating that it is not implemented.
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:create_context]
get_default_context()
+++++++++++++++++++++
``Plugin::get_default_context()`` is also needed if the plugin supports remote contexts; if it does not, this method can throw an exception stating that the functionality is not implemented.
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:get_default_context]
Create Instance of Plugin Class
###############################
An OpenVINO plugin library must export only one function, which creates a plugin instance, using the ``OV_DEFINE_PLUGIN_CREATE_FUNCTION`` macro:
.. doxygensnippet:: src/plugins/template/src/plugin.cpp
:language: cpp
:fragment: [plugin:create_plugin_engine]
The next step in a plugin library implementation is the :doc:`CompiledModel <openvino_docs_ov_plugin_dg_compiled_model>` class.
@endsphinxdirective

View File

@@ -1,45 +1,67 @@
# Plugin Testing {#openvino_docs_ov_plugin_dg_plugin_testing}
@sphinxdirective
The OpenVINO test infrastructure provides a predefined set of functional tests and utilities, which are used to verify a plugin using the OpenVINO public API.
All the tests are written in the `Google Test C++ framework <https://github.com/google/googletest>`__.
OpenVINO Plugin tests are included in the ``openvino::funcSharedTests`` CMake target which is built within the OpenVINO repository
(see :doc:`Build Plugin Using CMake <openvino_docs_ov_plugin_dg_plugin_build>` guide). This library contains test definitions (the test bodies) that can be parameterized and instantiated in plugins, depending on whether a plugin supports a particular feature, specific sets of parameters for tests on the supported operation set, and so on.
Test definitions are split into tests class declaration (see ``src/tests/functional/plugin/shared/include``) and tests class implementation (see ``src/tests/functional/plugin/shared/src``) and include the following scopes of plugin conformance tests:
1. **Behavior tests** (``behavior`` sub-folder), which are a separate test group to check that a plugin satisfies basic OpenVINO concepts: plugin creation, multiple compiled models support, multiple synchronous and asynchronous inference requests support, and so on. See the next section with details how to instantiate the tests definition class with plugin-specific parameters.
2. **Single layer tests** (``single_layer_tests`` sub-folder). This group of tests checks that a particular single layer can be inferred on a device. An example of test instantiation based on a test definition from the ``openvino::funcSharedTests`` library:
* From the declaration of the convolution test class, we can see that it is a parameterized GoogleTest-based class with the ``convLayerTestParamsSet`` tuple of parameters:
.. doxygensnippet:: src/tests/functional/shared_test_classes/include/shared_test_classes/single_layer/convolution.hpp
:language: cpp
:fragment: [test_convolution:definition]
* Based on that, define a set of parameters for ``Template`` plugin functional test instantiation:
.. doxygensnippet:: src/plugins/template/tests/functional/shared_tests_instances/single_layer_tests/convolution.cpp
:language: cpp
:fragment: [test_convolution:declare_parameters]
* Instantiate the test itself using standard GoogleTest macro ``INSTANTIATE_TEST_SUITE_P``:
.. doxygensnippet:: src/plugins/template/tests/functional/shared_tests_instances/single_layer_tests/convolution.cpp
:language: cpp
:fragment: [test_convolution:instantiate]
3. **Sub-graph tests** (``subgraph_tests`` sub-folder). This group of tests is designed to test small patterns or combinations of layers. For example, when a particular topology is being enabled in a plugin, e.g. TF ResNet-50, there is no need to add the whole topology to the tests. Instead, a particular repetitive subgraph or pattern can be extracted from ``ResNet-50`` and added to the tests. The instantiation of the sub-graph tests is done in the same way as for single layer tests.
.. note::
Such sub-graphs or patterns for sub-graph tests should be added to ``openvino::ngraphFunctions`` library first (this library is a pre-defined set of small ``ov::Model``) and re-used in sub-graph tests after.
4. **HETERO tests** (``subgraph_tests`` sub-folder) contains tests for ``HETERO`` scenario (manual or automatic affinities settings, tests for ``query_model``).
5. **Other tests**, which cover other scenarios and include the following types of tests:
* Tests for execution graph
* Other
To use these tests for your own plugin development, link the ``openvino::funcSharedTests`` library to your test binary and instantiate required test cases with desired parameters values.
.. note::
A plugin may contain its own tests for use cases that are specific to hardware or need to be extensively tested.
To build test binaries together with other build artifacts, use the ``make all`` command. For details, see :doc:`Build Plugin Using CMake <openvino_docs_ov_plugin_dg_plugin_build>`.
How to Extend OpenVINO Plugin Tests
+++++++++++++++++++++++++++++++++++
OpenVINO Plugin tests are open for contribution.
Add common test case definitions applicable for all plugins to the ``openvino::funcSharedTests`` target within the OpenVINO repository. Then, any other plugin supporting corresponding functionality can instantiate the new test.
.. note::
When implementing a new subgraph test, add new single-layer tests for each operation of the subgraph if such test does not exist.
@endsphinxdirective

View File

@@ -1,10 +1,17 @@
# Plugin Properties {#openvino_docs_ov_plugin_dg_properties}
@sphinxdirective
A plugin can provide its own device-specific properties.
Property Class
##############
The OpenVINO API provides the ov::Property interface, which allows you to define a property and its access rights. Based on that, a declaration of plugin-specific properties can look as follows:
.. doxygensnippet:: src/plugins/template/include/template/properties.hpp
:language: cpp
:fragment: [properties:public_header]
@endsphinxdirective

View File

@@ -1,15 +1,20 @@
# Quantized models compute and restrictions {#openvino_docs_ov_plugin_dg_quantized_models}
@sphinxdirective
One of the features of OpenVINO is the support of quantized models with different precisions: INT8, INT4, etc.
However, it is up to the plugin to define what exact precisions are supported by the particular HW.
All quantized models which can be expressed in IR have a unified representation by means of *FakeQuantize* operation.
For more details about low-precision model representation, please refer to this :doc:`document <openvino_docs_ie_plugin_dg_lp_representation>`.
Interpreting FakeQuantize at runtime
####################################
During the model load each plugin can interpret quantization rules expressed in *FakeQuantize* operations:
* Independently based on the definition of *FakeQuantize* operation.
* Using a special library of low-precision transformations (LPT) which applies common rules for generic operations, such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into models with low-precision operations.
Here we provide only a high-level overview of the interpretation rules of FakeQuantize.
At runtime each FakeQuantize can be split into two independent operations: **Quantize** and **Dequantize**.
@@ -17,33 +22,47 @@ The former one is aimed to transform the input data into the target precision wh
In practice *Dequantize* operations can be propagated forward through the linear operations, such as *Convolution* or *Fully-Connected*,
and in some cases fused with the following *Quantize* operation for the next layer into the so-called *Requantize* operation (see Fig. 1).
.. image:: _static/images/qdq_propagation.png
Figure 1. Quantization operations propagation at runtime. Q, DQ, RQ stand for Quantize, Dequantize, and Requantize correspondingly.
From the calculation standpoint, the FakeQuantize formula also is split into two parts accordingly:
``output = round((x - input_low) / (input_high - input_low) * (levels-1)) / (levels-1) * (output_high - output_low) + output_low``
The first part of this formula represents *Quantize* operation:
``q = round((x - input_low) / (input_high - input_low) * (levels-1))``
The second is responsible for the dequantization:
``r = q / (levels-1) * (output_high - output_low) + output_low``
From the scale/zero-point notation standpoint the latter formula can be written as follows:
``r = (output_high - output_low) / (levels-1) * (q + output_low / (output_high - output_low) * (levels-1))``
Thus we can define:
* **Scale** as ``(output_high - output_low) / (levels-1)``
* **Zero-point** as ``-output_low / (output_high - output_low) * (levels-1)``
.. note::
During the quantization process, the values ``input_low``, ``input_high``, ``output_low``, ``output_high`` are selected so that a floating-point zero is mapped exactly to an integer value (the zero-point) and vice versa.
Quantization specifics and restrictions
#######################################
In general, OpenVINO can represent and execute quantized models from different sources. However, the Post-training Optimization Tool (POT)
is considered the default way to get optimized models. Since the POT supports HW-aware quantization, specific rules for particular HW can be implemented in it. However, it is reasonable to maintain compatibility with general-purpose HW, such as CPU and GPU, and support their quantization schemes.
Below we define these rules as follows:
* Support of mixed-precision models where some layers can be kept in the floating-point precision.
* Per-channel quantization of weights of Convolutional and Fully-Connected layers.
* Per-channel quantization of activations for channel-wise and element-wise operations, e.g. Depthwise Convolution, Eltwise Add/Mul, ScaleShift.
* Symmetric and asymmetric quantization of weights and activations with the support of per-channel scales and zero-points.
* Non-unified quantization parameters for Eltwise and Concat operations.
* Non-quantized network output, i.e. there are no quantization parameters for it.
@endsphinxdirective

View File

@@ -1,49 +1,71 @@
# Remote Context {#openvino_docs_ov_plugin_dg_remote_context}
@sphinxdirective
ov::RemoteContext class functionality:
* Represents device-specific inference context.
* Allows creating remote device-specific tensors.
.. note::
If a plugin provides a public API for its own remote context, the API should be header-only and must not depend on the plugin library.
RemoteContext Class
###################
The OpenVINO Plugin API provides the interface ov::IRemoteContext, which should be used as a base class for a plugin-specific remote context. Based on that, a declaration of a remote context class can look as follows:
.. doxygensnippet:: src/plugins/template/src/remote_context.hpp
:language: cpp
:fragment: [remote_context:header]
Class Fields
++++++++++++
The example class has several fields:
* ``m_name`` - Device name.
* ``m_property`` - Device-specific context properties. It can be used to cast RemoteContext to device specific type.
RemoteContext Constructor
+++++++++++++++++++++++++
This constructor should initialize the remote context device name and properties.
.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
:language: cpp
:fragment: [remote_context:ctor]
get_device_name()
++++++++++++++++++
The function returns the device name from the remote context.
.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
:language: cpp
:fragment: [remote_context:get_device_name]
get_property()
+++++++++++++++
The implementation returns the remote context properties.
.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
:language: cpp
:fragment: [remote_context:get_property]
create_tensor()
+++++++++++++++
The method creates a device-specific remote tensor.
.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
:language: cpp
:fragment: [remote_context:create_tensor]
The next step to support device-specific tensors is the creation of a device-specific :doc:`Remote Tensor <openvino_docs_ov_plugin_dg_remote_tensor>` class.
@endsphinxdirective

View File

@@ -1,30 +1,39 @@
# Remote Tensor {#openvino_docs_ov_plugin_dg_remote_tensor}
@sphinxdirective
ov::RemoteTensor class functionality:
* Provides an interface to work with device-specific memory.
.. note::
If a plugin provides a public API for its own remote tensor, the API should be header-only and must not depend on the plugin library.
Device Specific Remote Tensor Public API
########################################
The public interface to work with device-specific remote tensors should be header-only and must not depend on the plugin library.
.. doxygensnippet:: src/plugins/template/include/template/remote_tensor.hpp
:language: cpp
:fragment: [remote_tensor:public_header]
The implementation below has several methods:
type_check()
+++++++++++++++++++++++++
A static method used to check whether an abstract remote tensor can be cast to this particular remote tensor type.
get_data()
+++++++++++++++++++++++++
A set of helper methods (specific to this example; other implementations can have a different API) that provide access to the remote data.
Device-Specific Internal tensor implementation
##############################################
The plugin should have an internal implementation of the remote tensor that can communicate with the public API.
The example contains an implementation of a remote tensor that wraps memory from an stl vector.
@@ -33,55 +42,70 @@ OpenVINO Plugin API provides the interface ov::IRemoteTensor which should be use
The example implementation has two remote tensor classes:
* Internal type dependent implementation which has as an template argument the vector type and create the type specific tensor.
* The type independent implementation which works with type dependent tensor inside.
Based on that, an implementation of a type independent remote tensor class can look as follows:
.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
:language: cpp
:fragment: [vector_impl:implementation]
The implementation provides a helper to get the wrapped stl tensor, overrides all important methods of the ov::IRemoteTensor class, and delegates to the type-dependent implementation.
The type dependent remote tensor has the next implementation:
.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
:language: cpp
:fragment: [vector_impl_t:implementation]
Class Fields
++++++++++++
The class has several fields:
* ``m_element_type`` - Tensor element type.
* ``m_shape`` - Tensor shape.
* ``m_strides`` - Tensor strides.
* ``m_data`` - Wrapped vector.
* ``m_dev_name`` - Device name.
* ``m_properties`` - Remote tensor specific properties which can be used to detect the type of the remote tensor.
VectorTensorImpl()
++++++++++++++++++
The constructor of the remote tensor implementation. It creates a vector with data, initializes the device name and properties, and updates the shape, element type, and strides.
get_element_type()
++++++++++++++++++
The method returns tensor element type.
get_shape()
+++++++++++
The method returns tensor shape.
get_strides()
+++++++++++++
The method returns tensor strides.
set_shape()
+++++++++++
The method allows setting a new shape for the remote tensor.
get_properties()
++++++++++++++++
The method returns tensor specific properties.
get_device_name()
+++++++++++++++++
The method returns tensor specific device name.
@endsphinxdirective

View File

@@ -9,10 +9,11 @@
openvino_docs_ov_plugin_dg_quantized_models
openvino_docs_OV_UG_lpt
@endsphinxdirective
The guides below provide extra information about specific OpenVINO features that are useful to understand during OpenVINO plugin development:
* :doc:`Quantized networks <openvino_docs_ov_plugin_dg_quantized_models>`
* :doc:`Low precision transformations guide <openvino_docs_OV_UG_lpt>`
* :doc:`Writing OpenVINO™ transformations guide <openvino_docs_transformations>`
@endsphinxdirective

View File

@@ -69,6 +69,7 @@
<tab type="user" title="VariadicSplitTransformation" url="@ref openvino_docs_OV_UG_lpt_VariadicSplitTransformation"/>
</tab>
<tab type="user" title="Step 4. Cleanup transformations" url="@ref openvino_docs_OV_UG_lpt_step4_cleanup">
<tab type="user" title="EliminateFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_EliminateFakeQuantizeTransformation"/>
<tab type="user" title="FoldConvertTransformation" url="@ref openvino_docs_OV_UG_lpt_FoldConvertTransformation"/>
<tab type="user" title="FoldFakeQuantizeTransformation" url="@ref openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation"/>
<tab type="user" title="FuseConvertTransformation" url="@ref openvino_docs_OV_UG_lpt_FuseConvertTransformation"/>

View File

@@ -1,11 +1,21 @@
# AvgPoolPrecisionPreserved Attribute {#openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved}
@sphinxdirective
:ref:`ngraph::AvgPoolPrecisionPreservedAttribute <doxid-classngraph_1_1_avg_pool_precision_preserved_attribute>` class represents the ``AvgPoolPrecisionPreserved`` attribute.
A utility attribute, used only during the definition of the precision-preserved property of the ``AvgPool`` operation.
.. list-table::
:header-rows: 1
* - Property name
- Values
* - Required
- Yes
* - Defined
- Operation
* - Properties
- value (boolean)
@endsphinxdirective

View File

@@ -1,11 +1,21 @@
# IntervalsAlignment Attribute {#openvino_docs_OV_UG_lpt_IntervalsAlignment}
@sphinxdirective
:ref:`ngraph::IntervalsAlignmentAttribute <doxid-classngraph_1_1_intervals_alignment_attribute>` class represents the ``IntervalsAlignment`` attribute.
The attribute defines a subgraph with the same quantization intervals alignment. ``FakeQuantize`` operations are included. The attribute is used by quantization operations.
.. list-table::
:header-rows: 1
* - Property name
- Values
* - Required
- Yes
* - Defined
- Operation
* - Properties
- combined interval, minimal interval, minimal levels, preferable precisions
@endsphinxdirective

View File

@@ -1,11 +1,21 @@
# PrecisionPreserved Attribute {#openvino_docs_OV_UG_lpt_PrecisionPreserved}
@sphinxdirective
:ref:`ngraph::PrecisionPreservedAttribute <doxid-classngraph_1_1_precision_preserved_attribute>` class represents the ``PrecisionPreserved`` attribute.
The attribute defines a precision preserved operation. If the attribute is absent, then an operation is not precision preserved.
.. list-table::
    :header-rows: 1

    * - Property name
      - Values
    * - Required
      - Yes
    * - Defined
      - Operation
    * - Properties
      - value (boolean)
@endsphinxdirective


@@ -1,11 +1,21 @@
# Precisions Attribute {#openvino_docs_OV_UG_lpt_Precisions}
@sphinxdirective
:ref:`ngraph::PrecisionsAttribute <doxid-classngraph_1_1_precisions_attribute>` class represents the ``Precisions`` attribute.
The attribute defines precision which is required for input/output port or an operation.
.. list-table::
    :header-rows: 1

    * - Property name
      - Values
    * - Required
      - Yes
    * - Defined
      - Operation, input port, output port
    * - Properties
      - precisions
@endsphinxdirective


@@ -1,11 +1,21 @@
# QuantizationAlignment Attribute {#openvino_docs_OV_UG_lpt_QuantizationAlignment}
@sphinxdirective
:ref:`ngraph::QuantizationAlignmentAttribute <doxid-classngraph_1_1_quantization_alignment_attribute>` class represents the ``QuantizationAlignment`` attribute.
The attribute defines a subgraph with the same quantization alignment. ``FakeQuantize`` operations are not included. The attribute is used by quantization operations.
.. list-table::
    :header-rows: 1

    * - Property name
      - Values
    * - Required
      - Yes
    * - Defined
      - Operation
    * - Properties
      - value (boolean)
@endsphinxdirective


@@ -1,11 +1,21 @@
# QuantizationGranularity Attribute {#openvino_docs_OV_UG_lpt_QuantizationGranularity}
@sphinxdirective
ngraph::QuantizationAttribute class represents the ``QuantizationGranularity`` attribute.
The attribute defines quantization granularity of operation inputs.
.. list-table::
    :header-rows: 1

    * - Property name
      - Values
    * - Required
      - No
    * - Defined
      - Input ports
    * - Properties
      - Quantization granularity
@endsphinxdirective
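The difference between the possible granularity values can be illustrated with a toy example. The sketch below is plain Python with hypothetical helper names, not the OpenVINO API: per-tensor granularity applies one scale to a whole tensor, while per-channel granularity keeps one scale per output channel.

```python
# Illustrative sketch of quantization granularity (hypothetical helpers,
# not the OpenVINO API).

def dequantize_per_tensor(q, scale):
    """Per-tensor granularity: one scale for the whole tensor."""
    return [[v * scale for v in row] for row in q]

def dequantize_per_channel(q, scales):
    """Per-channel granularity: one scale per output channel (row)."""
    return [[v * s for v in row] for row, s in zip(q, scales)]

q = [[10, -20], [3, 4]]                       # int8-like values, 2 channels
per_tensor = dequantize_per_tensor(q, 0.1)    # same 0.1 scale everywhere
per_channel = dequantize_per_channel(q, [0.1, 0.5])  # channel 1 uses 0.5
```

Per-channel scales can track the very different value ranges that weight channels typically have, at the cost of storing one scale per channel.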


@@ -15,305 +15,455 @@
Step 3. Main transformations <openvino_docs_OV_UG_lpt_step3_main>
Step 4. Cleanup transformations <openvino_docs_OV_UG_lpt_step4_cleanup>
@endsphinxdirective
Introduction
############
Low precision transformations (known as LPT) are a set of nGraph transformations, which are combined in one library. The library is a mandatory part of OpenVINO to infer quantized models in low precision with the maximum performance on Intel CPU, GPU and ARM platforms. The library includes more than 45 transformations and supports more than 30 operations. Some transformations are mandatory, some are optional and developed for specific devices.
The goal of Low Precision Transformations (LPT) is to transform a quantized model from its original precision (FP16 or FP32) to a low precision (INT8: ``signed int8`` or ``unsigned int8``), so that it is prepared for low precision inference in OpenVINO™ plugin. It is achieved by two main principles:
1. ``FakeQuantize`` operation decomposition to two parts:
* part 1: quantize operation - new ``FakeQuantize`` operation with output quantization intervals in low precision range (signed int8: [-128, 127] or [-127, 127], unsigned int8: [0, 255] or [0, 256]) and with low precision output (``signed int8`` or ``unsigned int8``).
* part 2: dequantization operations with low precision input and original precision output.
2. Propagation of the dequantization operation through original model's operations. It is done to avoid dequantization operations before original model operations, thus the quantize operations with low precision output remain before the original model operations.
As a result, operation input tensor precisions will be changed from the original to low precision, and operations can be inferred by the OpenVINO™ plugin in low precision.
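The two principles above can be checked with plain numbers. The sketch below (illustrative Python, not the OpenVINO API) applies the standard ``FakeQuantize`` formula directly, and then reproduces the same value as a quantize step with ``unsigned int8`` output followed by a scale-and-shift dequantization:

```python
# Illustrative sketch of FakeQuantize decomposition (not OpenVINO API).

def fake_quantize(x, in_low, in_high, out_low, out_high, levels=256):
    """Reference FakeQuantize: clamp, quantize to `levels` steps, rescale."""
    x = min(max(x, in_low), in_high)
    q = round((x - in_low) / (in_high - in_low) * (levels - 1))
    return q / (levels - 1) * (out_high - out_low) + out_low

def quantize_u8(x, in_low, in_high, levels=256):
    """Part 1: new FakeQuantize with low precision output in [0, 255]."""
    x = min(max(x, in_low), in_high)
    return round((x - in_low) / (in_high - in_low) * (levels - 1))

def dequantize(q, out_low, out_high, levels=256):
    """Part 2: dequantization with original precision output (q * scale + shift)."""
    scale = (out_high - out_low) / (levels - 1)
    return q * scale + out_low

x = 1.234
direct = fake_quantize(x, 0.0, 2.55, 0.0, 2.55)
decomposed = dequantize(quantize_u8(x, 0.0, 2.55), 0.0, 2.55)
# Both paths produce the same value; only the decomposed path exposes an
# integer tensor that the plugin can consume in low precision.
```

The intermediate integer tensor is exactly what remains in front of the original model operations after the decomposition, while the dequantization part is propagated further.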
For a more detailed description on how to quantize a model, see the `Low precision tools <#low-precision-tools>`__ section below. For more information about model quantization, refer to **Brief History of Lower Precision in Deep Learning** section in `this whitepaper <https://software.intel.com/en-us/articles/lower-numerical-precision-deep-learning-inference-and-training>`__.
Input model requirements
########################
LPT transformations propagate dequantization operations through the following operations:
* :doc:`Add-1 <openvino_docs_ops_arithmetic_Add_1>`
* :doc:`AvgPool-1 <openvino_docs_ops_pooling_AvgPool_1>`
* :doc:`Clamp-1 <openvino_docs_ops_activation_Clamp_1>`
* :doc:`Concat-1 <openvino_docs_ops_movement_Concat_1>`
* :doc:`Convolution-1 <openvino_docs_ops_convolution_Convolution_1>`
* :doc:`ConvolutionBackpropData-1 <openvino_docs_ops_convolution_ConvolutionBackpropData_1>`
* :doc:`DepthToSpace-1 <openvino_docs_ops_movement_DepthToSpace_1>`
* :doc:`FakeQuantize-1 <openvino_docs_ops_quantization_FakeQuantize_1>`
* :doc:`GroupConvolution-1 <openvino_docs_ops_convolution_GroupConvolution_1>`
* :doc:`Interpolate-1 <openvino_docs_ops_image_Interpolate_1>`
* :doc:`Interpolate-4 <openvino_docs_ops_image_Interpolate_4>`
* :doc:`MatMul-1 <openvino_docs_ops_matrix_MatMul_1>`
* :doc:`MaxPool-1 <openvino_docs_ops_pooling_MaxPool_1>`
* :doc:`Multiply-1 <openvino_docs_ops_arithmetic_Multiply_1>`
* :doc:`MVN-1 <openvino_docs_ops_normalization_MVN_1>`
* :doc:`NormalizeL2-1 <openvino_docs_ops_normalization_NormalizeL2_1>`
* :doc:`PRelu-1 <openvino_docs_ops_activation_PReLU_1>`
* :doc:`ReduceMax-1 <openvino_docs_ops_reduction_ReduceMax_1>`
* :doc:`ReduceMean-1 <openvino_docs_ops_reduction_ReduceMean_1>`
* :doc:`ReduceMin-1 <openvino_docs_ops_reduction_ReduceMin_1>`
* :doc:`ReduceSum-1 <openvino_docs_ops_reduction_ReduceSum_1>`
* :doc:`Relu-1 <openvino_docs_ops_activation_ReLU_1>`
* :doc:`Reshape-1 <openvino_docs_ops_shape_Reshape_1>`
* :doc:`Split-1 <openvino_docs_ops_movement_Split_1>`
* :doc:`Squeeze-1 <openvino_docs_ops_shape_Squeeze_1>`
* :doc:`StridedSlice-1 <openvino_docs_ops_movement_StridedSlice_1>`
* :doc:`Transpose-1 <openvino_docs_ops_movement_Transpose_1>`
* :doc:`Gather-7 <openvino_docs_ops_movement_Gather_7>`
* :doc:`Gather-8 <openvino_docs_ops_movement_Gather_8>`
* :doc:`Unsqueeze-1 <openvino_docs_ops_shape_Unsqueeze_1>`
* :doc:`VariadicSplit-1 <openvino_docs_ops_movement_VariadicSplit_1>`
If an operation is not supported by LPT, the dequantization operation will not be propagated, input tensor precisions will not be changed to low precision, and the operation will be executed in the original precision.
For example, if you would like to infer a model with a ``Convolution`` operation in low precision, the model can look as in the picture below:
.. image:: _static/images/model_fq_and_convolution.common.svg
    :alt: Quantized Convolution
There are several supported quantization approaches on activations and on weights. All supported approaches are described in the `Quantization approaches <#quantization-approaches>`__ section below. In the demonstrated model, the `FakeQuantize operation quantization <#fakequantize-operation>`__ approach is used.
Low precision tools
+++++++++++++++++++
For more details on how to get a quantized model, refer to :doc:`Model Optimization <openvino_docs_model_optimization_guide>` document.
Quantization approaches
#######################
LPT transformations support two quantization approaches:
1. ``FakeQuantize`` operation,
2. Quantize and dequantization operations
Let's explore both approaches in detail on the ``Convolution`` operation.
FakeQuantize operation
++++++++++++++++++++++
In this case, the ``FakeQuantize`` operation is used on activations and a quantized constant on weights. Original input model:
.. image:: _static/images/model_fq_and_convolution.common.svg
    :alt: Original model with FakeQuantize
Quantize and dequantization operations
++++++++++++++++++++++++++++++++++++++
In this case, ``FakeQuantize`` and ``Convert`` operations are used as the quantize operation and return a quantized low precision tensor. After the quantize operation on activations, there are ``Convert`` and dequantization operations to compensate for the decomposition. Original input model:
.. image:: _static/images/model_qdq_and_convolution.common.svg
    :alt: Original model with Q/DQ
In both cases, the result is the same. In the LPT result model you can see that:

1. if necessary, ``FakeQuantize`` operations on activations were decomposed into two parts:

   * a new ``FakeQuantize`` operation with updated output intervals in the low precision range and low precision output,
   * dequantization operations on activations;

2. if necessary, an existing ``FakeQuantize`` decomposition can be reworked to get better precision;
3. dequantization operations were propagated through ``Convolution``.
LPT result model:
.. image:: _static/images/model_fq_and_convolution.transformed.svg
    :alt: Result model
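The propagation in item 3 relies on linearity: for a per-tensor scale and shift, a convolution applied to ``scale * (q - shift)`` equals ``scale * conv(q) - scale * shift * conv(ones)``, so the dequantization ``Multiply``/``Subtract`` can be moved behind the convolution, which then consumes the integer tensor directly. A minimal 1-D sketch (illustrative Python, not LPT code):

```python
# Illustrative sketch of moving dequantization through a convolution
# (not LPT code; a minimal 1-D "valid" convolution stands in for Convolution).

def conv1(x, w):
    """Minimal 1-D valid convolution."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k)) for i in range(len(x) - k + 1)]

q = [10, 20, 30, 40]        # low precision activations
w = [1, -1, 2]              # weights
scale, shift = 0.05, 8      # per-tensor dequantization parameters

# Original order: dequantize first, then convolve.
before = conv1([scale * (v - shift) for v in q], w)

# After propagation: convolve the integer tensor, then dequantize the result.
ones = conv1([1] * len(q), w)
after = [scale * y - scale * shift * o for y, o in zip(conv1(q, w), ones)]
```

Both orders give the same floating point result, but only the second one lets the convolution itself run on low precision inputs.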
Low precision transformations pipeline
++++++++++++++++++++++++++++++++++++++
The LPT transformation pipeline has several steps. Each transformation within a step has a unique pattern matcher, but each operation can be assigned to several transformations.
.. image:: _static/images/low_precision_transformation_pipeline.svg
    :alt: Low precision transformations pipeline
Inside each step, LPT transformations handle the input model operation by operation, applying the matching pattern of each transformation from the step to an operation, and executing the transformation if the pattern is matched. The decomposition transformation decomposes ``FakeQuantize`` into quantize and dequantization operations. The dequantization operations from the previous transformation result are used for the current one, and so on, until the end of the model is reached.
As a result, usually all operations are inferred by the plugin in low precision. If the plugin doesn't support an operation inference in low precision, the corresponding LPT transformation can be disabled, and input tensor precisions for that operation will not be changed. In this case, the operation is inferred in the original precision.
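The per-step flow described above can be sketched as a loop over model operations, where each transformation carries its own matcher. The names and structures below are illustrative placeholders, not the actual LPT classes:

```python
# Illustrative sketch of one matcher-driven transformation step (not LPT code).

class Transformation:
    def __init__(self, name, matches, run):
        self.name = name
        self.matches = matches   # pattern matcher, unique per transformation
        self.run = run           # executed only when the pattern matches

def apply_step(model_ops, transformations):
    applied = []
    for op in model_ops:                      # operation by operation
        for t in transformations:
            if t.matches(op):
                t.run(op)
                applied.append((t.name, op["type"]))
    return applied

model = [{"type": "FakeQuantize"}, {"type": "Convolution"}, {"type": "Relu"}]
step = [
    Transformation("FakeQuantizeDecomposition",
                   lambda op: op["type"] == "FakeQuantize",
                   lambda op: op.update(decomposed=True)),
    Transformation("ConvolutionTransformation",
                   lambda op: op["type"] == "Convolution",
                   lambda op: op.update(low_precision=True)),
]
log = apply_step(model, step)
```

Operations with no matching transformation (such as ``Relu`` in this toy step) pass through untouched, which mirrors how unsupported operations keep their original precision.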
Low precision transformations pipeline includes four steps:
* :doc:`Step 1: Prerequisites <openvino_docs_OV_UG_lpt_step1_prerequisites>`
* :doc:`Step 2: Markup transformations <openvino_docs_OV_UG_lpt_step2_markup>`
* :doc:`Step 3: Main transformations <openvino_docs_OV_UG_lpt_step3_main>`
* :doc:`Step 4: Cleanup transformations <openvino_docs_OV_UG_lpt_step4_cleanup>`
Step 1. Prerequisites
---------------------
This step fuses and propagates some operations in the model to prepare for the next step. It is required for OpenVINO plugins. Transformations:
* :doc:`PullReshapeThroughDequantization <openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization>`
* :doc:`PullTransposeThroughDequantization <openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization>`
* :doc:`LinOpSequenceFusion <openvino_docs_OV_UG_lpt_LinOpSequenceFusion>`
The model is changed in this step. There are more details in the developer guide :doc:`Prerequisites transformations <openvino_docs_OV_UG_lpt_step1_prerequisites>`.
Step 2. Markup
--------------
This step creates runtime attributes for operations. These attributes will be used in next step. Transformations:
* :doc:`MarkupBias <openvino_docs_OV_UG_lpt_MarkupBias>`
* :doc:`MarkupCanBeQuantized <openvino_docs_OV_UG_lpt_MarkupCanBeQuantized>`
* :doc:`MarkupPrecisions <openvino_docs_OV_UG_lpt_MarkupPrecisions>`
* :doc:`MarkupPerTensorQuantization <openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization>`
* :doc:`MarkupAvgPoolPrecisionPreserved <openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved>`
* :doc:`PropagatePrecisions <openvino_docs_OV_UG_lpt_PropagatePrecisions>`
* :doc:`AlignQuantizationIntervals <openvino_docs_OV_UG_lpt_AlignQuantizationIntervals>`
* :doc:`AlignQuantizationParameters <openvino_docs_OV_UG_lpt_AlignQuantizationParameters>`
The model is changed in this step: only new attributes are added to some operations. There are more details in the developer guide :doc:`Markup transformations <openvino_docs_OV_UG_lpt_step2_markup>`.
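Because the markup step only attaches runtime attributes, it can be pictured as adding entries to each operation's runtime info without touching the graph topology. The sketch below uses hypothetical dictionary-based structures, not the actual nGraph runtime info API:

```python
# Illustrative sketch of the markup step (hypothetical structures,
# not the nGraph rt_info API).

def markup_precisions(ops, default_precisions=("u8", "i8")):
    """Attach a 'Precisions' runtime attribute; the topology is unchanged."""
    for op in ops:
        if op["type"] in {"Convolution", "MatMul"}:
            op["rt_info"]["Precisions"] = list(default_precisions)
    return ops

ops = [
    {"type": "Convolution", "rt_info": {}},
    {"type": "Relu", "rt_info": {}},
]
markup_precisions(ops)
```

The main transformations of the next step then read these attributes to decide which precisions each operation may consume.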
Step 3. Main transformations, FakeQuantize decomposition and dequantization operations handling
-----------------------------------------------------------------------------------------------
This step has the most transformations. These transformations can be separated into two groups: decomposition transformations and dequantization operations handling. There are more details in the developer guide :doc:`Main transformations <openvino_docs_OV_UG_lpt_step3_main>`.
Transformations:
* :doc:`AddTransformation <openvino_docs_OV_UG_lpt_AddTransformation>`
* :doc:`AvgPoolTransformation <openvino_docs_OV_UG_lpt_AvgPoolTransformation>`
* :doc:`ClampTransformation <openvino_docs_OV_UG_lpt_ClampTransformation>`
* :doc:`ConcatTransformation <openvino_docs_OV_UG_lpt_ConcatTransformation>`
* :doc:`ConvolutionTransformation <openvino_docs_OV_UG_lpt_ConvolutionTransformation>`
* :doc:`ConvolutionBackpropDataTransformation <openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation>`
* :doc:`DepthToSpaceTransformation <openvino_docs_OV_UG_lpt_DepthToSpaceTransformation>`
* :doc:`FakeQuantizeDecompositionTransformation <openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation>`
* :doc:`FakeQuantizeTransformation <openvino_docs_OV_UG_lpt_FakeQuantizeTransformation>`
* :doc:`InterpolateTransformation <openvino_docs_OV_UG_lpt_InterpolateTransformation>`
* :doc:`GroupConvolutionTransformation <openvino_docs_OV_UG_lpt_GroupConvolutionTransformation>`
* :doc:`GatherTransformation <openvino_docs_OV_UG_lpt_GatherTransformation>`
* :doc:`MatMulTransformation <openvino_docs_OV_UG_lpt_MatMulTransformation>`
* :doc:`MaxPoolTransformation <openvino_docs_OV_UG_lpt_MaxPoolTransformation>`
* :doc:`MultiplyTransformation <openvino_docs_OV_UG_lpt_MultiplyTransformation>`
* :doc:`MVNTransformation <openvino_docs_OV_UG_lpt_MVNTransformation>`
* :doc:`NormalizeL2Transformation <openvino_docs_OV_UG_lpt_NormalizeL2Transformation>`
* :doc:`PReluTransformation <openvino_docs_OV_UG_lpt_PReluTransformation>`
* :doc:`ReduceMaxTransformation <openvino_docs_OV_UG_lpt_ReduceMaxTransformation>`
* :doc:`ReduceMeanTransformation <openvino_docs_OV_UG_lpt_ReduceMeanTransformation>`
* :doc:`ReduceMinTransformation <openvino_docs_OV_UG_lpt_ReduceMinTransformation>`
* :doc:`ReduceSumTransformation <openvino_docs_OV_UG_lpt_ReduceSumTransformation>`
* :doc:`ReluTransformation <openvino_docs_OV_UG_lpt_ReluTransformation>`
* :doc:`ReshapeTransformation <openvino_docs_OV_UG_lpt_ReshapeTransformation>`
* :doc:`SqueezeTransformation <openvino_docs_OV_UG_lpt_SqueezeTransformation>`
* :doc:`ShuffleChannelsTransformation <openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation>`
* :doc:`SplitTransformation <openvino_docs_OV_UG_lpt_SplitTransformation>`
* :doc:`StridedSliceTransformation <openvino_docs_OV_UG_lpt_StridedSliceTransformation>`
* :doc:`TransposeTransformation <openvino_docs_OV_UG_lpt_TransposeTransformation>`
* :doc:`UnsqueezeTransformation <openvino_docs_OV_UG_lpt_UnsqueezeTransformation>`
* :doc:`VariadicSplitTransformation <openvino_docs_OV_UG_lpt_VariadicSplitTransformation>`
Decomposition transformations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Decomposition transformations decompose the ``FakeQuantize`` operation into a quantize operation (``FakeQuantize`` with low precision output) and dequantization operations (opposite to quantize, with low precision input and the original precision output). For dequantization operations, LPT uses three operations: ``Convert``, ``Subtract`` and ``Multiply``. The element-wise operations ``Subtract`` and ``Multiply`` have constants on their second branches. If dequantization operations are not handled at the end of the LPT pipeline, they will be fused back into the ``FakeQuantize``.
Original ``FakeQuantize``:
.. image:: _static/images/fq.common.svg
    :alt: FakeQuantize operation before LPT
``FakeQuantize`` after decomposition to quantization and dequantization operations:
.. image:: _static/images/fq.transformed.svg
    :alt: FakeQuantize operation after LPT
Dequantization operations handling transformations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this step, LPT transformations fuse dequantization operations or move them through existing model operations as much as possible.
Original ``Convolution`` operation in FP32 with dequantization operations before:
.. image:: _static/images/model_fq_and_convolution.common.svg
    :alt: Convolution operation before LPT
``Convolution`` operation in INT8 after decomposition and dequantization operations handling:
.. image:: _static/images/model_fq_and_convolution.transformed.svg
    :alt: Convolution operation after LPT
Step 4: Cleanup of the result model
-----------------------------------
LPT cleanup transformations are the final stage in the LPT pipeline. In this step, LPT transformations clean up the result model to avoid unhandled dequantization operations: they fuse dequantization operations to other model operations if possible (at least the ``Convert`` operations if not).
Transformations:
* :doc:`EliminateFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_EliminateFakeQuantizeTransformation>`
* :doc:`FoldConvertTransformation <openvino_docs_OV_UG_lpt_FoldConvertTransformation>`
* :doc:`FoldFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation>`
* :doc:`FuseConvertTransformation <openvino_docs_OV_UG_lpt_FuseConvertTransformation>`
* :doc:`FuseMultiplyToFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation>`
* :doc:`FuseSubtractToFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation>`
* :doc:`MultiplyToGroupConvolutionTransformation <openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation>`
There are more details in developer guide :doc:`Cleanup transformations <openvino_docs_OV_UG_lpt_step4_cleanup>`.
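For example, ``FuseMultiplyToFakeQuantizeTransformation`` relies on the fact that a constant ``Multiply`` after a ``FakeQuantize`` can be folded into the output interval: ``fq(x, ..., out_low, out_high) * c`` equals ``fq(x, ..., out_low * c, out_high * c)``. An illustrative numeric check (not LPT code):

```python
# Illustrative check of folding a constant Multiply into FakeQuantize
# output intervals (not LPT code).

def fake_quantize(x, in_low, in_high, out_low, out_high, levels=256):
    """Reference FakeQuantize: clamp, quantize to `levels` steps, rescale."""
    x = min(max(x, in_low), in_high)
    q = round((x - in_low) / (in_high - in_low) * (levels - 1))
    return q / (levels - 1) * (out_high - out_low) + out_low

c = 0.5
x = 1.7
separate = fake_quantize(x, 0.0, 2.55, 0.0, 2.55) * c        # FakeQuantize -> Multiply
fused = fake_quantize(x, 0.0, 2.55, 0.0 * c, 2.55 * c)       # Multiply folded into FQ
```

Because the input interval is untouched, the quantized integer value is identical in both graphs; only the output rescaling changes, so the standalone ``Multiply`` can be removed.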
``FakeQuantize`` operation with unhandled dequantization operations:
.. image:: _static/images/fq.transformed.svg
    :alt: FakeQuantize operation with dequantization operations before LPT
``FakeQuantize`` operation with fused dequantization operations:
.. image:: _static/images/fq.common.svg
    :alt: FakeQuantize operation with fused operations after LPT
Low precision transformations in plugin transformation pipeline
###############################################################
A typical transformation pipeline is described below.
Step 1. Common optimizations
++++++++++++++++++++++++++++
This step is optional for LPT but is typically present in OpenVINO™ plugins. The step doesn't use any LPT transformation. First, the step disables constant folding of dequantization operations on the constant subgraph on weights to prevent the loss of dequantization information in the next plugin transformations. After that, it optimizes the nGraph function and converts operations to operation set 1. Typically, usage of this step is the simplest way to meet LPT requirements for the input quantized model. If the plugin can guarantee that LPT input requirements are met, this step can be skipped.
.. doxygensnippet:: docs/snippets/lpt_intel_cpu_plugin.cpp
   :language: cpp
   :fragment: [lpt_common]
Step 2. Low precision transformations execution
+++++++++++++++++++++++++++++++++++++++++++++++
This step is mandatory. It configures and runs LPT transformations.
.. doxygensnippet:: docs/snippets/lpt_intel_cpu_plugin.cpp
   :language: cpp
   :fragment: [lpt_execution]
Step 3. Plugin-specific transformations
+++++++++++++++++++++++++++++++++++++++
This step is optional. It modifies the nGraph function to a device-specific operation set.
.. doxygensnippet:: docs/snippets/lpt_intel_cpu_plugin.cpp
   :language: cpp
   :fragment: [lpt_device]
Result model overview
#####################
Let's explore the quantized `TensorFlow implementation of ResNet-50 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`__ model. Use the `Model Downloader <https://docs.openvino.ai/2022.3/omz_tools_downloader.html>`__ tool to download the ``fp16`` model from the `OpenVINO™ Toolkit - Open Model Zoo repository <https://github.com/openvinotoolkit/open_model_zoo>`__:
.. code-block:: sh

   omz_downloader --name resnet-50-tf --precisions FP16-INT8
After that, quantize the model with the `Model Quantizer <https://docs.openvino.ai/2022.3/omz_tools_downloader.html>`__ tool.

.. code-block:: sh

   omz_quantizer --model_dir public/resnet-50-tf --dataset_dir <DATASET_DIR> --precisions=FP16-INT8
Inference
+++++++++
The simplest way to infer the model and collect performance counters is to use the :doc:`Benchmark Application <openvino_inference_engine_samples_benchmark_app_README>`:

.. code-block:: sh

   ./benchmark_app -m resnet-50-tf.xml -d CPU -niter 1 -api sync -report_type average_counters -report_folder pc_report_dir
If you infer the model with the OpenVINO™ CPU plugin and collect performance counters, all operations (except the last, non-quantized ``SoftMax``) are executed in INT8 precision.
Results analysis
++++++++++++++++
The resulting model depends on several factors:
* The original model quantization possibility and quantization quality. For some models, some operations cannot be quantized by the POT and NNCF tools. In this case, ``FakeQuantize`` operations are absent before these operations, and they will be inferred in the original precision.
* LPT customization and plugin supported operations. If the plugin does not support INT8 inference for some operation, the corresponding LPT transformation should be disabled, and the operation will be inferred in the original precision.
Information about layer precision is stored in the performance counters that are available from the OpenVINO Runtime API. For example, part of the performance counters table for the quantized `TensorFlow implementation of ResNet-50 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`__ model inference on the CPU plugin looks as follows:
.. list-table::
   :header-rows: 1

   * - layerName
     - execStatus
     - layerType
     - execType
     - realTime (ms)
     - cpuTime (ms)
   * - resnet_model/batch_normalization_15/FusedBatchNorm/Add
     - EXECUTED
     - Convolution
     - jit_avx512_1x1_I8
     - 0.377
     - 0.377
   * - resnet_model/conv2d_16/Conv2D/fq_input_0
     - NOT_RUN
     - FakeQuantize
     - undef
     - 0
     - 0
   * - resnet_model/batch_normalization_16/FusedBatchNorm/Add
     - EXECUTED
     - Convolution
     - jit_avx512_I8
     - 0.499
     - 0.499
   * - resnet_model/conv2d_17/Conv2D/fq_input_0
     - NOT_RUN
     - FakeQuantize
     - undef
     - 0
     - 0
   * - resnet_model/batch_normalization_17/FusedBatchNorm/Add
     - EXECUTED
     - Convolution
     - jit_avx512_1x1_I8
     - 0.399
     - 0.399
   * - resnet_model/add_4/fq_input_0
     - NOT_RUN
     - FakeQuantize
     - undef
     - 0
     - 0
   * - resnet_model/add_4
     - NOT_RUN
     - Eltwise
     - undef
     - 0
     - 0
   * - resnet_model/add_5/fq_input_1
     - NOT_RUN
     - FakeQuantize
     - undef
     - 0
     - 0
The ``execStatus`` column of the table includes possible values:
* ``EXECUTED`` - layer was executed by standalone primitive,
* ``NOT_RUN`` - layer was not executed by standalone primitive or was fused with another operation and executed in another layer primitive.
The ``execType`` column of the table includes inference primitives with specific suffixes. The layers have the following marks:
* Suffix ``I8`` for layers that had 8-bit data type input and were computed in 8-bit precision
* Suffix ``FP32`` for layers computed in 32-bit precision
As a result, all operations (except the non-quantized ``SoftMax`` at the end of the model) are inferred in low precision in the OpenVINO™ CPU plugin. Note that the resulting model contains ``FakeQuantize`` operations in FP32, but it is the plugin's responsibility to fuse these operations with the previous operations. The OpenVINO™ CPU plugin achieves maximally optimized inference for all operations by fusing an INT8 ``Convolution`` with FP32 output with a ``FakeQuantize`` operation with FP32 input and INT8 output. In this case, the OpenVINO™ CPU plugin uses INT8 and FP32 vectorized instructions but reports a single INT8 kernel for inference, which is the most optimized variant for this case.
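When scanning a large counters report, the suffix convention can be checked programmatically. The helper below is hypothetical (not part of OpenVINO); it simply maps an ``execType`` suffix to a precision label:

```python
# Hypothetical helper (not part of OpenVINO): derive the compute precision
# from the execType suffix reported in performance counters.
def exec_precision(exec_type: str) -> str:
    suffix = exec_type.rsplit("_", 1)[-1]
    return {"I8": "int8", "FP32": "fp32"}.get(suffix, "unknown")

assert exec_precision("jit_avx512_1x1_I8") == "int8"
assert exec_precision("jit_avx512_FP32") == "fp32"
assert exec_precision("undef") == "unknown"   # fused / not executed layers
```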
Mixed precision
###############
If an operation output in the LPT input model has ``fp16`` precision, dequantization computations are still performed in ``fp32`` precision. This approach is used to avoid accuracy loss in ``fp16`` arithmetic computations. The final output of the dequantization operation has the expected ``fp16`` precision.
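A small NumPy sketch of this behavior (assumed semantics, not the plugin implementation): the ``Subtract`` and ``Multiply`` arithmetic runs in ``fp32``, and only the final result is cast to ``fp16``:

```python
import numpy as np

# Assumed semantics, not plugin code: dequantize an int8 tensor for an fp16 model.
q = np.array([-128, -1, 0, 5, 127], dtype=np.int8)
scale = np.float32(0.02)        # Multiply constant (example value)
zero_point = np.float32(-3.0)   # Subtract constant (example value)

dq_fp32 = (q.astype(np.float32) - zero_point) * scale  # computed in fp32
result = dq_fp32.astype(np.float16)                    # final output is fp16, as expected

assert dq_fp32.dtype == np.float32 and result.dtype == np.float16
```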
Customization
#############
Low precision transformations are customizable. Built-in customization options:
* operation precision restrictions,
* operation per tensor quantization restrictions,
* update precisions,
* dequantization precision.
Operation precision restrictions
++++++++++++++++++++++++++++++++
This option defines which precisions are allowed for the operation input ports. The option value is passed as an input argument to the ``LowPrecision`` constructor. For example:
.. doxygensnippet:: docs/snippets/lpt_intel_cpu_plugin.cpp
   :language: cpp
   :fragment: [lpt_supported_precisions]
In the provided example, the ``Convolution`` operation inputs in the resulting model must have specific precisions: ``u8`` (unsigned int8) precision on input 0 (activations) and ``i8`` (signed int8) precision on input 1 (weights).
Operation per tensor quantization restrictions
++++++++++++++++++++++++++++++++++++++++++++++
This option defines whether an operation supports only per-tensor quantization. The option value is passed as an input argument to the ``LowPrecision`` constructor. For example:
.. doxygensnippet:: docs/snippets/lpt_intel_cpu_plugin.cpp
   :language: cpp
   :fragment: [per_tensor_quantization]
In the provided example, the ``Convolution`` operations in the resulting model must have per-tensor quantization on input 0 (activations).
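The difference between the two granularities can be sketched with NumPy (illustrative shapes and scale values, not plugin code): per-tensor quantization uses a single scale for the whole tensor, while per-channel quantization uses one scale per channel:

```python
import numpy as np

# Illustrative only: per-tensor vs per-channel dequantization scales
# for an int8 activation tensor of shape [N, C, H, W].
q = np.arange(-6, 6, dtype=np.int8).reshape(1, 3, 2, 2)

# Per-tensor: one scale for the whole tensor.
dq_per_tensor = q.astype(np.float32) * np.float32(0.05)

# Per-channel: one scale per channel, broadcast over [1, C, 1, 1].
per_channel_scale = np.array([0.05, 0.1, 0.2], dtype=np.float32).reshape(1, 3, 1, 1)
dq_per_channel = q.astype(np.float32) * per_channel_scale

assert dq_per_tensor.shape == dq_per_channel.shape == (1, 3, 2, 2)
```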
Update precisions
++++++++++++++++++
This option defines whether each LPT transformation updates precisions. The value is a boolean passed as the ``updatePrecisions`` member of ``LayerTransformation::Params``, which is an input argument of the ``LowPrecision`` constructor. All transformations are affected: if ``true``, low precision transformations update precisions to low precision; if ``false``, they do not. Typically, this option is used for plugin debugging.
Typical customization use cases
+++++++++++++++++++++++++++++++
Plugin-specific customization can be implemented via nGraph transformation callbacks. For example, asymmetric quantization support can be customized by using the ``LayerTransformation::isAsymmetricQuantization`` and ``WeightableLayerTransformation::isAsymmetricOnWeights`` methods in callbacks. For example:
.. doxygensnippet:: docs/snippets/lpt_intel_cpu_plugin.cpp
   :language: cpp
   :fragment: [asymmetric_quantization]
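The criterion such callbacks typically encode can be sketched as follows (the dictionary representation of a dequantization subgraph is purely hypothetical; only the rule itself matters): quantization is asymmetric when the dequantization ``Subtract`` (zero point) exists and is non-zero:

```python
# Hypothetical representation: a dequantization subgraph as a dict of constants.
# The rule: asymmetric quantization <=> a non-zero Subtract (zero point) is present.
def is_asymmetric(dequantization: dict) -> bool:
    zero_point = dequantization.get("subtract")  # None if no Subtract node exists
    return zero_point is not None and zero_point != 0

assert is_asymmetric({"subtract": 128, "multiply": 0.02}) is True
assert is_asymmetric({"multiply": 0.02}) is False   # symmetric: Multiply only
```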
@endsphinxdirective


QuantizationAlignment <openvino_docs_OV_UG_lpt_QuantizationAlignment>
QuantizationGranularity <openvino_docs_OV_UG_lpt_QuantizationGranularity>
@endsphinxdirective
Introduction
############
.. list-table::
   :header-rows: 1

   * - Name
     - Target
     - Required
     - Mutable
   * - :doc:`AvgPoolPrecisionPreserved <openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved>`
     - Precision
     - No
     - Yes
   * - :doc:`IntervalsAlignment <openvino_docs_OV_UG_lpt_IntervalsAlignment>`
     - Quantization interval
     - Yes
     - Yes
   * - :doc:`PrecisionPreserved <openvino_docs_OV_UG_lpt_PrecisionPreserved>`
     - Precision
     - Yes
     - Yes
   * - :doc:`Precisions <openvino_docs_OV_UG_lpt_Precisions>`
     - Precision
     - Yes
     - Yes
   * - :doc:`QuantizationAlignment <openvino_docs_OV_UG_lpt_QuantizationAlignment>`
     - Quantization granularity
     - Yes
     - Yes
   * - :doc:`QuantizationGranularity <openvino_docs_OV_UG_lpt_QuantizationGranularity>`
     - Quantization granularity
     - Yes
     - No
``Target`` attribute group defines attribute usage during model transformation for the best performance:
* ``Precision`` - the attribute defines the most optimal output port precision.
* ``Quantization interval`` - the attribute defines quantization interval.
* ``Quantization alignment`` - the attribute defines quantization granularity in runtime: per-channel or per-tensor quantization.
* ``Quantization granularity`` - the attribute is set by plugin to define quantization granularity: per-channel or per-tensor quantization.
``Required`` attribute group defines if attribute usage is required to get an optimal model during transformation:
* ``Yes`` - the attribute is used by all OpenVINO plugins for low-precision optimization.
* ``No`` - the attribute is used in a specific OpenVINO plugin.
``Mutable`` attribute group defines if transformation can update an existing attribute:
* ``Yes`` - the attribute can be updated by the next transformations in the pipeline. But attribute update order is still important.
* ``No`` - an existing attribute cannot be updated by the next transformation. A previously handled transformation has already optimized the model according to the current value.
``FakeQuantize`` decomposition is a mandatory part of low precision transformations. Attributes used during decomposition are mandatory. Optional attributes are required only for certain operations.
Attribute usage by transformations:
.. list-table::
   :header-rows: 1

   * - Attribute name
     - Created by transformations
     - Used by transformations
   * - PrecisionPreserved
     - MarkupPrecisions, MarkupAvgPoolPrecisionPreserved
     - AlignQuantizationIntervals, AlignQuantizationParameters, FakeQuantizeDecompositionTransformation, MarkupAvgPoolPrecisionPreserved
   * - AvgPoolPrecisionPreserved
     - MarkupAvgPoolPrecisionPreserved
     -
   * - Precisions
     - MarkupCanBeQuantized, MarkupPrecisions
     - FakeQuantizeDecompositionTransformation
   * - PerTensorQuantization
     - MarkupPerTensorQuantization
     -
   * - IntervalsAlignment
     - AlignQuantizationIntervals
     - FakeQuantizeDecompositionTransformation
   * - QuantizationAlignment
     - AlignQuantizationParameters
     - FakeQuantizeDecompositionTransformation
.. note::

   The same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, ``Precision`` attribute instances are created in ``MarkupCanBeQuantized`` and ``MarkupPrecisions`` transformations, but the reasons for their creation are different.
@endsphinxdirective


# Step 1. Prerequisites Transformations {#openvino_docs_OV_UG_lpt_step1_prerequisites}
@sphinxdirective
Prerequisites transformations are optional. They prepare a model before running other low precision transformations and do not operate with dequantization operations or update precisions. Prerequisites transformations include:
* :doc:`PullReshapeThroughDequantization <openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization>`
* :doc:`PullTransposeThroughDequantization <openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization>`
* :doc:`LinOpSequenceFusion <openvino_docs_OV_UG_lpt_LinOpSequenceFusion>`
@endsphinxdirective


# Step 2. Markup Transformations {#openvino_docs_OV_UG_lpt_step2_markup}
@sphinxdirective
This step defines the optimal ``FakeQuantize`` decomposition precisions for the best inference performance via operations markup with runtime attribute instances. Attributes are created for input and output ports and operations. Transformations do not change the operation output port precisions. A model markup low precision logic is decomposed and implemented into the following common markup transformations. The order of transformations is important:
1. :doc:`MarkupBias <openvino_docs_OV_UG_lpt_MarkupBias>`
2. :doc:`MarkupCanBeQuantized <openvino_docs_OV_UG_lpt_MarkupCanBeQuantized>`
3. :doc:`MarkupPrecisions <openvino_docs_OV_UG_lpt_MarkupPrecisions>`
4. :doc:`MarkupPerTensorQuantization <openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization>`
5. :doc:`MarkupAvgPoolPrecisionPreserved <openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved>`
6. :doc:`PropagatePrecisions <openvino_docs_OV_UG_lpt_PropagatePrecisions>`
7. :doc:`AlignQuantizationIntervals <openvino_docs_OV_UG_lpt_AlignQuantizationIntervals>`
8. :doc:`AlignQuantizationParameters <openvino_docs_OV_UG_lpt_AlignQuantizationParameters>`
The table of transformations and used attributes:
.. list-table::
   :header-rows: 1

   * - Transformation name
     - Create attributes
     - Use attributes
   * - MarkupBias
     - Bias
     -
   * - MarkupCanBeQuantized
     - Precisions
     -
   * - MarkupPrecisions
     - Precisions, PrecisionPreserved
     -
   * - MarkupPerTensorQuantization
     - PerTensorQuantization
     -
   * - MarkupAvgPoolPrecisionPreserved
     - AvgPoolPrecisionPreserved
     - Precisions, PrecisionPreserved
   * - PropagatePrecisions
     - Precisions
     - Precisions, PrecisionPreserved
   * - AlignQuantizationIntervals
     - IntervalsAlignment
     - PrecisionPreserved
   * - AlignQuantizationParameters
     - QuantizationAlignment
     - PrecisionPreserved, PerTensorQuantization
.. note::

   The same type of attribute instances can be created in different transformations. This approach is the result of the transformation single-responsibility principle. For example, ``Precision`` attribute instances are created in ``MarkupCanBeQuantized`` and ``MarkupPrecisions`` transformations, but the reasons for their creation are different.
Common markup transformations can be decomposed into simpler utility markup transformations. The order of Markup utility transformations is not important:
* :doc:`CreateAttribute <openvino_docs_OV_UG_lpt_CreateAttribute>`
* :doc:`CreatePrecisionsDependentAttribute <openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute>`
* :doc:`PropagateThroughPrecisionPreserved <openvino_docs_OV_UG_lpt_PropagateThroughPrecisionPreserved>`
* :doc:`PropagateToInput <openvino_docs_OV_UG_lpt_PropagateToInput>`
* :doc:`UpdateSharedPrecisionPreserved <openvino_docs_OV_UG_lpt_UpdateSharedPrecisionPreserved>`
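As a rough illustration of how such utility transformations cooperate, the sketch below (hypothetical data structures, not the nGraph API) propagates a ``Precisions`` attribute from a quantized consumer backward through operations marked as precision preserved, in the spirit of ``PropagateThroughPrecisionPreserved``:

```python
# Hypothetical graph: op -> list of parent ops (not the nGraph API).
parents = {
    "convolution2": ["avg_pool"],
    "avg_pool": ["concat2"],
    "concat2": ["fake_quantize1", "fake_quantize2"],
}
precision_preserved = {"avg_pool", "concat2"}     # PrecisionPreserved markup
precisions = {"convolution2": {"u8"}}             # restriction on the quantized consumer

def propagate(op):
    """Share the Precisions attribute upward through precision preserved ops."""
    for parent in parents.get(op, []):
        precisions[parent] = precisions[op]
        if parent in precision_preserved:
            propagate(parent)

propagate("convolution2")
assert precisions["concat2"] == {"u8"}
assert precisions["fake_quantize1"] == {"u8"}
```

In real LPT the shared attribute instances are attached to ports and operations, and the ``FakeQuantize`` decomposition later consumes them; the sketch only shows the propagation idea.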
Let's explore all transformations and their relations in detail, using the same example model:
.. image:: _static/images/step2_markup_original.svg
The original model key features:
* The first concatenation operation, ``concat1``, has a non-quantized ``convolution1`` consumer.
* The second concatenation operation, ``concat2``, has a quantized ``convolution2`` consumer with the following requirements:

  * support for ``unsigned int8`` on activations,
  * per-tensor quantization.

* Between the ``concat2`` concatenation operation and ``Convolution``, there is an ``AvgPool`` operation, which mathematically should return an ``f32`` tensor. But the ``MarkupAvgPoolPrecisionPreserved`` transformation is active. This allows the low precision transformation that follows the ``AvgPool`` to propagate a low precision tensor to the next consumer.
Transformations are run with the following parameters:
.. doxygensnippet:: docs/snippets/lpt_intel_cpu_plugin.cpp
   :language: cpp
   :fragment: [lpt_markup_pipeline]
1. MarkupCanBeQuantized
#######################
The transformation marks operations that cannot be quantized. No attributes are required before the transformation.
Changes in the example model after the ``MarkupCanBeQuantized`` transformation:
* The non-quantized ``convolution1`` operation is marked by the ``Precisions`` attribute with empty values. This attribute allows the next transformations to ignore the non-quantized operation.
Result model:
.. image:: _static/images/step2_markup1.svg
   :alt: MarkupCanBeQuantized
Model display features (here and below):
* The attributes added by the current transformation are marked in bold.
* If attributes do not fit into one line, then one line consists of only one attribute.
2. MarkupPrecisions
###################
The transformation is required and includes two tasks:
1. Mark operation input ports (create a ``Precision`` attribute instance) according to the provided restrictions: input port index and required precisions. Restrictions are provided as an input argument of the ``ngraph::pass::low_precision::LowPrecision`` constructor.
2. Mark precision preserved operations.
No attributes are required before the transformation. Changes in the example model after ``MarkupPrecisions`` transformation:
* Both concatenation operations are marked as precision preserved operations, which allows precision to be propagated via these operations.
* The quantized ``convolution2`` operation is marked by the ``Precisions`` attribute with ``u8`` precision on activations and ``i8`` precision on weights, according to the provided restrictions. This attribute instance specifies which precisions are required for the quantized ``Convolution`` operation.
Result model:
.. image:: _static/images/step2_markup2.svg
   :alt: MarkupPrecisions result
3. MarkupPerTensorQuantization
##############################
The transformation is required. It marks operations (creates a ``PerTensorQuantization`` attribute instance) according to the provided restrictions: operations that require per-tensor quantization. No attributes are required before the transformation.
Changes in the example model after the ``MarkupPerTensorQuantization`` transformation:
* Both ``Convolution`` operations are marked by ``PerTensorQuantization``.
Result model:
.. image:: _static/images/step2_markup3.svg
:alt: MarkupPerTensorQuantization result
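Per-tensor quantization has a simple observable signature on a ``FakeQuantize`` operation: its interval inputs are scalars, one ``(low, high)`` pair shared by the whole tensor instead of one pair per channel. The check below is an illustrative sketch, not the LPT implementation:

```python
# Illustrative check: a quantization is per-tensor when the interval inputs
# hold a single value, and per-channel when they hold one value per channel.
def is_per_tensor(output_low, output_high):
    return len(output_low) == 1 and len(output_high) == 1

assert is_per_tensor([0.0], [2.55])               # one interval for the tensor
assert not is_per_tensor([0.0, 0.0], [1.0, 2.0])  # per-channel intervals
```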
4. MarkupAvgPoolPrecisionPreserved
##################################
The transformation is optional. ``MarkupAvgPoolPrecisionPreserved`` marks ``AvgPool`` operations as precision preserved or not. An ``AvgPool`` operation is precision preserved if the next not-precision-preserved operation can be inferred in low precision. In other words, ``AvgPool`` operations become precision preserved operations to speed up model inference. The transformation uses the ``PrecisionPreserved`` attributes created before. The transformation is combined and uses:
* CreatePrecisionsDependentAttribute
* PropagateThroughPrecisionPreserved
* UpdateSharedPrecisionPreserved
Changes in the example model after ``MarkupAvgPoolPrecisionPreserved`` transformation:
* ``AvgPool`` operations are marked by ``PrecisionPreserved`` and ``AvgPoolPrecisionPreserved`` (not used below).
Result model:
.. image:: _static/images/step2_markup4.svg
:alt: MarkupAvgPoolPrecisionPreserved
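The decision described above, whether an ``AvgPool`` stays in low precision, can be sketched as a walk over its consumers. Everything below (the graph encoding, the attribute names) is a hypothetical illustration, not the LPT implementation:

```python
# Hypothetical sketch of the MarkupAvgPoolPrecisionPreserved decision: follow
# the consumers of an AvgPool through precision preserved operations and keep
# the AvgPool in low precision only if every non-preserved consumer reached
# this way can be inferred in low precision.
def avg_pool_precision_preserved(consumers, attrs, avg_pool, low_precision_ops):
    stack = list(consumers.get(avg_pool, []))
    while stack:
        node = stack.pop()
        if attrs[node].get("precision_preserved"):
            stack.extend(consumers.get(node, []))      # look past this node
        elif attrs[node]["type"] not in low_precision_ops:
            return False                               # forces full precision
    return True

consumers = {"avgPool": ["concat"], "concat": ["convolution"]}
attrs = {"concat": {"type": "Concat", "precision_preserved": True},
         "convolution": {"type": "Convolution"}}
assert avg_pool_precision_preserved(consumers, attrs, "avgPool", {"Convolution"})
```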
5. PropagatePrecisions
######################
The transformation is required. ``PropagatePrecision`` is a key transformation in the markup pipeline, which marks ``FakeQuantize`` output port precisions. The transformation uses ``PrecisionPreserved`` attribute instances created before. The transformation is combined and uses:
* CreateAttribute
* PropagateThroughPrecisionPreserved
* PropagateToInput
Changes in the example model after ``PropagatePrecisions`` transformation:
* All precision preserved operations are marked by the ``Precisions`` attribute instance, which defines the required precision for the operation.
* ``FakeQuantize`` operation output ports are marked by ``Precisions`` attribute instances, which define the target precision for decomposition. In the sample model, ``FakeQuantize`` operations have signed intervals, but the ``Precisions`` attributes are initialized by ``u8`` (``unsigned int8``) values as a result of the restrictions for ``Convolution`` operations applied during the transformations.
Result model:
.. image:: _static/images/step2_markup5.svg
:alt: PropagatePrecisions
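The propagation described above can be sketched as a backward walk: the precisions required by a quantized consumer are pushed back through precision preserved operations until they reach ``FakeQuantize`` output ports. Graph encoding and attribute names below are invented for the example, not the LPT API:

```python
# Hedged sketch of the PropagatePrecisions idea: walk from a restricted input
# port back through precision preserved operations to FakeQuantize outputs.
def propagate(producers, attrs, start, precisions):
    marks, stack = {}, [start]
    while stack:
        node = stack.pop()
        if attrs[node]["type"] == "FakeQuantize":
            marks[node] = precisions                   # mark the FQ output port
        elif attrs[node].get("precision_preserved"):
            marks[node] = precisions
            stack.extend(producers.get(node, []))      # keep walking upwards
    return marks

producers = {"concat2": ["fakeQuantize2", "concat1"], "concat1": ["fakeQuantize1"]}
attrs = {"concat1": {"type": "Concat", "precision_preserved": True},
         "concat2": {"type": "Concat", "precision_preserved": True},
         "fakeQuantize1": {"type": "FakeQuantize"},
         "fakeQuantize2": {"type": "FakeQuantize"}}
marks = propagate(producers, attrs, "concat2", ["u8"])
# both FakeQuantize operations end up marked with u8
```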
.. note::
``AlignQuantizationIntervals`` and ``AlignQuantizationParameters`` transformations are required if the model has quantized concatenation operations.
6. AlignQuantizationIntervals
#############################
The transformation is required for models with the quantized operation. The transformation marks ``FakeQuantize`` operation and precision preserved consumers to combine quantization information from different ``FakeQuantize`` operations for future quantization intervals alignment. The transformation is combined and uses:
* CreateAttribute
* PropagateThroughPrecisionPreserved
Changes in the example model after ``AlignQuantizationIntervals`` transformation:
* All ``FakeQuantize`` operations and their precision preserved consumers are marked by the ``IntervalsAlignment`` attribute instance.
Result model:
.. image:: _static/images/step2_markup6.svg
:alt: AlignQuantizationIntervals
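The combining step can be illustrated numerically. Conceptually, ``IntervalsAlignment`` derives one shared interval for all ``FakeQuantize`` operations feeding a quantized concatenation; the sketch below takes the union of the individual output intervals and is illustrative only:

```python
# Illustrative only: combine FakeQuantize output intervals into one shared
# interval (the union), so all branches can be aligned to it later.
def combine_intervals(intervals):
    lows, highs = zip(*intervals)
    return min(lows), max(highs)

shared = combine_intervals([(-1.28, 1.27), (0.0, 2.55)])
assert shared == (-1.28, 2.55)
```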
7. AlignQuantizationParameters
##############################
The transformation is required for models with a quantized concatenation operation. The transformation marks ``FakeQuantize`` precision preserved consumers to align quantization intervals. The transformation is combined and uses:
* CreateAttribute
* PropagateThroughPrecisionPreserved
* UpdateSharedPrecisionPreserved
Changes in the example model after ``AlignQuantizationParameters`` transformation:
* All ``FakeQuantize`` precision preserved consumers are marked by ``QuantizationAlignment`` attribute instance. ``convolution1`` input ports are marked by ``Precisions`` attribute instances with empty precisions collection. As a result, the ``convolution1`` operation was detected as not quantized, and the ``QuantizationAlignment`` attribute default value ``false`` does not change. ``convolution2`` input ports are marked by ``Precisions`` attribute instances with not empty precisions collection. ``convolution2`` operation was detected as quantized with the ``PerTensorQuantization`` attribute, and the ``QuantizationAlignment`` attribute default value changed to ``true``.
Final model:
.. image:: _static/images/step2_markup7.svg
:alt: AlignQuantizationParameters
@endsphinxdirective


@@ -1,50 +1,62 @@
# Step 3. Main Transformations {#openvino_docs_OV_UG_lpt_step3_main}
@sphinxdirective
Main transformations make up the majority of low precision transformations. These transformations operate on dequantization operations. Main transformations include:
* :doc:`AddTransformation <openvino_docs_OV_UG_lpt_AddTransformation>`
* :doc:`AvgPoolTransformation <openvino_docs_OV_UG_lpt_AvgPoolTransformation>`
* :doc:`ClampTransformation <openvino_docs_OV_UG_lpt_ClampTransformation>`
* :doc:`ConcatTransformation <openvino_docs_OV_UG_lpt_ConcatTransformation>`
* :doc:`ConvolutionTransformation <openvino_docs_OV_UG_lpt_ConvolutionTransformation>`
* :doc:`ConvolutionBackpropDataTransformation <openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation>`
* :doc:`DepthToSpaceTransformation <openvino_docs_OV_UG_lpt_DepthToSpaceTransformation>`
* :doc:`FakeQuantizeDecompositionTransformation <openvino_docs_OV_UG_lpt_FakeQuantizeDecompositionTransformation>`
* :doc:`FakeQuantizeTransformation <openvino_docs_OV_UG_lpt_FakeQuantizeTransformation>`
* :doc:`InterpolateTransformation <openvino_docs_OV_UG_lpt_InterpolateTransformation>`
* :doc:`GroupConvolutionTransformation <openvino_docs_OV_UG_lpt_GroupConvolutionTransformation>`
* :doc:`GatherTransformation <openvino_docs_OV_UG_lpt_GatherTransformation>`
* :doc:`MatMulTransformation <openvino_docs_OV_UG_lpt_MatMulTransformation>`
* :doc:`MaxPoolTransformation <openvino_docs_OV_UG_lpt_MaxPoolTransformation>`
* :doc:`MultiplyTransformation <openvino_docs_OV_UG_lpt_MultiplyTransformation>`
* :doc:`MVNTransformation <openvino_docs_OV_UG_lpt_MVNTransformation>`
* :doc:`NormalizeL2Transformation <openvino_docs_OV_UG_lpt_NormalizeL2Transformation>`
* :doc:`PReluTransformation <openvino_docs_OV_UG_lpt_PReluTransformation>`
* :doc:`ReduceMaxTransformation <openvino_docs_OV_UG_lpt_ReduceMaxTransformation>`
* :doc:`ReduceMeanTransformation <openvino_docs_OV_UG_lpt_ReduceMeanTransformation>`
* :doc:`ReduceMinTransformation <openvino_docs_OV_UG_lpt_ReduceMinTransformation>`
* :doc:`ReduceSumTransformation <openvino_docs_OV_UG_lpt_ReduceSumTransformation>`
* :doc:`ReluTransformation <openvino_docs_OV_UG_lpt_ReluTransformation>`
* :doc:`ReshapeTransformation <openvino_docs_OV_UG_lpt_ReshapeTransformation>`
* :doc:`SqueezeTransformation <openvino_docs_OV_UG_lpt_SqueezeTransformation>`
* :doc:`ShuffleChannelsTransformation <openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation>`
* :doc:`SplitTransformation <openvino_docs_OV_UG_lpt_SplitTransformation>`
* :doc:`StridedSliceTransformation <openvino_docs_OV_UG_lpt_StridedSliceTransformation>`
* :doc:`TransposeTransformation <openvino_docs_OV_UG_lpt_TransposeTransformation>`
* :doc:`UnsqueezeTransformation <openvino_docs_OV_UG_lpt_UnsqueezeTransformation>`
* :doc:`VariadicSplitTransformation <openvino_docs_OV_UG_lpt_VariadicSplitTransformation>`
Let's explore some main transformations on the example model. Original model:
.. image:: _static/images/step3_original.svg
:alt: Original model
Result model after main transformations:
.. image:: _static/images/step3_transformed.svg
:alt: Transformed model
Changes in the example model after main transformation:
* All ``FakeQuantize`` operations (``fakeQuantize1``, ``fakeQuantize2`` and ``fakeQuantize3``) were decomposed:
* original ``FakeQuantize`` operations were replaced with new operations with other output intervals and output port precision,
* dequantization operations.
* Dequantization operations were moved via precision preserved (``concat1`` and ``concat2``) and quantized (``convolution2``) operations.
.. note::
The left branch (branch #1) does not require per-tensor quantization. As a result, the ``fakeQuantize1`` output interval is [0, 255]. But quantized ``convolution2`` requires per-tensor quantization on the right branch (branch #2). Therefore, the intervals of all connected ``FakeQuantize`` operations (``fakeQuantize1`` and ``fakeQuantize2``) are aligned to have per-tensor quantization after the concatenation (``concat2``) operation.
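The decomposition itself can be illustrated numerically. A sketch under the assumptions of this example (``u8``, 256 levels): the original ``FakeQuantize`` becomes a quantization to integers in [0, 255], and the dequantization operations restore the original range:

```python
# Numeric sketch of FakeQuantize decomposition (illustrative, u8 / 256 levels):
# quantize to an integer level, then dequantize back with scale and offset.
def decompose(x, in_low, in_high, levels=256):
    scale = (in_high - in_low) / (levels - 1)
    clamped = min(max(x, in_low), in_high)
    q = round((clamped - in_low) / scale)   # integer value carried in u8
    dequantized = q * scale + in_low        # dequantization Multiply + Add
    return q, dequantized

q, restored = decompose(1.0, 0.0, 2.55)
assert q == 100 and abs(restored - 1.0) < 1e-6
```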
@endsphinxdirective


@@ -1,8 +1,13 @@
# Step 4. Cleanup Transformations {#openvino_docs_OV_UG_lpt_step4_cleanup}
@sphinxdirective
* :doc:`EliminateFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_EliminateFakeQuantizeTransformation>`
* :doc:`FoldConvertTransformation <openvino_docs_OV_UG_lpt_FoldConvertTransformation>`
* :doc:`FoldFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_FoldFakeQuantizeTransformation>`
* :doc:`FuseConvertTransformation <openvino_docs_OV_UG_lpt_FuseConvertTransformation>`
* :doc:`FuseMultiplyToFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_FuseMultiplyToFakeQuantizeTransformation>`
* :doc:`FuseSubtractToFakeQuantizeTransformation <openvino_docs_OV_UG_lpt_FuseSubtractToFakeQuantizeTransformation>`
* :doc:`MultiplyToGroupConvolutionTransformation <openvino_docs_OV_UG_lpt_MultiplyToGroupConvolutionTransformation>`
@endsphinxdirective


@@ -0,0 +1,3 @@
# EliminateFakeQuantizeTransformation transformation {#openvino_docs_OV_UG_lpt_EliminateFakeQuantizeTransformation}
The `ngraph::pass::low_precision::EliminateFakeQuantizeTransformation` class represents the `EliminateFakeQuantizeTransformation` transformation.


@@ -19,7 +19,7 @@
Model Optimizer is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
To use it, you need a pre-trained deep learning model in one of the supported formats: TensorFlow, PyTorch, PaddlePaddle, TensorFlow Lite, MXNet, Caffe, Kaldi, or ONNX. Model Optimizer converts the model to the OpenVINO Intermediate Representation format (IR), which you can infer later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
Note that Model Optimizer does not infer models.


@@ -1,179 +1,85 @@
# Operation Sets in OpenVINO {#openvino_docs_MO_DG_IR_and_opsets}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_ops_opset
openvino_docs_operations_specifications
openvino_docs_ops_broadcast_rules
This article provides essential information on the format used for representation of deep learning models in OpenVINO toolkit and supported operation sets.
Overview of Artificial Neural Networks Representation
#####################################################
A deep learning network is usually represented as a directed graph describing the flow of data from the network input data to the inference results.
Input data can be in the form of images, video, text, audio, or preprocessed information representing objects from the target area of interest.
Here is an illustration of a small graph representing a model that consists of a single Convolutional layer and activation function:
.. image:: _static/images/small_IR_graph_demonstration.png
Vertices in the graph represent layers or operation instances such as convolution, pooling, and element-wise operations with tensors.
The terms "layer" and "operation" are used interchangeably within the OpenVINO documentation and define how the input data is processed to produce output data for a node in a graph.
An operation node in a graph may consume data at one or multiple input ports.
For example, an element-wise addition operation has two input ports that accept tensors to be summed.
Some operations do not have any input ports, for example the ``Const`` operation, which produces its data without any input.
An edge between operations represents data flow or data dependency implied from one operation node to another.
Each operation produces data on one or multiple output ports. For example, convolution produces an output tensor with activations at a single output port. The ``Split`` operation usually has multiple output ports, each producing part of an input tensor.
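The multi-output behaviour of ``Split`` mentioned above can be sketched on a plain Python list standing in for one tensor axis (an illustration of the semantics, not an OpenVINO API):

```python
# Split semantics sketched on a list: one input, several output "ports",
# each carrying an equal part of the input axis.
def split(values, num_splits):
    size, rem = divmod(len(values), num_splits)
    assert rem == 0, "the split axis must be divisible by num_splits"
    return [values[i * size:(i + 1) * size] for i in range(num_splits)]

assert split([1, 2, 3, 4, 5, 6], 3) == [[1, 2], [3, 4], [5, 6]]
```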
Depending on a deep learning framework, the graph can also contain extra nodes that explicitly represent tensors between operations.
In such representations, operation nodes are not connected to each other directly. Instead, they use data nodes as intermediate stops for the data flow.
If data nodes are not used, the produced data is associated with an output port of the corresponding operation node that produces the data.
A set of various operations used in a network is usually fixed for each deep learning framework.
It determines expressiveness and level of representation available in that framework.
Sometimes, a network that can be represented in one framework is hard or impossible to represent in another one, or requires a significantly different graph, because the operation sets used in those two frameworks do not match.
## Intermediate Representation Used in OpenVINO
OpenVINO toolkit introduces its own format of graph representation and its own operation set.
A graph is represented with two files: an XML file and a binary file.
This representation is commonly referred to as the *Intermediate Representation* or *IR*.
The XML file describes a network topology using a `<layer>` tag for an operation node and an `<edge>` tag for a data-flow connection.
Each operation has a fixed number of attributes that define operation flavor used for a node.
For example, the `Convolution` operation has such attributes as `dilation`, `stride`, `pads_begin`, and `pads_end`.
The XML file does not have big constant values like convolution weights.
Instead, it refers to a part of the accompanying binary file that stores such values in a binary format.
Here is an example of a small IR XML file that corresponds to a graph from the previous section:
```xml
<?xml version="1.0" ?>
<net name="model_file_name" version="10">
<layers>
<layer id="0" name="input" type="Parameter" version="opset1">
<data element_type="f32" shape="1,3,32,100"/> <!-- attributes of operation -->
<output>
<!-- description of output ports with type of element and tensor dimensions -->
<port id="0" precision="FP32">
<dim>1</dim>
<dim>3</dim>
<dim>32</dim>
<dim>100</dim>
</port>
</output>
</layer>
<layer id="1" name="conv1/weights" type="Const" version="opset1">
<!-- Const is the only operation from opset1 that refers to the IR binary file, specifying offset and size in bytes relative to the beginning of the file. -->
<data element_type="f32" offset="0" shape="64,3,3,3" size="6912"/>
<output>
<port id="1" precision="FP32">
<dim>64</dim>
<dim>3</dim>
<dim>3</dim>
<dim>3</dim>
</port>
</output>
</layer>
<layer id="2" name="conv1" type="Convolution" version="opset1">
<data auto_pad="same_upper" dilations="1,1" output_padding="0,0" pads_begin="1,1" pads_end="1,1" strides="1,1"/>
<input>
<port id="0">
<dim>1</dim>
<dim>3</dim>
<dim>32</dim>
<dim>100</dim>
</port>
<port id="1">
<dim>64</dim>
<dim>3</dim>
<dim>3</dim>
<dim>3</dim>
</port>
</input>
<output>
<port id="2" precision="FP32">
<dim>1</dim>
<dim>64</dim>
<dim>32</dim>
<dim>100</dim>
</port>
</output>
</layer>
<layer id="3" name="conv1/activation" type="ReLU" version="opset1">
<input>
<port id="0">
<dim>1</dim>
<dim>64</dim>
<dim>32</dim>
<dim>100</dim>
</port>
</input>
<output>
<port id="1" precision="FP32">
<dim>1</dim>
<dim>64</dim>
<dim>32</dim>
<dim>100</dim>
</port>
</output>
</layer>
<layer id="4" name="output" type="Result" version="opset1">
<input>
<port id="0">
<dim>1</dim>
<dim>64</dim>
<dim>32</dim>
<dim>100</dim>
</port>
</input>
</layer>
</layers>
<edges>
<!-- Connections between layer nodes: based on ids for layers and ports used in the descriptions above -->
<edge from-layer="0" from-port="0" to-layer="2" to-port="0"/>
<edge from-layer="1" from-port="1" to-layer="2" to-port="1"/>
<edge from-layer="2" from-port="2" to-layer="3" to-port="0"/>
<edge from-layer="3" from-port="1" to-layer="4" to-port="0"/>
</edges>
<meta_data>
<!-- This section is not related to the topology; it contains auxiliary information for debugging purposes. -->
<MO_version value="2019.1"/>
<cli_parameters>
<blobs_as_inputs value="True"/>
<caffe_parser_path value="DIR"/>
<data_type value="float"/>
...
<!-- Omitted a long list of CLI options that always are put here by MO for debugging purposes. -->
</cli_parameters>
</meta_data>
</net>
```
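Because the IR is plain XML, the topology in the example above can be inspected with the standard library alone. The snippet below only assumes the tags shown in the example (``layer``, ``edge``), not any OpenVINO API:

```python
# Reading the layers and the data-flow edges of an IR file with the Python
# standard library; the embedded IR is a trimmed version of the example above.
import xml.etree.ElementTree as ET

IR = """<net name="model_file_name" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter" version="opset1"/>
    <layer id="2" name="conv1" type="Convolution" version="opset1"/>
  </layers>
  <edges>
    <edge from-layer="0" from-port="0" to-layer="2" to-port="0"/>
  </edges>
</net>"""

net = ET.fromstring(IR)
layers = {layer.get("id"): layer.get("type") for layer in net.iter("layer")}
edges = [(e.get("from-layer"), e.get("to-layer")) for e in net.iter("edge")]
assert layers == {"0": "Parameter", "2": "Convolution"}
assert edges == [("0", "2")]
```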
The IR does not use explicit data nodes described in the previous section.
In contrast, properties of data such as tensor dimensions and their data types are described as properties of input and output ports of operations.
Operation Sets
##############
Operations in OpenVINO Operation Sets are selected based on capabilities of supported deep learning frameworks and hardware capabilities of the target inference device.
A set consists of several groups of operations:
* Conventional deep learning layers such as ``Convolution``, ``MaxPool``, and ``MatMul`` (also known as ``FullyConnected``).
* Various activation functions such as ``ReLU``, ``Tanh``, and ``PReLU``.
* Generic element-wise arithmetic tensor operations such as ``Add``, ``Subtract``, and ``Multiply``.
* Comparison operations that compare two numeric tensors and produce boolean tensors, for example, ``Less``, ``Equal``, ``Greater``.
* Logical operations that deal with boolean tensors, for example, ``And``, ``Xor``, ``Not``.
* Data movement operations that deal with parts of tensors, for example, ``Concat``, ``Split``, ``StridedSlice``, ``Select``.
* Specialized operations that implement complex algorithms dedicated to models of a specific type, for example, ``DetectionOutput``, ``RegionYolo``, ``PriorBox``.
For more information, refer to the complete description of the supported operation sets in the :doc:`Available Operation Sets <openvino_docs_ops_opset>` article.
How to Read Opset Specification
###############################
The :doc:`Available Operation Sets <openvino_docs_ops_opset>` article lists both opsets and operations. Each opset specification contains links to the descriptions of the operations included in that opset.
Two or more opsets may refer to the same operation.
That means an operation is kept unchanged from one operation set to another.
The description of each operation has a ``Versioned name`` field.
For example, the ``ReLU`` entry point in :doc:`opset1 <openvino_docs_ops_opset1>` refers to :doc:`ReLU-1 <openvino_docs_ops_activation_ReLU_1>` as the versioned name.
Meanwhile, ``ReLU`` in ``opset2`` refers to the same ``ReLU-1``. Both entries denote the same operation with a single :doc:`description <openvino_docs_ops_activation_ReLU_1>`, which means that ``opset1`` and ``opset2`` share the same ``ReLU`` operation.
To differentiate versions of the same operation type such as ``ReLU``, the ``-N`` suffix is used in a versioned name of the operation.
The ``N`` suffix usually refers to the first occurrence of ``opsetN`` where this version of the operation is introduced.
There is no guarantee that new operations will be named according to that rule. The naming convention might change, but not for old operations, which are frozen completely.
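The aliasing rule described above can be pictured as plain name tables. The sketch is illustrative; only ``ReLU-1`` is taken from the text, and the second entry is a hypothetical new operation:

```python
# Opsets sketched as name -> versioned-operation tables; a new opset aliases
# unchanged operations from the previous one and adds new entries.
opset1 = {"ReLU": "ReLU-1"}
opset2 = dict(opset1)           # ReLU is kept unchanged, so it is aliased
opset2["NewOp"] = "NewOp-2"     # hypothetical operation introduced in opset2

assert opset1["ReLU"] == opset2["ReLU"] == "ReLU-1"
assert "NewOp" not in opset1
```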
IR Versions vs Operation Set Versions
######################################
The expressiveness of operations in OpenVINO is highly dependent on the supported frameworks and target hardware capabilities.
As the frameworks and hardware capabilities grow over time, the operation set is constantly evolving to support new models.
@@ -183,60 +89,45 @@ Version of IR specifies the rules which are used to read the XML and binary file
Historically, there are two major IR version epochs:
1. The older one includes IR versions from version 1 to version 7 without versioning of the operation set. During that epoch, the operation set grew evolutionarily, accumulating more layer types and extending existing layer semantics. Changing the operation set for those versions meant increasing the IR version.
2. OpenVINO 2020.1 is the starting point of the next epoch. With IR version 10 introduced in OpenVINO 2020.1, the versioning of the operation set is tracked separately from the IR versioning. Also, the operation set was significantly reworked as a result of the nGraph integration into OpenVINO.
The first supported operation set in the new epoch is ``opset1``.
The number after ``opset`` is going to be increased each time new operations are added or old operations deleted at the release cadence.
The operations from the new epoch cover more TensorFlow and ONNX operations that better match the original operation semantics from the frameworks, compared to the operation set used in the older IR versions (7 and lower).
The name of the opset is specified for each operation in IR.
The IR version is specified once.
Here is an example from the IR snippet:
.. code-block:: xml

   <?xml version="1.0" ?>
   <net name="model_file_name" version="10"> <!-- Version of the whole IR file is here; it is 10 -->
       <layers>
           <!-- Version of operation set that the layer belongs to is described in <layer>
                tag attributes. For this operation, it is version="opset1". -->
           <layer id="0" name="input" type="Parameter" version="opset1">
               <data element_type="f32" shape="1,3,32,100"/> <!-- attributes of operation -->
               <output>
                   <!-- description of output ports with type of element and tensor dimensions -->
                   <port id="0" precision="FP32">
                       <dim>1</dim>
                       <dim>3</dim>
                       ...
The ``type="Parameter"`` and ``version="opset1"`` attributes in the example above mean "use the version of the ``Parameter`` operation that is included in the ``opset1`` operation set."
When a new operation set is introduced, most of the operations remain unchanged and are just aliased from the previous operation set within a new one.
The goal of operation set version evolution is to add new operations, and change small fractions of existing operations (fixing bugs and extending semantics).
However, such changes affect only new versions of operations from a new operation set, while old operations are used by specifying an appropriate ``version``.
When an old ``version`` is specified, the behavior will be kept unchanged from that specified version to provide backward compatibility with older IRs.
A single ``xml`` file with IR may contain operations from different opsets.
An operation that is included in several opsets may be referred to with ``version`` which points to any opset that includes that operation.
For example, the same ``Convolution`` can be used with ``version="opset1"`` and ``version="opset2"`` because both opsets have the same ``Convolution`` operations.
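Because the opset binding is just an XML attribute, it can be inspected with standard XML tooling. The following sketch is illustrative only (the IR snippet is hand-written, not produced by any OpenVINO tool), but it shows the one-version-per-file / one-opset-per-layer split described above:

```python
import xml.etree.ElementTree as ET

# A minimal IR-like snippet; a real file would come from ET.parse("model.xml").
ir = """<net name="model_file_name" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter" version="opset1"/>
    <layer id="1" name="conv" type="Convolution" version="opset1"/>
  </layers>
</net>"""

root = ET.fromstring(ir)
print("IR version:", root.get("version"))   # one IR version per file
for layer in root.iter("layer"):
    # each operation carries its own opset in the "version" attribute
    print(layer.get("type"), "->", layer.get("version"))
```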
How to Read Opset Specification
###############################
In the :doc:`Available Operation Sets <openvino_docs_ops_opset>` article, there are opsets and there are operations.
Each opset specification has a list of links to descriptions of the operations included in that specific opset.
Two or more opsets may refer to the same operation.
That means an operation is kept unchanged from one operation set to another.
The description of each operation has a ``Versioned name`` field.
For example, the ``ReLU`` entry point in :doc:`opset1 <openvino_docs_ops_opset1>` refers to :doc:`ReLU-1 <openvino_docs_ops_activation_ReLU_1>` as the versioned name.
Meanwhile, ``ReLU`` in ``opset2`` refers to the same ``ReLU-1``. Both ``ReLU`` entries are the same operation with a single :doc:`description <openvino_docs_ops_activation_ReLU_1>`, which means that ``opset1`` and ``opset2`` share the same ``ReLU`` operation.
To differentiate versions of the same operation type, such as ``ReLU``, the ``-N`` suffix is used in the versioned name of the operation.
The ``N`` suffix usually refers to the first occurrence of ``opsetN`` where this version of the operation is introduced.
There is no guarantee that new operations will be named according to that rule. The naming convention might be changed, but not for old operations, which are frozen completely.
@endsphinxdirective


@@ -2,18 +2,16 @@
@sphinxdirective
Model Optimizer can convert all floating-point weights to the ``FP16`` data type.
It results in creating a "compressed ``FP16`` model", which occupies about half of
the original space in the file system. The compression may introduce a drop in accuracy,
but it is negligible for most models.
To compress the model, use the ``--compress_to_fp16`` or ``--compress_to_fp16=True`` option:
.. code-block:: sh
mo --input_model INPUT_MODEL --compress_to_fp16
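As a rough, framework-independent illustration of the trade-off (this sketch only uses the IEEE half-precision format from the Python standard library, not Model Optimizer itself): each ``FP16`` value takes two bytes instead of four, and a small amount of precision is lost in the conversion.

```python
import struct

# Storage per value: half precision is exactly half the size of single precision.
assert struct.calcsize('e') == 2   # FP16
assert struct.calcsize('f') == 4   # FP32

# Round-tripping through FP16 loses precision, which is the source of the
# (usually negligible) accuracy drop.
x = 0.1
x_fp16 = struct.unpack('e', struct.pack('e', x))[0]
print(abs(x - x_fp16))  # small, but non-zero
```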
For details on how plugins handle compressed ``FP16`` models, see
@@ -26,4 +24,11 @@ For details on how plugins handle compressed ``FP16`` models, see
information about that.
.. note::
Some large models (larger than a few GB), when compressed to ``FP16``, may consume an enormous amount of RAM during the loading
phase of inference. If you are facing such problems, try converting them without compression:
``mo --input_model INPUT_MODEL --compress_to_fp16=False``
@endsphinxdirective


@@ -814,6 +814,120 @@ paddlepaddle >= 2.1
========================================== ===============================================================================
TensorFlow Lite Supported Operators
###########################################################
========================================== ===============================================================================
Operator Name in TensorFlow Lite Limitations
========================================== ===============================================================================
ABS
ADD
ADD_N
ARG_MAX
ARG_MIN
AVERAGE_POOL_2D
BATCH_MATMUL
BATCH_TO_SPACE_ND
BROADCAST_ARGS
BROADCAST_TO
CAST
CEIL
COMPLEX_ABS Supported in a specific pattern with RFFT2D
CONCATENATION
CONV_2D
COS
DEPTH_TO_SPACE
DEPTHWISE_CONV_2D
DEQUANTIZE
DIV
ELU
EQUAL
EXP
EXPAND_DIMS
FILL
FLOOR
FLOOR_DIV
FLOOR_MOD
FULLY_CONNECTED
GATHER
GATHER_ND
GREATER
GREATER_EQUAL
HARD_SWISH
L2_NORMALIZATION
LEAKY_RELU
LESS
LESS_EQUAL
LOG
LOG_SOFTMAX
LOGICAL_AND
LOGICAL_NOT
LOGICAL_OR
LOGISTIC
MATRIX_DIAG
MAX_POOL_2D
MAXIMUM
MEAN
MINIMUM
MIRROR_PAD
MUL
NEG
NOT_EQUAL
ONE_HOT
PACK
PAD
PADV2
POW
PRELU
QUANTIZE
RANGE
RANK
REDUCE_ALL
REDUCE_ANY
REDUCE_MAX
REDUCE_MIN
REDUCE_PROD
RELU
RELU6
RESHAPE
RESIZE_BILINEAR
RESIZE_NEAREST_NEIGHBOR
REVERSE_V2
RFFT2D Supported in a specific pattern with COMPLEX_ABS
ROUND
RSQRT
SCATTER_ND
SEGMENT_SUM
SELECT
SELECT_V2
SHAPE
SIGN
SIN
SLICE
SOFTMAX
SPACE_TO_BATCH_ND
SPACE_TO_DEPTH
SPLIT
SPLIT_V
SQRT
SQUARE
SQUARED_DIFFERENCE
SQUEEZE
STRIDED_SLICE
SUB
SUM
TANH
TILE
TOPK_V2
TRANSPOSE
TRANSPOSE_CONV
UNIQUE
UNPACK
WHERE
ZEROS_LIKE
========================================== ===============================================================================
@endsphinxdirective


@@ -0,0 +1,24 @@
# Converting a TensorFlow Lite Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite}
@sphinxdirective
To convert a TensorFlow Lite model, use the ``mo`` script and specify the path to the input ``.tflite`` model file:
.. code-block:: sh
mo --input_model <INPUT_MODEL>.tflite
.. note:: TensorFlow Lite models are supported via the FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API.
Supported TensorFlow Lite Layers
###################################
For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.
Supported TensorFlow Lite Models
###################################
More than eighty percent of public TensorFlow Lite models from the open sources `TensorFlow Hub <https://tfhub.dev/s?deployment-format=lite&subtype=module,placeholder>`__ and `MediaPipe <https://developers.google.com/mediapipe>`__ are supported.
Unsupported models usually have custom TensorFlow Lite operations.
@endsphinxdirective


@@ -1,37 +1,48 @@
# Intermediate Representation Suitable for INT8 Inference {#openvino_docs_MO_DG_prepare_model_convert_model_IR_suitable_for_INT8_inference}
@sphinxdirective
Introduction
############
OpenVINO Runtime CPU and GPU devices can infer models in low precision.
For more details, refer to the :doc:`Model Optimization Guide <openvino_docs_model_optimization_guide>`.
Intermediate Representation should be specifically formed to be suitable for low precision inference.
Such a model is called a Low Precision IR and can be generated in two ways:
* By :doc:`quantizing regular IR with the Post-Training Optimization tool <pot_introduction>`
* Using Model Optimizer for a model pre-trained for Low Precision inference: TensorFlow models (``.pb`` model file with ``FakeQuantize`` operations), quantized TensorFlow Lite models and ONNX quantized models.
TensorFlow and ONNX quantized models can be prepared by `Neural Network Compression Framework <https://github.com/openvinotoolkit/nncf/blob/develop/README.md>`__.
For an operation to be executed in INT8, it must have ``FakeQuantize`` operations as inputs.
For more details, see the :doc:`specification of FakeQuantize operation <openvino_docs_ops_quantization_FakeQuantize_1>`.
To execute the ``Convolution`` operation in INT8 on CPU, both data and weight inputs should have ``FakeQuantize`` as an input operation:
.. image:: _static/images/expanded_int8_Convolution_weights.png
Low precision IR is also suitable for FP32 and FP16 inference if a chosen plugin supports all operations of the IR. The only difference between a Low Precision IR and FP16 or FP32 IR is the existence of ``FakeQuantize`` in the Low Precision IR.
Plugins that support Low Precision Inference recognize these sub-graphs and quantize them during inference.
The ones that do not, execute all operations, including ``FakeQuantize``, as is in the FP32 or FP16 precision.
Consequently, when ``FakeQuantize`` operations are present in an OpenVINO IR, it suggests to the inference device how to quantize particular operations in the model.
If the device is capable, it accepts the suggestion and performs Low Precision Inference. If not, it executes the model in the floating-point precision.
Compressed Low Precision Weights
################################
Weighted operations, such as ``Convolution`` and ``MatMul``, store weights as the floating-point ``Constant`` in the graph followed by the ``FakeQuantize`` operation.
The ``Constant`` followed by the ``FakeQuantize`` operation could be optimized memory-wise due to the ``FakeQuantize`` operation semantics.
The resulting weights sub-graph stores weights in Low Precision ``Constant``, which gets unpacked back to floating point with the ``Convert`` operation.
Weights compression replaces ``FakeQuantize`` with optional ``Subtract`` and ``Multiply`` operations, leaving the output arithmetically the same, while weight storage takes four times less memory.
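The arithmetic equivalence can be sketched in plain Python. The ranges and ``levels`` below are hypothetical, and this is scalar reference arithmetic for illustration, not the actual OpenVINO implementation:

```python
# Hypothetical FakeQuantize parameters for a single weight tensor.
in_low, in_high = -1.0, 1.0      # input clamp range
out_low, out_high = -1.27, 1.27  # output range
levels = 256                     # INT8

def fake_quantize(x):
    """Reference FakeQuantize arithmetic for one value."""
    x = min(max(x, in_low), in_high)
    q = round((x - in_low) / (in_high - in_low) * (levels - 1))
    return q / (levels - 1) * (out_high - out_low) + out_low

# Compressed form: store only the integer code q, and restore the float value
# with Subtract (zero point) and Multiply (scale) when the model is loaded.
scale = (out_high - out_low) / (levels - 1)
zero_point = -out_low / scale

def decompress(q):
    return (q - zero_point) * scale

x = 0.3
q = round((min(max(x, in_low), in_high) - in_low) / (in_high - in_low) * (levels - 1))
print(abs(fake_quantize(x) - decompress(q)))  # ~0: the two forms agree
```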
See the visualization of ``Convolution`` with the compressed weights:
.. image:: _static/images/compressed_int8_Convolution_weights.png
Both Model Optimizer and Post-Training Optimization tool generate a compressed IR by default.
@endsphinxdirective


@@ -9,6 +9,7 @@
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
@@ -18,7 +19,7 @@
**OpenVINO IR (Intermediate Representation)** - the proprietary format of OpenVINO™, benefiting from the full extent of its features.
**ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite** - formats supported directly, which means they can be used with OpenVINO Runtime without any prior conversion. For a guide on how to run inference on ONNX, PaddlePaddle, or TensorFlow, see how to :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
**MXNet, Caffe, Kaldi** - formats supported indirectly, which means they need to be converted to OpenVINO IR before running inference. The conversion is done with Model Optimizer and in some cases may involve intermediate steps.
@@ -27,6 +28,7 @@ Refer to the following articles for details on conversion for different formats
* :doc:`How to convert ONNX <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
* :doc:`How to convert PaddlePaddle <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`
* :doc:`How to convert TensorFlow <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
* :doc:`How to convert TensorFlow Lite <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>`
* :doc:`How to convert MXNet <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>`
* :doc:`How to convert Caffe <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>`
* :doc:`How to convert Kaldi <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>`


@@ -219,7 +219,7 @@ To generate ``vocab.bpe.32000``, execute the ``nmt/scripts/wmt16_en_de.sh`` scri
--output_dir /path/to/output/IR/
Input and output cutting with the ``--input`` and ``--output`` options is required since OpenVINO does not support ``IteratorGetNext`` and ``LookupTableFindV2`` operations.
Input cutting:


@@ -41,7 +41,7 @@ The Wide and Deep model is no longer in the master branch of the repository but
**Step 2**. Train the model
As the OpenVINO toolkit does not support the categorical with hash and crossed features, such feature types must be switched off in the model
by changing the ``build_model_columns()`` function in ``census_dataset.py`` as follows:
.. code-block:: python
@@ -146,7 +146,7 @@ Use the following command line to convert the saved model file with the checkpoi
--output head/predictions/probabilities
The model contains operations unsupported by the OpenVINO toolkit such as ``IteratorGetNext`` and ``LookupTableFindV2``, so the Model Optimizer must prune these nodes.
The pruning is specified through the ``--input`` option. The prunings for ``IteratorGetNext:*`` nodes correspond to numeric features.
The pruning for each categorical feature consists of three prunings for the following nodes: ``*/to_sparse_input/indices:0``, ``*/hash_table_Lookup/LookupTableFindV2:0``, and ``*/to_sparse_input/dense_shape:0``.


@@ -13,7 +13,7 @@
This article describes Model Optimizer internals. Altering them may result in application instability, and in case of future changes to the API, lack of backward compatibility.
> **NOTE**: If you want to add support for ONNX, TensorFlow Lite, PaddlePaddle or TensorFlow operations, or you are not familiar with other extension alternatives in OpenVINO, read [this guide](../../../Extensibility_UG/Intro.md) instead.
<a name="model-optimizer-extensibility"></a>Model Optimizer extensibility mechanism enables support of new operations and custom transformations to generate the optimized intermediate representation (IR) as described in the
[Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../../IR_and_opsets.md). This
@@ -238,7 +238,7 @@ Methods `in_port()` and `output_port()` of the `Node` class are used to get and
how to use them, refer to the [Graph Traversal and Modification Using Ports and Connections](@ref graph-ports-and-conneсtions) section.
> **NOTE**: A shape inference function should perform output shape calculation in the original model layout. For
> example, OpenVINO supports Convolution operations in NCHW layout only but TensorFlow supports NHWC layout as
> well. Model Optimizer shape inference function calculates output shapes for NHWC Convolutions in NHWC layout and only
> during the layout change phase the shape is converted to NCHW.
@@ -259,7 +259,7 @@ More information on how to develop middle transformations and dedicated API desc
There are several middle transformations responsible for changing model layout from NHWC to NCHW. These transformations are triggered by default for TensorFlow models as TensorFlow supports Convolution operations in the NHWC layout.
This layout change is disabled automatically if the model does not have operations that OpenVINO needs to execute in the NCHW layout, for example, Convolutions in NHWC layout.
For more details on how it works, refer to the source code of the transformations mentioned in the below summary of the process:


@@ -1,91 +1,110 @@
# Model Caching Overview {#openvino_docs_OV_UG_Model_caching_overview}
@sphinxdirective
As described in the :doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`, a common application flow consists of the following steps:
1. **Create a Core object**: First step to manage available devices and read model objects
2. **Read the Intermediate Representation**: Read an Intermediate Representation file into an object of the `ov::Model <classov_1_1Model.html#doxid-classov-1-1-model>`__
3. **Prepare inputs and outputs**: If needed, manipulate precision, memory layout, size or color format
4. **Set configuration**: Pass device-specific loading configurations to the device
5. **Compile and Load Network to device**: Use the `ov::Core::compile_model() <classov_1_1Core.html#doxid-classov-1-1-core-1a46555f0803e8c29524626be08e7f5c5a>`__ method with a specific device
6. **Set input data**: Specify input tensor
7. **Execute**: Carry out inference and process results

Step 5 can potentially perform several time-consuming device-specific optimizations and network compilations.
To reduce the resulting delays at application startup, you can use Model Caching. It exports the compiled model
automatically and reuses it to significantly reduce the model compilation time.

.. important::

   The :doc:`Compile Tool <openvino_inference_engine_tools_compile_tool_README>` may serve the same purpose
   for C++ applications, but is considered a legacy solution and you should use Model Caching instead.

   Not all devices support the network import/export feature. They will perform normally but will not
   enable the compilation stage speed-up.
Set "cache_dir" config option to enable model caching
+++++++++++++++++++++++++++++++++++++++++++++++++++++
To enable model caching, the application must specify a folder to store the cached blobs:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_caching.cpp
:language: cpp
:fragment: [ov:caching:part0]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_caching.py
:language: py
:fragment: [ov:caching:part0]
With this code, if the device specified by ``device_name`` supports import/export model capability,
a cached blob is automatically created inside the ``/path/to/cache/dir`` folder.
If the device does not support the import/export capability, cache is not created and no error is thrown.
Note that the first ``compile_model`` operation takes slightly longer, as the cache needs to be created -
the compiled blob is saved into a cache file:
.. image:: _static/images/caching_enabled.svg
Make it even faster: use compile_model(modelPath)
+++++++++++++++++++++++++++++++++++++++++++++++++++
In some cases, applications do not need to customize inputs and outputs every time. Such applications always
call ``model = core.read_model(...)``, then ``core.compile_model(model, ..)``, which can be further optimized.
For these cases, there is a more convenient API to compile the model in a single call, skipping the read step:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_caching.cpp
:language: cpp
:fragment: [ov:caching:part1]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_caching.py
:language: py
:fragment: [ov:caching:part1]
With model caching enabled, the total load time is even shorter, if ``read_model`` is optimized as well.
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_caching.cpp
:language: cpp
:fragment: [ov:caching:part2]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_caching.py
:language: py
:fragment: [ov:caching:part2]
@@ -94,25 +113,30 @@ With model caching enabled, total load time is even smaller, if ``read_model`` i
Advanced Examples
++++++++++++++++++++
Not every device supports the network import/export capability. For those that don't, enabling caching has no effect.
To check in advance if a particular device supports model caching, your application can use the following code:
.. tab-set::
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/ov_caching.cpp
:language: cpp
:fragment: [ov:caching:part3]
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/ov_caching.py
:language: py
:fragment: [ov:caching:part3]
.. note::
For GPU, model caching is currently supported fully for static models only. For dynamic models,
kernel caching is used and multiple .cl_cache files are generated along with the .blob file.
See the :doc:`GPU plugin documentation <openvino_docs_OV_UG_supported_plugins_GPU>`.
@endsphinxdirective


@@ -1,4 +1,4 @@
# Operation Specifications {#openvino_docs_operations_specifications}
@sphinxdirective
@@ -43,7 +43,7 @@
DeformablePSROIPooling-1 <openvino_docs_ops_detection_DeformablePSROIPooling_1>
DepthToSpace-1 <openvino_docs_ops_movement_DepthToSpace_1>
DetectionOutput-1 <openvino_docs_ops_detection_DetectionOutput_1>
DetectionOutput-8 <openvino_docs_ops_detection_DetectionOutput_8>
DFT-7 <openvino_docs_ops_signals_DFT_7>
Divide-1 <openvino_docs_ops_arithmetic_Divide_1>
Einsum-7 <openvino_docs_ops_matrix_Einsum_7>


@@ -1,5 +1,7 @@
# OpenVINO™ Python API Exclusives {#openvino_docs_OV_UG_Python_API_exclusives}
@sphinxdirective
OpenVINO™ Runtime Python API offers additional features and helpers to enhance user experience. The main goal of the Python API is to provide a user-friendly, simple yet powerful tool for Python users.
Easier Model Compilation
@@ -9,7 +11,7 @@ Easier Model Compilation
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: python
:fragment: [auto_compilation]
@@ -20,7 +22,7 @@ Besides functions aligned to C++ API, some of them have their Python counterpart
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: python
:fragment: [properties_example]
@@ -33,7 +35,7 @@ Python API allows passing data as tensors. The ``Tensor`` object holds a copy of
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: python
:fragment: [tensor_basics]
@@ -44,7 +46,7 @@ Shared Memory Mode
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: python
:fragment: [tensor_shared_mode]
@@ -57,7 +59,7 @@ All infer methods allow users to pass data as popular *numpy* arrays, gathered i
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: python
:fragment: [passing_numpy_array]
@@ -65,7 +67,7 @@ Results from inference can be obtained in various ways:
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: python
:fragment: [getting_results]
@@ -76,10 +78,34 @@ Python API provides different synchronous calls to infer model, which block the
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: python
:fragment: [sync_infer]
Inference Results - OVDict
++++++++++++++++++++++++++
Synchronous calls return a special data structure called ``OVDict``. It can be compared to a "frozen dictionary". There are various ways of accessing the object's elements:
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: python
:fragment: [ov_dict]
.. note::
It is possible to convert ``OVDict`` to a native dictionary using the ``to_dict()`` method.
.. warning::
Using ``to_dict()`` results in losing the ability to access results by string and integer keys. Additionally,
it performs a shallow copy, so any modifications may affect the original
object as well.
AsyncInferQueue
++++++++++++++++++++
@@ -91,7 +117,7 @@ The ``start_async`` function call is not required to be synchronized - it waits
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: cpp
:language: python
:fragment: [asyncinferqueue]
@@ -102,7 +128,7 @@ After the call to ``wait_all``, jobs and their data can be safely accessed. Acqu
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: cpp
:language: python
:fragment: [asyncinferqueue_access]
@@ -115,7 +141,7 @@ The callback of ``AsyncInferQueue`` is uniform for every job. When executed, GIL
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: cpp
:language: python
:fragment: [asyncinferqueue_set_callback]
@@ -127,7 +153,7 @@ To create an input tensor with such element types, you may need to pack your dat
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: cpp
:language: python
:fragment: [packing_data]
@@ -135,7 +161,7 @@ To extract low precision values from a tensor into the *numpy* array, you can us
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: cpp
:language: python
:fragment: [unpacking]
@@ -146,7 +172,7 @@ Some functions in Python API release the Global Interpreter Lock (GIL) while run
.. doxygensnippet:: docs/snippets/ov_python_exclusives.py
:language: cpp
:language: python
:fragment: [releasing_gil]
@@ -178,3 +204,5 @@ List of Functions that Release the GIL
* openvino.runtime.InferRequest.query_state
* openvino.runtime.Model.reshape
* openvino.preprocess.PrePostProcessor.build
@endsphinxdirective

View File

@@ -0,0 +1,89 @@
# OpenVINO™ Runtime Python API Advanced Inference {#openvino_docs_OV_UG_Python_API_inference}
@sphinxdirective
.. warning::
All of the methods mentioned here depend heavily on the specific hardware and software set-up.
Consider conducting your own experiments with various models and different input/output
sizes. The methods presented are not universal; they may or may not apply to your
specific pipeline. Consider all trade-offs and avoid premature optimizations.
Direct Inference with ``CompiledModel``
#######################################
The ``CompiledModel`` class provides the ``__call__`` method that runs a single synchronous inference using the given model. Besides keeping the code compact, all future calls to ``CompiledModel.__call__`` will result in less overhead, as the object reuses the already created ``InferRequest``.
.. doxygensnippet:: docs/snippets/ov_python_inference.py
:language: python
:fragment: [direct_inference]
Shared Memory on Inputs
#######################
While using ``CompiledModel``, ``InferRequest`` and ``AsyncInferQueue``,
OpenVINO™ Runtime Python API provides an additional mode - "Shared Memory".
Specify the ``shared_memory`` flag to enable or disable this feature.
The "Shared Memory" mode may be beneficial when inputs are large and copying
data is considered an expensive operation. This feature creates shared ``Tensor``
instances with the "zero-copy" approach, reducing the overhead of setting inputs
to a minimum. Example usage:
.. doxygensnippet:: docs/snippets/ov_python_inference.py
:language: python
:fragment: [shared_memory_inference]
.. note::
"Shared Memory" is enabled by default in ``CompiledModel.__call__``.
For other methods, like ``InferRequest.infer`` or ``InferRequest.start_async``,
it is required to set the flag to ``True`` manually.
.. warning::
When data is shared, any modifications may affect the inputs of the inference!
Use this feature with caution, especially in multi-threaded/parallel code,
where data can be modified outside of the function's control flow.
Hiding Latency with Asynchronous Calls
######################################
Asynchronous calls make it possible to hide latency and optimize the overall runtime of a codebase.
For example, ``InferRequest.start_async`` releases the GIL and provides a non-blocking call.
It is beneficial to process other work while waiting for a compute-intensive inference to finish.
Example usage:
.. doxygensnippet:: docs/snippets/ov_python_inference.py
:language: python
:fragment: [hiding_latency]
.. note::
It is up to the user/developer to optimize the flow in a codebase to benefit from potential parallelization.
"Postponed Return" with Asynchronous Calls
##########################################
"Postponed Return" is a practice of avoiding the overhead of ``OVDict``, which is always returned by
synchronous calls. "Postponed Return" can be applied when:
* only a part of output data is required. For example, only one specific output is significant
in a given pipeline step and all outputs are large and thus expensive to copy.
* data is not required "now". For example, it can be later extracted inside the pipeline as
a part of latency hiding.
* data return is not required at all. For example, models are being chained with the pure ``Tensor`` interface.
.. doxygensnippet:: docs/snippets/ov_python_inference.py
:language: python
:fragment: [no_return_inference]
@endsphinxdirective

View File

@@ -112,6 +112,7 @@ OpenVINO Runtime uses frontend libraries dynamically to read models in different
- ``openvino_ir_frontend`` is used to read OpenVINO IR.
- ``openvino_tensorflow_frontend`` is used to read TensorFlow file format.
- ``openvino_tensorflow_lite_frontend`` is used to read TensorFlow Lite file format.
- ``openvino_onnx_frontend`` is used to read ONNX file format.
- ``openvino_paddle_frontend`` is used to read Paddle file format.
@@ -119,7 +120,7 @@ Depending on the model format types that are used in the application in `ov::Cor
.. note::
To optimize the size of final distribution package, you are recommended to convert models to OpenVINO IR by using :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`. This way you don't have to keep TensorFlow, ONNX, PaddlePaddle, and other frontend libraries in the distribution package.
To optimize the size of final distribution package, you are recommended to convert models to OpenVINO IR by using :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`. This way you don't have to keep TensorFlow, TensorFlow Lite, ONNX, PaddlePaddle, and other frontend libraries in the distribution package.
(Legacy) Preprocessing via G-API
++++++++++++++++++++++++++++++++

View File

@@ -8,6 +8,7 @@
openvino_docs_OV_UG_Model_Representation
openvino_docs_OV_UG_Infer_request
openvino_docs_OV_UG_Python_API_inference
openvino_docs_OV_UG_Python_API_exclusives
openvino_docs_MO_DG_TensorFlow_Frontend

View File

@@ -5,7 +5,7 @@
To run inference on multiple devices, you can choose either of the following ways:
- Use the :ref:`CUMULATIVE_THROUGHPUT option <cumulative throughput>` of the Automatic Device Selection mode. This way, you can use all available devices in the system without the need to specify them.
- Use the Multi-Device execution mode. This page will explain how it works and how to use it.
- Use the Multi-Device execution mode. It shares the same behaviors as the :ref:`CUMULATIVE_THROUGHPUT option <cumulative throughput>` of the Automatic Device Selection mode. The difference is that it needs the <device list> or ``ov::device::priorities`` to be set explicitly.
How MULTI Works
####################
@@ -39,7 +39,7 @@ Following the OpenVINO™ naming convention, the Multi-Device mode is assigned t
+----------------------------+---------------------------------+------------------------------------------------------------+
Specifying the device list explicitly is required by MULTI, as it defines the devices available for inference and sets their priorities. Importantly, the list may also specify the number of requests for MULTI to keep for each device, as described below.
Specifying the device list explicitly is required by MULTI, as it defines the devices available for inference and sets their priorities.
Note that OpenVINO™ Runtime enables you to use “GPU” as an alias for “GPU.0” in function calls. More details on enumerating devices can be found in :doc:`Working with devices <openvino_docs_OV_UG_Working_with_devices>`.
@@ -59,24 +59,6 @@ The following commands are accepted by the API:
:fragment: [MULTI_0]
Notice that MULTI allows you to **change device priorities on the fly**. You can alter the order, exclude a device, and bring an excluded device back. Still, it does not allow adding new devices.
.. tab:: C++
.. doxygensnippet:: docs/snippets/MULTI1.cpp
:language: cpp
:fragment: [part1]
.. tab:: Python
.. doxygensnippet:: docs/snippets/ov_multi.py
:language: python
:fragment: [MULTI_1]
One more thing you can define is the **number of requests to allocate for each device**. You can do it simply by adding the number to each device in parentheses, like this: ``"MULTI:CPU(2),GPU(2)"``. However, this method is not recommended as it is not performance-portable. The suggested approach is to configure individual devices and query the resulting number of requests to be used at the application level, as described in `Configuring Individual Devices and Creating MULTI On Top <#configuring-individual-devices-and-creating-the-multi-device-on-top>`__.
To check what devices are present in the system, you can use the Device API. For information on how to do it, check :doc:`Query device properties and configuration <openvino_docs_OV_UG_query_api>`.
@@ -155,11 +137,4 @@ Additional Resources
- :doc:`Automatic Device Selection <openvino_docs_OV_UG_supported_plugins_AUTO>`
.. raw:: html
<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen width="560" height="315" src="https://www.youtube.com/embed/xbORYFEmrqU" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
.. note:: This video is currently available only for C++, but many of the same concepts apply to Python.
@endsphinxdirective

View File

@@ -16,7 +16,7 @@
openvino_docs_OV_UG_model_state_intro
OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, ONNX, or PaddlePaddle model and execute it on preferred devices.
OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, TensorFlow Lite, ONNX, or PaddlePaddle model and execute it on preferred devices.
OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain complete implementation for inference on a particular Intel® hardware device: CPU, GPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or API interoperability between OpenVINO Runtime and underlying plugin backend.

View File

@@ -22,7 +22,7 @@ When some preprocessing steps cannot be integrated into the execution graph usin
Model Optimizer command-line options (for example, ``YUV``->``RGB`` color space conversion,
``Resize``, etc.), it is possible to write a simple code which:
* Reads the original model (OpenVINO IR, TensorFlow, ONNX, PaddlePaddle).
* Reads the original model (OpenVINO IR, TensorFlow, TensorFlow Lite, ONNX, PaddlePaddle).
* Adds the preprocessing/postprocessing steps.
* Saves resulting model as IR (``.xml`` and ``.bin``).

View File

@@ -11,7 +11,7 @@ This guide presents how to use OpenVINO securely with protected models.
Secure Model Deployment
#######################
After a model is optimized by the OpenVINO Model Optimizer, it's deployednto target devices in the OpenVINO Intermediate Representation (OpenVINO IR) format. An optimized model is stored on edge device and is executed by the OpenVINO Runtime. TensorFlow, ONNX and PaddlePaddle models can be read natively by OpenVINO Runtime as well.
After a model is optimized by the OpenVINO Model Optimizer, it's deployed to target devices in the OpenVINO Intermediate Representation (OpenVINO IR) format. An optimized model is stored on the edge device and is executed by the OpenVINO Runtime. TensorFlow, TensorFlow Lite, ONNX and PaddlePaddle models can be read natively by OpenVINO Runtime as well.
Encrypting and optimizing the model before deploying it to the edge device can be used to protect deep-learning models. The edge device should keep the stored model protected all the time and have the model decrypted **in runtime only** for use by the OpenVINO Runtime.

View File

@@ -292,24 +292,27 @@ For more details, see the :doc:`preprocessing API<openvino_docs_OV_UG_Preprocess
Model Caching
+++++++++++++++++++++++++++++++++++++++
Cache for the GPU plugin may be enabled via the common OpenVINO ``ov::cache_dir`` property. GPU plugin implementation supports only caching of compiled kernels, so all plugin-specific model transformations are executed on each ``ov::Core::compile_model()`` call regardless of the ``cache_dir`` option.
Still, since kernel compilation is a bottleneck in the model loading process, a significant load time reduction can be achieved with the ``ov::cache_dir`` property enabled.
Model Caching helps reduce application startup delays by exporting and reusing
the compiled model automatically. The cache for the GPU plugin may be enabled
via the common OpenVINO ``ov::cache_dir`` property.
.. note::
This means that all plugin-specific model transformations are executed on each ``ov::Core::compile_model()``
call, regardless of the ``ov::cache_dir`` option. Still, since kernel compilation is a bottleneck in the model
loading process, a significant load time reduction can be achieved.
Currently, the GPU plugin implementation fully supports static models only. For dynamic models,
kernel caching is used instead, and multiple ``.cl_cache`` files are generated along with the ``.blob`` file.
Full model caching support is currently implemented as a preview feature. To activate it, set the ``OV_GPU_CACHE_MODEL`` environment variable to ``1``.
For more details, see the :doc:`Model caching overview<openvino_docs_OV_UG_Model_caching_overview>`.
For more details, see the :doc:`Model caching overview <openvino_docs_OV_UG_Model_caching_overview>`.
Extensibility
+++++++++++++++++++++++++++++++++++++++
For information on this subject, see the :doc:`GPU Extensibility<openvino_docs_Extensibility_UG_GPU>`.
For information on this subject, see the :doc:`GPU Extensibility <openvino_docs_Extensibility_UG_GPU>`.
GPU Context and Memory Sharing via RemoteTensor API
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For information on this subject, see the :doc:`RemoteTensor API of GPU Plugin<openvino_docs_OV_UG_supported_plugins_GPU_RemoteTensor_API>`.
For information on this subject, see the :doc:`RemoteTensor API of GPU Plugin <openvino_docs_OV_UG_supported_plugins_GPU_RemoteTensor_API>`.
Supported Properties
#######################################
@@ -373,18 +376,18 @@ GPU Performance Checklist: Summary
Since OpenVINO relies on the OpenCL kernels for the GPU implementation, many general OpenCL tips apply:
- Prefer ``FP16`` inference precision over ``FP32``, as Model Optimizer can generate both variants, and the ``FP32`` is the default. To learn about optimization options, see :doc:`Optimization Guide<openvino_docs_model_optimization_guide>`.
- Try to group individual infer jobs by using :doc:`automatic batching<openvino_docs_OV_UG_Automatic_Batching>`.
- Consider :doc:`caching<openvino_docs_OV_UG_Model_caching_overview>` to minimize model load time.
- If your application performs inference on the CPU alongside the GPU, or otherwise loads the host heavily, make sure that the OpenCL driver threads do not starve. :doc:`CPU configuration options<openvino_docs_OV_UG_supported_plugins_CPU>` can be used to limit the number of inference threads for the CPU plugin.
- Even in the GPU-only scenario, a GPU driver might occupy a CPU core with spin-loop polling for completion. If CPU load is a concern, consider the dedicated ``queue_throttle`` property mentioned previously. Note that this option may increase inference latency, so consider combining it with multiple GPU streams or :doc:`throughput performance hints<openvino_docs_OV_UG_Performance_Hints>`.
- When operating media inputs, consider :doc:`remote tensors API of the GPU Plugin<openvino_docs_OV_UG_supported_plugins_GPU_RemoteTensor_API>`.
- Try to group individual infer jobs by using :doc:`automatic batching <openvino_docs_OV_UG_Automatic_Batching>`.
- Consider :doc:`caching <openvino_docs_OV_UG_Model_caching_overview>` to minimize model load time.
- If your application performs inference on the CPU alongside the GPU, or otherwise loads the host heavily, make sure that the OpenCL driver threads do not starve. :doc:`CPU configuration options <openvino_docs_OV_UG_supported_plugins_CPU>` can be used to limit the number of inference threads for the CPU plugin.
- Even in the GPU-only scenario, a GPU driver might occupy a CPU core with spin-loop polling for completion. If CPU load is a concern, consider the dedicated ``queue_throttle`` property mentioned previously. Note that this option may increase inference latency, so consider combining it with multiple GPU streams or :doc:`throughput performance hints <openvino_docs_OV_UG_Performance_Hints>`.
- When operating media inputs, consider :doc:`remote tensors API of the GPU Plugin <openvino_docs_OV_UG_supported_plugins_GPU_RemoteTensor_API>`.
Additional Resources
#######################################
* :doc:`Supported Devices<openvino_docs_OV_UG_supported_plugins_Supported_Devices>`.
* :doc:`Optimization guide<openvino_docs_deployment_optimization_guide_dldt_optimization_guide>`.
* :doc:`Supported Devices <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`.
* :doc:`Optimization guide <openvino_docs_deployment_optimization_guide_dldt_optimization_guide>`.
* `GPU plugin developers documentation <https://github.com/openvinotoolkit/openvino/blob/master/src/plugins/intel_gpu/README.md>`__

View File

@@ -17,21 +17,21 @@ The OpenVINO Runtime provides unique capabilities to infer deep learning models
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
| OpenVINO Device | Supported Hardware |
+==========================================================================+===============================================================================================================+
|| :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>` | Intel&reg; Processor Graphics, including Intel&reg; HD Graphics and Intel&reg; Iris&reg; Graphics |
|| :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>` | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>` | Intel&reg; Xeon&reg; with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector |
|| | Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel&reg; Core&trade; Processors with Intel&reg; |
|| | AVX2, Intel&reg; Atom&reg; Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
|| :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>` | Intel® Xeon® with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector |
|| | Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel® Core™ Processors with Intel® |
|| | AVX2, Intel® Atom® Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`GNA plugin <openvino_docs_OV_UG_supported_plugins_GNA>` | Intel&reg; Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel&reg; |
|| (available in the Intel® Distribution of OpenVINO™ toolkit) | Pentium&reg; Silver J5005 Processor, Intel&reg; Pentium&reg; Silver N5000 Processor, Intel&reg; |
|| | Celeron&reg; J4005 Processor, Intel&reg; Celeron&reg; J4105 Processor, Intel&reg; Celeron&reg; |
|| | Processor N4100, Intel&reg; Celeron&reg; Processor N4000, Intel&reg; Core&trade; i3-8121U Processor, |
|| | Intel&reg; Core&trade; i7-1065G7 Processor, Intel&reg; Core&trade; i7-1060G7 Processor, Intel&reg; |
|| | Core&trade; i5-1035G4 Processor, Intel&reg; Core&trade; i5-1035G7 Processor, Intel&reg; Core&trade; |
|| | i5-1035G1 Processor, Intel&reg; Core&trade; i5-1030G7 Processor, Intel&reg; Core&trade; i5-1030G4 Processor, |
|| | Intel&reg; Core&trade; i3-1005G1 Processor, Intel&reg; Core&trade; i3-1000G1 Processor, |
|| | Intel&reg; Core&trade; i3-1000G4 Processor |
|| :doc:`GNA plugin <openvino_docs_OV_UG_supported_plugins_GNA>` | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® |
|| (available in the Intel® Distribution of OpenVINO™ toolkit) | Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® |
|| | Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® |
|| | Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor, |
|| | Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® |
|| | Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ |
|| | i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, |
|| | Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, |
|| | Intel® Core™ i3-1000G4 Processor |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`Arm® CPU <openvino_docs_OV_UG_supported_plugins_ARM_CPU>` | Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices |
|| (unavailable in the Intel® Distribution of OpenVINO™ toolkit) | |

View File

@@ -1,421 +1,511 @@
Network model,Release,IE-Type,Platform name,Throughput-INT8,Throughput-FP16,Throughput-FP32,Value,Efficiency,Price,TDP,Sockets,Price/socket,TDP/socket,Latency
begin_rec,,,,,,,,,,,,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,96.06,,35.627,0.146,0.582,$658 ,165,1,$658 ,165,17.1432
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,53.093,,22.253,0.081,0.322,$658 ,165,1,$658 ,165,22.0002
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,108.306,,44.797,0.165,0.656,$658 ,165,1,$658 ,165,
bert-base-cased ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,2.763,,1.332,0.081,0.291,$34 ,9.5,1,$34 ,9.5,350.2746
bert-base-cased ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,5.694,,2.002,0.085,0.475,$67 ,12,1,$67 ,12,183.1711
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,16.11,,10.009,0.24,1.343,$67 ,12,1,$67 ,12,79.7607
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,21.128,,11.81,0.315,1.761,$67 ,12,1,$67 ,12,
bert-base-cased ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,14.212,,4.255,0.12,0.947,$118 ,15,1,$118 ,15,72.6516
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,18.983,,8.696,0.161,1.266,$118 ,15,1,$118 ,15,62.9729
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,30.866,,12.087,0.262,2.058,$118 ,15,1,$118 ,15,
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,645.77,,213.877,0.138,1.745,"$4,678 ",370,2,"$2,339 ",185,6.7612
bert-base-cased ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,23.721,,14.457,0.203,0.365,$117 ,65,1,$117 ,65,44.0371
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,30.141,,16.38,0.141,0.861,$214 ,35,1,$214 ,35,46.6064
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,30.541,,19.319,0.159,0.47,$192 ,65,1,$192 ,65,27.8871
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,41.504,,22.75,0.137,1.186,$303 ,35,1,$303 ,35,27.974
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,32.073,,16.558,0.066,0.916,$488 ,35,1,$488 ,35,39.7617
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,69.053,,40.243,0.116,0.552,$594 ,125,1,$594 ,125,18.309
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,23.402,,14.614,0.094,0.33,$249 ,71,1,$249 ,71,44.8984
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,266.949,,79.033,0.085,1.271,"$3,144 ",210,2,"$1,572 ",105,12.4065
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,2090.76,,326.55,0.292,4.646,"$7,166 ",450,2,"$3,583 ",225,4.61
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,682.593,,225.713,0.04,1.665,"$16,954 ",410,2,"$8,477 ",205,6.9035
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,256.994,,75.502,0.128,1.028,"$2,004 ",250,2,"$1,002 ",125,13.0382
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,64.632,,18.394,0.138,2.308,$469 ,28,1,$469 ,28,17.638
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,95.656,,44.056,0.204,3.416,$469 ,28,1,$469 ,28,14.1005
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,128.005,,50.592,0.273,4.572,$469 ,28,1,$469 ,28,
bert-base-cased ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,906.3,348.52,,0.471,6.042,"$1,925 ",150,1,"$1,925 ",150,7.381
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,7.714,,3.093,0.012,0.047,$658 ,165,1,$658 ,165,155.3633
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,5.617,,1.978,0.009,0.034,$658 ,165,1,$658 ,165,181.8303
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,10.602,,3.753,0.016,0.064,$658 ,165,1,$658 ,165,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.272,,0.125,0.008,0.029,$34 ,9.5,1,$34 ,9.5,3861.0657
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.488,,0.188,0.007,0.041,$67 ,12,1,$67 ,12,2090.8266
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,1.395,,0.774,0.021,0.116,$67 ,12,1,$67 ,12,727.6781
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,1.827,,0.845,0.027,0.152,$67 ,12,1,$67 ,12,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,1.199,,0.377,0.01,0.08,$118 ,15,1,$118 ,15,831.301
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,2.051,,0.766,0.017,0.137,$118 ,15,1,$118 ,15,494.5363
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,3.118,,1.127,0.026,0.208,$118 ,15,1,$118 ,15,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,43.436,,17.379,0.009,0.117,"$4,678 ",370,2,"$2,339 ",185,52.2862
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,2.067,,1.278,0.018,0.032,$117 ,65,1,$117 ,65,495.0786
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,2.619,,1.536,0.012,0.075,$214 ,35,1,$214 ,35,502.3687
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,2.72,,1.679,0.014,0.042,$192 ,65,1,$192 ,65,320.168
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,3.625,,2.11,0.012,0.104,$303 ,35,1,$303 ,35,309.2848
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,2.906,,1.693,0.006,0.083,$488 ,35,1,$488 ,35,386.4947
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,4.801,,2.729,0.008,0.038,$594 ,125,1,$594 ,125,200.0794
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,2.098,,1.32,0.008,0.03,$249 ,71,1,$249 ,71,492.0938
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,21.062,,7.021,0.007,0.1,"$3,144 ",210,2,"$1,572 ",105,101.4694
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,651.95,,91.18,0.091,1.449,"$7,166 ",450,2,"$3,583 ",225,12.87
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,46.064,,19.051,0.003,0.112,"$16,954 ",410,2,"$8,477 ",205,49.4869
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,20.014,,6.726,0.01,0.08,"$2,004 ",250,2,"$1,002 ",125,105.9423
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,5.192,,1.626,0.011,0.185,$469 ,28,1,$469 ,28,203.6311
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,10.476,,3.914,0.022,0.374,$469 ,28,1,$469 ,28,95.6598
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,11.75,,4.168,0.025,0.42,$469 ,28,1,$469 ,28,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,74.47,25.77,,0.039,0.496,"$1,925 ",150,1,"$1,925 ",150,19.768
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,99.078,,36.552,0.151,0.6,$658 ,165,1,$658 ,165,11.269
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,57.707,,13.789,0.088,0.35,$658 ,165,1,$658 ,165,16.263
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,115.59,,39.82,0.176,0.701,$658 ,165,1,$658 ,165,
deeplabv3,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,3.327,,1.496,0.098,0.35,$34 ,9.5,1,$34 ,9.5,308.0916
deeplabv3,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,6.07,,3.041,0.091,0.506,$67 ,12,1,$67 ,12,166.5404
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,,,5.196,0,0,$67 ,12,1,$67 ,12,217.0439
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,9.877,,7.145,0.147,0.823,$67 ,12,1,$67 ,12,
deeplabv3,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,13.516,,4.681,0.115,0.901,$118 ,15,1,$118 ,15,74.1061
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,22.35,,4.635,0.189,1.49,$118 ,15,1,$118 ,15,42.9657
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,34.576,,8.955,0.293,2.305,$118 ,15,1,$118 ,15,
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,559.145,,134.159,0.12,1.511,"$4,678 ",370,2,"$2,339 ",185,5.356
deeplabv3,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,27.171,,15.947,0.232,0.418,$117 ,65,1,$117 ,65,36.6584
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,34.907,,16.72,0.163,0.997,$214 ,35,1,$214 ,35,38.8986
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,35.07,,20.497,0.183,0.54,$192 ,65,1,$192 ,65,22.1865
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,47.647,,23.747,0.157,1.361,$303 ,35,1,$303 ,35,22.628
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,36.559,,18.235,0.075,1.045,$488 ,35,1,$488 ,35,27.138
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,79.42,,21.03,0.134,0.635,$594 ,125,1,$594 ,125,12.8397
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,26.173,,16.906,0.105,0.369,$249 ,71,1,$249 ,71,37.9245
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,248.049,,81.667,0.079,1.181,"$3,144 ",210,2,"$1,572 ",105,8.9485
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,1139.5,,271.62,0.159,2.532,"$7,166 ",450,2,"$3,583 ",225,2.47
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,632.113,,168.65,0.037,1.542,"$16,954 ",410,2,"$8,477 ",205,4.0073
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,241.703,,78.963,0.121,0.967,"$2,004 ",250,2,"$1,002 ",125,9.356
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,64.13,,18.519,0.137,2.29,$469 ,28,1,$469 ,28,16.6586
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,104.926,,24.592,0.224,3.747,$469 ,28,1,$469 ,28,9.1435
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,121.441,,30.498,0.259,4.337,$469 ,28,1,$469 ,28,
deeplabv3,OV-2022.3-8991,accel,Intel® Flex-170 GPU,882.04,98.95,,0.458,5.88,"$1,925 ",150,1,"$1,925 ",150,2.674
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,457.193,,165.166,0.695,2.771,$658 ,165,1,$658 ,165,3.141
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,203.417,,68.438,0.309,1.233,$658 ,165,1,$658 ,165,6.6728
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,575.442,,179.858,0.875,3.488,$658 ,165,1,$658 ,165,
densenet-121,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,13.344,,5.882,0.392,1.405,$34 ,9.5,1,$34 ,9.5,80.7014
densenet-121,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,24.172,,10.554,0.361,2.014,$67 ,12,1,$67 ,12,43.668
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,,,30.615,0,0,$67 ,12,1,$67 ,12,30.0241
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,39.365,,38.926,0.588,3.28,$67 ,12,1,$67 ,12,
densenet-121,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,58.965,,15.713,0.5,3.931,$118 ,15,1,$118 ,15,18.3425
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,86.162,,25.34,0.73,5.744,$118 ,15,1,$118 ,15,20.7907
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,140.891,,39.929,1.194,9.393,$118 ,15,1,$118 ,15,
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,3094.742,,701.32,0.662,8.364,"$4,678 ",370,2,"$2,339 ",185,2.131
densenet-121,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,120.121,,66.624,1.027,1.848,$117 ,65,1,$117 ,65,9.3755
densenet-121,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,142.623,,80.872,0.666,4.075,$214 ,35,1,$214 ,35,10.2536
densenet-121,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,149.112,,85.051,0.777,2.294,$192 ,65,1,$192 ,65,6.0817
densenet-121,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,194.559,,111.988,0.642,5.559,$303 ,35,1,$303 ,35,6.1906
densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,146.463,,69.186,0.3,4.185,$488 ,35,1,$488 ,35,8.6496
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,360.501,,182.543,0.607,2.884,$594 ,125,1,$594 ,125,3.6046
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,114.844,,67.188,0.461,1.618,$249 ,71,1,$249 ,71,9.7609
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,1116.372,,295.952,0.355,5.316,"$3,144 ",210,2,"$1,572 ",105,3.9606
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,8279.14,,1137.41,1.155,18.398,"$7,166 ",450,2,"$3,583 ",225,2.39
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,3155.106,,815.725,0.186,7.695,"$16,954 ",410,2,"$8,477 ",205,2.8831
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,1064.824,,283.423,0.531,4.259,"$2,004 ",250,2,"$1,002 ",125,4.0689
densenet-121,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,265.167,,74.501,0.565,9.47,$469 ,28,1,$469 ,28,4.7413
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,391.185,,123.519,0.834,13.971,$469 ,28,1,$469 ,28,6.5259
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,526.12,,150.35,1.122,18.79,$469 ,28,1,$469 ,28,
densenet-121,OV-2022.3-8991,accel,Intel® Flex-170 GPU,3440.18,1178.68,,1.787,22.935,"$1,925 ",150,1,"$1,925 ",150,3.302
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,112.297,,64.06,0.171,0.681,$658 ,165,1,$658 ,165,11.8265
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,73.766,,38.742,0.112,0.447,$658 ,165,1,$658 ,165,21.403
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,128.735,,76.62,0.196,0.78,$658 ,165,1,$658 ,165,
efficientdet-d0,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,3.812,,2.565,0.112,0.401,$34 ,9.5,1,$34 ,9.5,274.5947
efficientdet-d0,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,7.248,,5.17,0.108,0.604,$67 ,12,1,$67 ,12,143.7999
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,22.697,,15.635,0.339,1.891,$67 ,12,1,$67 ,12,59.0651
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,26.855,,17.296,0.401,2.238,$67 ,12,1,$67 ,12,
efficientdet-d0,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,15.949,,10.417,0.135,1.063,$118 ,15,1,$118 ,15,62.2765
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,25.936,,14.073,0.22,1.729,$118 ,15,1,$118 ,15,54.0166
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,32.767,,20.733,0.278,2.184,$118 ,15,1,$118 ,15,
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,424.388,,256.503,0.091,1.147,"$4,678 ",370,2,"$2,339 ",185,
efficientdet-d0,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,36.666,,24.041,0.313,0.564,$117 ,65,1,$117 ,65,30.2521
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,45.331,,26.92,0.212,1.295,$214 ,35,1,$214 ,35,32.8946
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,44.921,,32.357,0.234,0.691,$192 ,65,1,$192 ,65,19.7048
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,62.749,,37.807,0.207,1.793,$303 ,35,1,$303 ,35,19.901
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,50.35,,29.935,0.103,1.439,$488 ,35,1,$488 ,35,24.2916
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,94.981,,36.434,0.16,0.76,$594 ,125,1,$594 ,125,12.658
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,35.831,,27.306,0.144,0.505,$249 ,71,1,$249 ,71,30.9469
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,239.06,,161.224,0.076,1.138,"$3,144 ",210,2,"$1,572 ",105,13.9735
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,875.53,,560.48,0.122,1.946,"$7,166 ",450,2,"$3,583 ",225,5.07
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,471.02,,300.291,0.028,1.149,"$16,954 ",410,2,"$8,477 ",205,9.3866
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,231.873,,156.285,0.116,0.927,"$2,004 ",250,2,"$1,002 ",125,14.1605
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,71.482,,41.123,0.152,2.553,$469 ,28,1,$469 ,28,16.6952
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,92.52,,50.538,0.197,3.304,$469 ,28,1,$469 ,28,17.295
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,107.688,,56.901,0.23,3.846,$469 ,28,1,$469 ,28,
efficientdet-d0,OV-2022.3-8991,accel,Intel® Flex-170 GPU,463.67,295.13,,0.241,3.091,"$1,925 ",150,1,"$1,925 ",150,5.603
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,12.921,,4.016,0.02,0.078,$658 ,165,1,$658 ,165,89.8929
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,6.802,,1.82,0.01,0.041,$658 ,165,1,$658 ,165,149.7396
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,15.679,,4.499,0.024,0.095,$658 ,165,1,$658 ,165,
faster_rcnn_resnet50_coco,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.32,,0.131,0.009,0.034,$34 ,9.5,1,$34 ,9.5,3206.1652
faster_rcnn_resnet50_coco,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.592,,0.242,0.009,0.049,$67 ,12,1,$67 ,12,1727.27
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,1.301,,0.728,0.019,0.108,$67 ,12,1,$67 ,12,776.0692
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,1.683,,0.865,0.025,0.14,$67 ,12,1,$67 ,12,
faster_rcnn_resnet50_coco,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,1.563,,0.417,0.013,0.104,$118 ,15,1,$118 ,15,640.0005
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,2.616,,0.725,0.022,0.174,$118 ,15,1,$118 ,15,389.3563
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,4.056,,1.107,0.034,0.27,$118 ,15,1,$118 ,15,
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,74.93,,19.965,0.016,0.203,"$4,678 ",370,2,"$2,339 ",185,65.5753
faster_rcnn_resnet50_coco,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,2.988,,1.473,0.026,0.046,$117 ,65,1,$117 ,65,340.7313
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,3.633,,1.926,0.017,0.104,$214 ,35,1,$214 ,35,430.1967
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,3.852,,1.982,0.02,0.059,$192 ,65,1,$192 ,65,241.5513
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,4.999,,2.648,0.016,0.143,$303 ,35,1,$303 ,35,260.2284
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,3.71,,2.005,0.008,0.106,$488 ,35,1,$488 ,35,280.1493
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,8.977,,4.542,0.015,0.072,$594 ,125,1,$594 ,125,137.1747
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,2.867,,1.464,0.012,0.04,$249 ,71,1,$249 ,71,353.2042
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,29.332,,8.19,0.009,0.14,"$3,144 ",210,2,"$1,572 ",105,78.1722
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,282.45,,32.43,0.003,0.044,"$7,166 ",450,2,"$3,583 ",225,12.03
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,85.213,,22.066,0.005,0.208,"$16,954 ",410,2,"$8,477 ",205,30.4317
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,27.847,,7.786,0.014,0.111,"$2,004 ",250,2,"$1,002 ",125,78.6604
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,7.027,,1.855,0.015,0.251,$469 ,28,1,$469 ,28,151.8783
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,13.823,,3.545,0.029,0.494,$469 ,28,1,$469 ,28,70.7933
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,16.898,,4.191,0.036,0.604,$469 ,28,1,$469 ,28,
faster_rcnn_resnet50_coco,OV-2022.3-8991,accel,Intel® Flex-170 GPU,216.3,23.42,,0.112,1.442,"$1,925 ",150,1,"$1,925 ",150,9.137
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,121.813,,39.391,0.185,0.738,$658 ,165,1,$658 ,165,11.0425
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,71.229,,17.755,0.108,0.432,$658 ,165,1,$658 ,165,19.7132
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,175.049,,44.894,0.266,1.061,$658 ,165,1,$658 ,165,
Inception-V4,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,3.12,,1.344,0.092,0.328,$34 ,9.5,1,$34 ,9.5,335.3712
Inception-V4,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,5.677,,2.364,0.085,0.473,$67 ,12,1,$67 ,12,181.8897
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,17.009,,8.302,0.254,1.417,$67 ,12,1,$67 ,12,78.0548
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,21.713,,10.37,0.324,1.809,$67 ,12,1,$67 ,12,
Inception-V4,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,15.576,,4.073,0.132,1.038,$118 ,15,1,$118 ,15,65.7272
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,28.105,,6.681,0.238,1.874,$118 ,15,1,$118 ,15,46.2616
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,41.918,,10.163,0.355,2.795,$118 ,15,1,$118 ,15,
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,881.403,,205.296,0.188,2.382,"$4,678 ",370,2,"$2,339 ",185,4.7029
Inception-V4,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,30.004,,15.51,0.256,0.462,$117 ,65,1,$117 ,65,35.0513
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,35.882,,19.222,0.168,1.025,$214 ,35,1,$214 ,35,37.6472
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,37.987,,19.998,0.198,0.584,$192 ,65,1,$192 ,65,21.5144
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,48.903,,26.356,0.161,1.397,$303 ,35,1,$303 ,35,22.2402
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,37.301,,19.475,0.076,1.066,$488 ,35,1,$488 ,35,28.572
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,92.646,,44.966,0.156,0.741,$594 ,125,1,$594 ,125,12.3153
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,28.537,,15.13,0.115,0.402,$249 ,71,1,$249 ,71,36.8888
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,301.215,,77.005,0.096,1.434,"$3,144 ",210,2,"$1,572 ",105,10.5711
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,3406.7,,331.56,0.475,7.57,"$7,166 ",450,2,"$3,583 ",225,3.23
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,937.139,,225.776,0.055,2.286,"$16,954 ",410,2,"$8,477 ",205,5.6984
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,287.767,,73.617,0.144,1.151,"$2,004 ",250,2,"$1,002 ",125,11.1114
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,71.295,,18.482,0.152,2.546,$469 ,28,1,$469 ,28,15.8294
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,158.282,,36.884,0.337,5.653,$469 ,28,1,$469 ,28,10.6245
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,182.132,,44.198,0.388,6.505,$469 ,28,1,$469 ,28,
Inception-V4,OV-2022.3-8991,accel,Intel® Flex-170 GPU,2986.91,298.6,,1.552,19.913,"$1,925 ",150,1,"$1,925 ",150,3.968
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,1054.462,,346.546,1.603,6.391,$658 ,165,1,$658 ,165,1.4898
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,493.088,,145.503,0.749,2.988,$658 ,165,1,$658 ,165,2.472
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,1056.241,,361.472,1.605,6.401,$658 ,165,1,$658 ,165,
mobilenet-ssd ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,28.032,,12.691,0.824,2.951,$34 ,9.5,1,$34 ,9.5,38.1991
mobilenet-ssd ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,50.352,,23.675,0.752,4.196,$67 ,12,1,$67 ,12,20.8993
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,100.377,,60.647,1.498,8.365,$67 ,12,1,$67 ,12,11.6812
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,148.458,,79.575,2.216,12.372,$67 ,12,1,$67 ,12,
mobilenet-ssd ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,123.806,,38.981,1.049,8.254,$118 ,15,1,$118 ,15,8.4121
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,168.473,,53.272,1.428,11.232,$118 ,15,1,$118 ,15,7.8961
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,238.511,,83.721,2.021,15.901,$118 ,15,1,$118 ,15,
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,6730.68,,1634.937,1.439,18.191,"$4,678 ",370,2,"$2,339 ",185,0.7265
mobilenet-ssd ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,244.598,,140.009,2.091,3.763,$117 ,65,1,$117 ,65,4.4481
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,303.938,,173.562,1.42,8.684,$214 ,35,1,$214 ,35,4.7946
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,304.802,,185.045,1.588,4.689,$192 ,65,1,$192 ,65,2.8007
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,412.261,,241.37,1.361,11.779,$303 ,35,1,$303 ,35,2.8749
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,315.107,,152.342,0.646,9.003,$488 ,35,1,$488 ,35,3.5896
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,774.346,,345.309,1.304,6.195,$594 ,125,1,$594 ,125,1.5452
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,233.43,,147.098,0.937,3.288,$249 ,71,1,$249 ,71,4.5879
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,2331.207,,691.743,0.741,11.101,"$3,144 ",210,2,"$1,572 ",105,1.4852
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,16445.75,,2736.2,2.295,36.546,"$7,166 ",450,2,"$3,583 ",225,0.65
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,6691.915,,1796.357,0.395,16.322,"$16,954 ",410,2,"$8,477 ",205,1.0518
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,2225.935,,667.692,1.111,8.904,"$2,004 ",250,2,"$1,002 ",125,1.5444
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,579.307,,166.959,1.235,20.69,$469 ,28,1,$469 ,28,2.0215
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,582.636,,243.945,1.242,20.808,$469 ,28,1,$469 ,28,2.548
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,744.231,,292.071,1.587,26.58,$469 ,28,1,$469 ,28,
mobilenet-ssd ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,3548.98,1412.68,,1.844,23.66,"$1,925 ",150,1,"$1,925 ",150,1.344
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,2446.221,,1003.129,3.718,14.826,$658 ,165,1,$658 ,165,0.7182
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,1265.969,,389.894,1.924,7.673,$658 ,165,1,$658 ,165,1.3894
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,2680.458,,1013.049,4.074,16.245,$658 ,165,1,$658 ,165,
mobilenet-v2 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,81.572,,45.013,2.399,8.587,$34 ,9.5,1,$34 ,9.5,13.4692
mobilenet-v2 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,143.134,,81.991,2.136,11.928,$67 ,12,1,$67 ,12,7.609
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,,,164.945,0,0,$67 ,12,1,$67 ,12,7.0306
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,227.898,,202.181,3.401,18.992,$67 ,12,1,$67 ,12,
mobilenet-v2 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,316.763,,124.654,2.684,21.118,$118 ,15,1,$118 ,15,3.391
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,525.084,,141.61,4.45,35.006,$118 ,15,1,$118 ,15,4.9197
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,807.397,,255.964,6.842,53.826,$118 ,15,1,$118 ,15,
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,14679.84,,4065.139,3.138,39.675,"$4,678 ",370,2,"$2,339 ",185,0.4828
mobilenet-v2 ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,619.507,,452.366,5.295,9.531,$117 ,65,1,$117 ,65,1.8067
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,806.314,,441.904,3.768,23.038,$214 ,35,1,$214 ,35,2.1078
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,766.072,,558.975,3.99,11.786,$192 ,65,1,$192 ,65,1.2307
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,1081.253,,664.108,3.568,30.893,$303 ,35,1,$303 ,35,1.2788
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,825.071,,413.091,1.691,23.573,$488 ,35,1,$488 ,35,1.6818
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,2067.162,,868.25,3.48,16.537,$594 ,125,1,$594 ,125,0.7363
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,594.283,,479.567,2.387,8.37,$249 ,71,1,$249 ,71,1.8531
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,5882.455,,1895.498,1.871,28.012,"$3,144 ",210,2,"$1,572 ",105,1.3871
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,28383.76,,7254.28,3.961,63.075,"$7,166 ",450,2,"$3,583 ",225,0.55
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,15616.083,,4308.927,0.921,38.088,"$16,954 ",410,2,"$8,477 ",205,0.8685
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,5616.283,,1835.686,2.803,22.465,"$2,004 ",250,2,"$1,002 ",125,1.404
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,1463.21,,538.597,3.12,52.258,$469 ,28,1,$469 ,28,0.8864
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,2076.015,,544.641,4.426,74.143,$469 ,28,1,$469 ,28,1.7212
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,2677.374,,698.942,5.709,95.621,$469 ,28,1,$469 ,28,
mobilenet-v2 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,18371.95,4738.33,,9.544,122.48,"$1,925 ",150,1,"$1,925 ",150,1.15
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,804.771,,212.574,1.223,4.877,$658 ,165,1,$658 ,165,1.3886
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,491.337,,146.839,0.747,2.978,$658 ,165,1,$658 ,165,2.2655
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,1180.984,,365.777,1.795,7.157,$658 ,165,1,$658 ,165,
resnet-18 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,22.96,,9.564,0.675,2.417,$34 ,9.5,1,$34 ,9.5,44.5491
resnet-18 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,40.944,,16.158,0.611,3.412,$67 ,12,1,$67 ,12,25.1377
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,161.381,,60.863,2.409,13.448,$67 ,12,1,$67 ,12,10.983
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,197.477,,73.927,2.947,16.456,$67 ,12,1,$67 ,12,
resnet-18 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,105.574,,28.914,0.895,7.038,$118 ,15,1,$118 ,15,9.6165
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,194.694,,55.341,1.65,12.98,$118 ,15,1,$118 ,15,6.643
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,291.441,,82.105,2.47,19.429,$118 ,15,1,$118 ,15,
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,5761.961,,1431.864,1.232,15.573,"$4,678 ",370,2,"$2,339 ",185,0.6512
resnet-18 ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,208.65,,103.699,1.783,3.21,$117 ,65,1,$117 ,65,5.0231
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,252.987,,127.448,1.182,7.228,$214 ,35,1,$214 ,35,5.2921
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,262.785,,134.002,1.369,4.043,$192 ,65,1,$192 ,65,3.0597
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,344.219,,175.433,1.136,9.835,$303 ,35,1,$303 ,35,3.1665
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,265.351,,130.166,0.544,7.581,$488 ,35,1,$488 ,35,4.0471
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,654.533,,307.741,1.102,5.236,$594 ,125,1,$594 ,125,1.6723
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,198.189,,101.399,0.796,2.791,$249 ,71,1,$249 ,71,5.2039
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,2017.368,,547.47,0.642,9.607,"$3,144 ",210,2,"$1,572 ",105,1.2913
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,27331.02,,2329.12,3.814,60.736,"$7,166 ",450,2,"$3,583 ",225,0.38
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,6320.391,,1582.817,0.373,15.416,"$16,954 ",410,2,"$8,477 ",205,0.667
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,1940.935,,522.654,0.969,7.764,"$2,004 ",250,2,"$1,002 ",125,1.3451
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,480.992,,126.244,1.026,17.178,$469 ,28,1,$469 ,28,2.242
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,1061.591,,297.705,2.264,37.914,$469 ,28,1,$469 ,28,1.793
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,1237.94,,342.513,2.64,44.212,$469 ,28,1,$469 ,28,
resnet-18 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,27454.08,2264.67,,14.262,183.027,"$1,925 ",150,1,"$1,925 ",150,0.946
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,400.118,,133.834,0.608,2.425,$658 ,165,1,$658 ,165,3.0384
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,229.863,,66.122,0.349,1.393,$658 ,165,1,$658 ,165,5.2538
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,574.341,,155.749,0.873,3.481,$658 ,165,1,$658 ,165,
resnet-50,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,11.094,,4.516,0.326,1.168,$34 ,9.5,1,$34 ,9.5,92.6182
resnet-50,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,20.114,,8.254,0.3,1.676,$67 ,12,1,$67 ,12,51.0598
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,66.119,,29.408,0.987,5.51,$67 ,12,1,$67 ,12,21.6857
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,82.95,,35.81,1.238,6.913,$67 ,12,1,$67 ,12,
resnet-50,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,52.004,,14.152,0.441,3.467,$118 ,15,1,$118 ,15,19.6053
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,90.685,,24.633,0.769,6.046,$118 ,15,1,$118 ,15,14.6415
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,140.062,,37.864,1.187,9.337,$118 ,15,1,$118 ,15,
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,2793.997,,691.079,0.597,7.551,"$4,678 ",370,2,"$2,339 ",185,1.3288
resnet-50,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,102.328,,52.896,0.875,1.574,$117 ,65,1,$117 ,65,10.4475
resnet-50,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,123.574,,63.836,0.577,3.531,$214 ,35,1,$214 ,35,11.6252
resnet-50,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,129.174,,68.242,0.673,1.987,$192 ,65,1,$192 ,65,6.8498
resnet-50,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,168.016,,88.675,0.555,4.8,$303 ,35,1,$303 ,35,6.9723
resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,129.371,,56.366,0.265,3.696,$488 ,35,1,$488 ,35,8.7659
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,317.744,,149.441,0.535,2.542,$594 ,125,1,$594 ,125,3.6469
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,97.606,,52.17,0.392,1.375,$249 ,71,1,$249 ,71,10.851
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,980.813,,268.009,0.312,4.671,"$3,144 ",210,2,"$1,572 ",105,2.9838
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,11359.88,,1118.97,1.585,25.244,"$7,166 ",450,2,"$3,583 ",225,0.94
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,2905.803,,748.583,0.171,7.087,"$16,954 ",410,2,"$8,477 ",205,1.475
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,937.572,,255.866,0.468,3.75,"$2,004 ",250,2,"$1,002 ",125,3.0985
resnet-50,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,235.061,,63.241,0.501,8.395,$469 ,28,1,$469 ,28,4.7975
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,235.061,,63.241,0.501,8.395,$469 ,28,1,$469 ,28,4.7975
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,235.061,,63.241,0.501,8.395,$469 ,28,1,$469 ,28,4.7975
resnet-50,OV-2022.3-8991,accel,Intel® Flex-170 GPU,10810.92,1005.16,,5.616,72.073,"$1,925 ",150,1,"$1,925 ",150,1.624
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,6.712,,2.394,0.01,0.041,$658 ,165,1,$658 ,165,175.7493
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,4.228,,1.262,0.006,0.026,$658 ,165,1,$658 ,165,241.7838
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,6.666,,2.393,0.01,0.04,$658 ,165,1,$658 ,165,
ssd-resnet34-1200 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.171,,0.081,0.005,0.018,$34 ,9.5,1,$34 ,9.5,5985.7525
ssd-resnet34-1200 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.31,,0.133,0.005,0.026,$67 ,12,1,$67 ,12,3246.0878
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,0.965,,0.615,0.014,0.08,$67 ,12,1,$67 ,12,1053.0078
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,0.31,,0.133,0.005,0.026,$67 ,12,1,$67 ,12,
ssd-resnet34-1200 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,0.806,,0.23,0.007,0.054,$118 ,15,1,$118 ,15,1240.6212
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,1.582,,0.486,0.013,0.105,$118 ,15,1,$118 ,15,649.3806
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,0.806,,0.231,0.007,0.054,$118 ,15,1,$118 ,15,
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,41.52,,12.672,0.009,0.112,"$4,678 ",370,2,"$2,339 ",185,79.0111
ssd-resnet34-1200 ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,1.606,,0.959,0.014,0.025,$117 ,65,1,$117 ,65,644.0626
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,1.932,,1.177,0.009,0.055,$214 ,35,1,$214 ,35,712.3677
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,2.067,,1.248,0.011,0.032,$192 ,65,1,$192 ,65,401.8765
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,2.66,,1.606,0.009,0.076,$303 ,35,1,$303 ,35,434.9877
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,2.046,,1.242,0.004,0.058,$488 ,35,1,$488 ,35,485.4343
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,4.871,,2.935,0.008,0.039,$594 ,125,1,$594 ,125,239.8346
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,1.55,,0.919,0.006,0.022,$249 ,71,1,$249 ,71,665.2714
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,15.706,,4.572,0.005,0.075,"$3,144 ",210,2,"$1,572 ",105,132.0319
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,152.74,,20.32,0.021,0.339,"$7,166 ",450,2,"$3,583 ",225,14.48
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,47.365,,14.722,0.003,0.116,"$16,954 ",410,2,"$8,477 ",205,44.387
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,14.966,,4.35,0.007,0.06,"$2,004 ",250,2,"$1,002 ",125,138.9625
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,3.556,,1.015,0.008,0.127,$469 ,28,1,$469 ,28,284.2379
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,8.239,,2.545,0.018,0.294,$469 ,28,1,$469 ,28,122.4561
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,3.565,,1.01,0.008,0.127,$469 ,28,1,$469 ,28,
ssd-resnet34-1200 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,132.44,18.19,,0.069,0.883,"$1,925 ",150,1,"$1,925 ",150,19.933
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,10.652,,3.873,0.016,0.065,$658 ,165,1,$658 ,165,111.0757
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,7.059,,2.154,0.011,0.043,$658 ,165,1,$658 ,165,142.0745
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,14.933,,4.935,0.023,0.091,$658 ,165,1,$658 ,165,
unet-camvid--0001 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.258,,0.039,0.008,0.027,$34 ,9.5,1,$34 ,9.5,3959.594
unet-camvid--0001 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.482,,0.061,0.007,0.04,$67 ,12,1,$67 ,12,2094.2569
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,1.994,,0.989,0.03,0.166,$67 ,12,1,$67 ,12,502.6095
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,2.242,,0.597,0.033,0.187,$67 ,12,1,$67 ,12,
unet-camvid--0001 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,1.471,,0.374,0.012,0.098,$118 ,15,1,$118 ,15,678.4977
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,2.715,,0.802,0.023,0.181,$118 ,15,1,$118 ,15,368.8973
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,4.12,,1.142,0.035,0.275,$118 ,15,1,$118 ,15,
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,81.838,,19.314,0.017,0.221,"$4,678 ",370,2,"$2,339 ",185,41.506
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,2.482,,1.54,0.021,0.038,$117 ,65,1,$117 ,65,412.1291
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,3.031,,1.9,0.014,0.087,$214 ,35,1,$214 ,35,457.5992
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,3.227,,2.018,0.017,0.05,$192 ,65,1,$192 ,65,256.5479
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,4.155,,2.6,0.014,0.119,$303 ,35,1,$303 ,35,277.7416
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,2.907,,2.004,0.006,0.083,$488 ,35,1,$488 ,35,319.7667
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,7.413,,4.615,0.012,0.059,$594 ,125,1,$594 ,125,157.3622
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,2.386,,1.481,0.01,0.034,$249 ,71,1,$249 ,71,422.1157
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,29.251,,7.301,0.009,0.139,"$3,144 ",210,2,"$1,572 ",105,69.3596
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,381.85,,30.96,0.053,0.849,"$7,166 ",450,2,"$3,583 ",225,7.95
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,93.081,,21.382,0.005,0.227,"$16,954 ",410,2,"$8,477 ",205,22.9476
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,27.814,,6.966,0.014,0.111,"$2,004 ",250,2,"$1,002 ",125,72.9773
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,6.54,,1.677,0.014,0.234,$469 ,28,1,$469 ,28,152.602
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,15.391,,4.571,0.033,0.55,$469 ,28,1,$469 ,28,61.6002
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,17.962,,4.848,0.038,0.642,$469 ,28,1,$469 ,28,
unet-camvid--0001 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,218.12,35.2,,0.113,1.454,"$1,925 ",150,1,"$1,925 ",150,7.149
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,428.506,,162.077,0.651,2.597,$658 ,165,1,$658 ,165,2.4778
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,245.738,,84.457,0.373,1.489,$658 ,165,1,$658 ,165,3.8792
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,598.947,,195.608,0.91,3.63,$658 ,165,1,$658 ,165,
yolo_v3_tiny,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,12.406,,6.124,0.365,1.306,$34 ,9.5,1,$34 ,9.5,83.8614
yolo_v3_tiny,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,22.94,,10.395,0.342,1.912,$67 ,12,1,$67 ,12,44.6243
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,66.641,,38.178,0.995,5.553,$67 ,12,1,$67 ,12,15.7687
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,86.38,,45.819,1.289,7.198,$67 ,12,1,$67 ,12,
yolo_v3_tiny,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,55.629,,18.246,0.471,3.709,$118 ,15,1,$118 ,15,18.2291
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,106.588,,31.376,0.903,7.106,$118 ,15,1,$118 ,15,10.8727
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,153.471,,46.125,1.301,10.231,$118 ,15,1,$118 ,15,
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,2733.627,,761.534,0.584,7.388,"$4,678 ",370,2,"$2,339 ",185,1.1267
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,114.701,,66.266,0.98,1.765,$117 ,65,1,$117 ,65,8.9295
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,141.001,,79.694,0.659,4.029,$214 ,35,1,$214 ,35,9.9196
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,145.659,,85.158,0.759,2.241,$192 ,65,1,$192 ,65,5.465
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,191.931,,109.625,0.633,5.484,$303 ,35,1,$303 ,35,5.5981
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,147.041,,84.448,0.301,4.201,$488 ,35,1,$488 ,35,7.0171
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,359.61,,173.635,0.605,2.877,$594 ,125,1,$594 ,125,2.9037
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,109.066,,64.87,0.438,1.536,$249 ,71,1,$249 ,71,9.3792
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,1058.322,,337.035,0.337,5.04,"$3,144 ",210,2,"$1,572 ",105,2.4971
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,7344.88,,1405.51,1.025,16.322,"$7,166 ",450,2,"$3,583 ",225,1.06
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,2931.242,,901.832,0.173,7.149,"$16,954 ",410,2,"$8,477 ",205,1.215
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,1015.77,,321.263,0.507,4.063,"$2,004 ",250,2,"$1,002 ",125,2.6076
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,258.05,,79.963,0.55,9.216,$469 ,28,1,$469 ,28,4.1833
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,492.645,,157.98,1.05,17.594,$469 ,28,1,$469 ,28,2.5788
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,606.117,,186.339,1.292,21.647,$469 ,28,1,$469 ,28,
yolo_v3_tiny,OV-2022.3-8991,accel,Intel® Flex-170 GPU,3634.16,1209.67,,1.888,24.228,"$1,925 ",150,1,"$1,925 ",150,1.293
end_rec,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,21.833,,7.096,0.033,0.132,$658 ,165,1,$658 ,165,58.4745
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,11.956,,3.869,0.018,0.072,$658 ,165,1,$658 ,165,85.1633
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,26.693,,8.644,0.041,0.162,$658 ,165,1,$658 ,165,
yolo_v4,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.522,,0.248,0.015,0.055,$34 ,9.5,1,$34 ,9.5,1900.0218
yolo_v4,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.99,,0.43,0.015,0.083,$67 ,12,1,$67 ,12,1019.82
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,3.413,,1.752,0.051,0.284,$67 ,12,1,$67 ,12,295.7702
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,3.999,,2.087,0.06,0.333,$67 ,12,1,$67 ,12,
yolo_v4,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,2.453,,0.748,0.021,0.164,$118 ,15,1,$118 ,15,407.2474
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,4.758,,1.434,0.04,0.317,$118 ,15,1,$118 ,15,212.7987
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,7.048,,2.122,0.06,0.47,$118 ,15,1,$118 ,15,
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,126.954,,35.481,0.027,0.343,"$4,678 ",370,2,"$2,339 ",185,37.8189
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,4.971,,2.885,0.042,0.076,$117 ,65,1,$117 ,65,203.4163
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,6.182,,3.532,0.029,0.177,$214 ,35,1,$214 ,35,227.5786
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,6.356,,3.757,0.033,0.098,$192 ,65,1,$192 ,65,123.3181
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,8.44,,4.868,0.028,0.241,$303 ,35,1,$303 ,35,135.9719
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,6.399,,3.765,0.013,0.183,$488 ,35,1,$488 ,35,155.642
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,15.614,,7.925,0.026,0.125,$594 ,125,1,$594 ,125,71.631
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,4.674,,2.804,0.019,0.066,$249 ,71,1,$249 ,71,214.0957
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,47.338,,14.464,0.015,0.225,"$3,144 ",210,2,"$1,572 ",105,45.7699
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,252.03,,58.12,0.035,0.56,"$7,166 ",450,2,"$3,583 ",225,15.01
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,131.466,,41.001,0.008,0.321,"$16,954 ",410,2,"$8,477 ",205,19.2807
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,45.047,,13.741,0.022,0.18,"$2,004 ",250,2,"$1,002 ",125,48.0344
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,11.067,,3.259,0.024,0.395,$469 ,28,1,$469 ,28,92.2912
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,25.048,,7.384,0.053,0.895,$469 ,28,1,$469 ,28,39.1492
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,29.658,,8.32,0.063,1.059,$469 ,28,1,$469 ,28,
yolo_v4,OV-2022.3-8991,accel,Intel® Flex-170 GPU,454.49,56.78,,0.236,3.03,"$1,925 ",150,1,"$1,925 ",150,6.969
end_rec,,,,,,,,,,,,,,
Network model,Release,IE-Type,Platform name,Throughput-INT8,Throughput-FP16,Throughput-FP32,Value,Efficiency,Price,TDP,Sockets,Price/socket,TDP/socket,Latency,,,
begin_rec,,,,,,,,,,,,,,,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,163.72,,57.83,0.273,1.310,$599 ,125,1,$599,125,15.53,,,
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,56.07,,23.3,0.094,0.449,$599 ,125,1,$599,125,19.62,,,
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,210.17,,85.83,0.351,1.681,$599 ,125,1,$599,125,,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,128.05,,45.94,0.389,1.024,$329 ,125,1,$329,125,12.71,,,
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,53.03,,21.9,0.161,0.424,$329 ,125,1,$329,125,20.81,,,
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,163.33,,64.74,0.496,1.307,$329 ,125,1,$329,125,,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,96.06,,35.627,0.146,0.582,$658 ,165,1,$658 ,165,17.1432,,,
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,53.093,,22.253,0.081,0.322,$658 ,165,1,$658 ,165,22.0002,,,
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,108.306,,44.797,0.165,0.656,$658 ,165,1,$658 ,165,,,,
bert-base-cased ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,2.763,,1.332,0.081,0.291,$34 ,9.5,1,$34 ,9.5,350.2746,,,
bert-base-cased ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,5.694,,2.002,0.085,0.475,$67 ,12,1,$67 ,12,183.1711,,,
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,16.11,,10.009,0.240,1.343,$67 ,12,1,$67 ,12,79.7607,,,
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,21.128,,11.81,0.315,1.761,$67 ,12,1,$67 ,12,,,,
bert-base-cased ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,14.212,,4.255,0.120,0.947,$118 ,15,1,$118 ,15,72.6516,,,
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,18.983,,8.696,0.161,1.266,$118 ,15,1,$118 ,15,62.9729,,,
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,30.866,,12.087,0.262,2.058,$118 ,15,1,$118 ,15,,,,
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,645.77,,213.877,0.138,1.745,"$4,678 ",370,2,"$2,339 ",185,6.7612,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,23.721,,14.457,0.203,0.365,$117 ,65,1,$117 ,65,44.0371,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,30.141,,16.38,0.141,0.861,$214 ,35,1,$214 ,35,46.6064,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,30.541,,19.319,0.159,0.470,$192 ,65,1,$192 ,65,27.8871,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,41.504,,22.75,0.137,1.186,$303 ,35,1,$303 ,35,27.974,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,32.073,,16.558,0.066,0.916,$488 ,35,1,$488 ,35,39.7617,,,
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,69.053,,40.243,0.116,0.552,$594 ,125,1,$594 ,125,18.309,,,
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,23.402,,14.614,0.094,0.330,$249 ,71,1,$249 ,71,44.8984,,,
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,266.949,,79.033,0.085,1.271,"$3,144 ",210,2,"$1,572 ",105,12.4065,,,
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,2090.76,1372.2,1368.68,0.292,4.646,"$7,166 ",450,2,"$3,583 ",225,4.61,,,
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,682.593,,225.713,0.040,1.665,"$16,954 ",410,2,"$8,477 ",205,6.9035,,,
bert-base-cased ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,256.994,,75.502,0.128,1.028,"$2,004 ",250,2,"$1,002 ",125,13.0382,,,
bert-base-cased ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,64.632,,18.394,0.138,2.308,$469 ,28,1,$469 ,28,17.638,,,
bert-base-cased ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,95.656,,44.056,0.204,3.416,$469 ,28,1,$469 ,28,14.1005,,,
bert-base-cased ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,128.005,,50.592,0.273,4.572,$469 ,28,1,$469 ,28,,,,
bert-base-cased ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,906.3,348.52,,0.471,6.042,"$1,925 ",150,1,"$1,925 ",150,7.381,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,51.72,,17.6,0.086,0.414,$599 ,125,1,$599 ,125,49.13,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,19.05,,6.88,0.032,0.152,$599 ,125,1,$599 ,125,55.82,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,53,,19.95,0.088,0.424,$599 ,125,1,$599 ,125,,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,35.31,,11.04,0.107,0.282,$329 ,125,1,$329 ,125,41.56,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,17.93,,6.46,0.054,0.143,$329 ,125,1,$329 ,125,59.37,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,43.42,,16.19,0.132,0.347,$329 ,125,1,$329 ,125,,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,7.714,,3.093,0.012,0.047,$658 ,165,1,$658 ,165,155.3633,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,5.617,,1.978,0.009,0.034,$658 ,165,1,$658 ,165,181.8303,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,10.602,,3.753,0.016,0.064,$658 ,165,1,$658 ,165,,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.272,,0.125,0.008,0.029,$34 ,9.5,1,$34 ,9.5,3861.0657,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.488,,0.188,0.007,0.041,$67 ,12,1,$67 ,12,2090.8266,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,1.395,,0.774,0.021,0.116,$67 ,12,1,$67 ,12,727.6781,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,1.827,,0.845,0.027,0.152,$67 ,12,1,$67 ,12,,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,1.199,,0.377,0.010,0.080,$118 ,15,1,$118 ,15,831.301,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,2.051,,0.766,0.017,0.137,$118 ,15,1,$118 ,15,494.5363,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,3.118,,1.127,0.026,0.208,$118 ,15,1,$118 ,15,,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,43.436,,17.379,0.009,0.117,"$4,678 ",370,2,"$2,339 ",185,52.2862,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,2.067,,1.278,0.018,0.032,$117 ,65,1,$117 ,65,495.0786,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,2.619,,1.536,0.012,0.075,$214 ,35,1,$214 ,35,502.3687,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,2.72,,1.679,0.014,0.042,$192 ,65,1,$192 ,65,320.168,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,3.625,,2.11,0.012,0.104,$303 ,35,1,$303 ,35,309.2848,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,2.906,,1.693,0.006,0.083,$488 ,35,1,$488 ,35,386.4947,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,4.801,,2.729,0.008,0.038,$594 ,125,1,$594 ,125,200.0794,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,2.098,,1.32,0.008,0.030,$249 ,71,1,$249 ,71,492.0938,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,21.062,,7.021,0.007,0.100,"$3,144 ",210,2,"$1,572 ",105,101.4694,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,651.95,378.57,384.02,0.091,1.449,"$7,166 ",450,2,"$3,583 ",225,12.87,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,46.064,,19.051,0.003,0.112,"$16,954 ",410,2,"$8,477 ",205,49.4869,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,20.014,,6.726,0.010,0.080,"$2,004 ",250,2,"$1,002 ",125,105.9423,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,5.192,,1.626,0.011,0.185,$469 ,28,1,$469 ,28,203.6311,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,10.476,,3.914,0.022,0.374,$469 ,28,1,$469 ,28,95.6598,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,11.75,,4.168,0.025,0.420,$469 ,28,1,$469 ,28,,,,
bert-large-uncased-whole-word-masking-squad-0001 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,74.47,25.77,,0.039,0.496,"$1,925 ",150,1,"$1,925 ",150,19.768,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,184.93,,63.79,0.309,1.479,$599 ,125,1,$599 ,125,10.31,,,
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,69.31,,22.67,0.116,0.554,$599 ,125,1,$599 ,125,15.02,,,
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,191.48,,62.99,0.320,1.532,$599 ,125,1,$599 ,125,,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,139.02,,48.48,0.423,1.112,$329 ,125,1,$329 ,125,10.48,,,
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,65.55,,21.24,0.199,0.524,$329 ,125,1,$329 ,125,16.12,,,
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,154.19,,52.87,0.469,1.234,$329 ,125,1,$329 ,125,,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,99.078,,36.552,0.151,0.600,$658 ,165,1,$658 ,165,11.269,,,
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,57.707,,13.789,0.088,0.350,$658 ,165,1,$658 ,165,16.263,,,
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,115.59,,39.82,0.176,0.701,$658 ,165,1,$658 ,165,,,,
deeplabv3,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,3.327,,1.496,0.098,0.350,$34 ,9.5,1,$34 ,9.5,308.0916,,,
deeplabv3,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,6.07,,3.041,0.091,0.506,$67 ,12,1,$67 ,12,166.5404,,,
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,,,5.196,0.000,0.000,$67 ,12,1,$67 ,12,217.0439,,,
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,9.877,,7.145,0.147,0.823,$67 ,12,1,$67 ,12,,,,
deeplabv3,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,13.516,,4.681,0.115,0.901,$118 ,15,1,$118 ,15,74.1061,,,
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,22.35,,4.635,0.189,1.490,$118 ,15,1,$118 ,15,42.9657,,,
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,34.576,,8.955,0.293,2.305,$118 ,15,1,$118 ,15,,,,
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,559.145,,134.159,0.120,1.511,"$4,678 ",370,2,"$2,339 ",185,5.356,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,27.171,,15.947,0.232,0.418,$117 ,65,1,$117 ,65,36.6584,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,34.907,,16.72,0.163,0.997,$214 ,35,1,$214 ,35,38.8986,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,35.07,,20.497,0.183,0.540,$192 ,65,1,$192 ,65,22.1865,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,47.647,,23.747,0.157,1.361,$303 ,35,1,$303 ,35,22.628,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,36.559,,18.235,0.075,1.045,$488 ,35,1,$488 ,35,27.138,,,
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,79.42,,21.03,0.134,0.635,$594 ,125,1,$594 ,125,12.8397,,,
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,26.173,,16.906,0.105,0.369,$249 ,71,1,$249 ,71,37.9245,,,
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,248.049,,81.667,0.079,1.181,"$3,144 ",210,2,"$1,572 ",105,8.9485,,,
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,1139.5,702.28,699.04,0.159,2.532,"$7,166 ",450,2,"$3,583 ",225,2.47,,,
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,632.113,,168.65,0.037,1.542,"$16,954 ",410,2,"$8,477 ",205,4.0073,,,
deeplabv3,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,241.703,,78.963,0.121,0.967,"$2,004 ",250,2,"$1,002 ",125,9.356,,,
deeplabv3,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,64.13,,18.519,0.137,2.290,$469 ,28,1,$469 ,28,16.6586,,,
deeplabv3,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,104.926,,24.592,0.224,3.747,$469 ,28,1,$469 ,28,9.1435,,,
deeplabv3,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,121.441,,30.498,0.259,4.337,$469 ,28,1,$469 ,28,,,,
deeplabv3,OV-2022.3-8991,accel,Intel® Flex-170 GPU,882.04,98.95,,0.458,5.88,"$1,925 ",150,1,"$1,925 ",150,2.674,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,777.86,,284.56,1.299,6.223,$599 ,125,1,$599 ,125,3.26,,,
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,195.3,,66.46,0.326,1.562,$599 ,125,1,$599 ,125,6.8,,,
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,899.5,,293.29,1.502,7.196,$599 ,125,1,$599 ,125,,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,612.99,,184.9,1.863,4.904,$329 ,125,1,$329 ,125,3.12,,,
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,178.37,,62.69,0.542,1.427,$329 ,125,1,$329 ,125,8.37,,,
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,707.99,,207.12,2.152,5.664,$329 ,125,1,$329 ,125,,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,457.193,,165.166,0.695,2.771,$658 ,165,1,$658 ,165,3.141,,,
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,203.417,,68.438,0.309,1.233,$658 ,165,1,$658 ,165,6.6728,,,
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,575.442,,179.858,0.875,3.488,$658 ,165,1,$658 ,165,,,,
densenet-121,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,13.344,,5.882,0.392,1.405,$34 ,9.5,1,$34 ,9.5,80.7014,,,
densenet-121,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,24.172,,10.554,0.361,2.014,$67 ,12,1,$67 ,12,43.668,,,
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,,,30.615,0.000,0.000,$67 ,12,1,$67 ,12,30.0241,,,
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,39.365,,38.926,0.588,3.280,$67 ,12,1,$67 ,12,,,,
densenet-121,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,58.965,,15.713,0.500,3.931,$118 ,15,1,$118 ,15,18.3425,,,
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,86.162,,25.34,0.730,5.744,$118 ,15,1,$118 ,15,20.7907,,,
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,140.891,,39.929,1.194,9.393,$118 ,15,1,$118 ,15,,,,
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,3094.742,,701.32,0.662,8.364,"$4,678 ",370,2,"$2,339 ",185,2.131,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,120.121,,66.624,1.027,1.848,$117 ,65,1,$117 ,65,9.3755,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,142.623,,80.872,0.666,4.075,$214 ,35,1,$214 ,35,10.2536,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,149.112,,85.051,0.777,2.294,$192 ,65,1,$192 ,65,6.0817,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,194.559,,111.988,0.642,5.559,$303 ,35,1,$303 ,35,6.1906,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,146.463,,69.186,0.300,4.185,$488 ,35,1,$488 ,35,8.6496,,,
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,360.501,,182.543,0.607,2.884,$594 ,125,1,$594 ,125,3.6046,,,
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,114.844,,67.188,0.461,1.618,$249 ,71,1,$249 ,71,9.7609,,,
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,1116.372,,295.952,0.355,5.316,"$3,144 ",210,2,"$1,572 ",105,3.9606,,,
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,8279.14,4856.54,4862.51,1.155,18.398,"$7,166 ",450,2,"$3,583 ",225,2.39,,,
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,3155.106,,815.725,0.186,7.695,"$16,954 ",410,2,"$8,477 ",205,2.8831,,,
densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,1064.824,,283.423,0.531,4.259,"$2,004 ",250,2,"$1,002 ",125,4.0689,,,
densenet-121,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,265.167,,74.501,0.565,9.470,$469 ,28,1,$469 ,28,4.7413,,,
densenet-121,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,391.185,,123.519,0.834,13.971,$469 ,28,1,$469 ,28,6.5259,,,
densenet-121,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,526.12,,150.35,1.122,18.790,$469 ,28,1,$469 ,28,,,,
densenet-121,OV-2022.3-8991,accel,Intel® Flex-170 GPU,3440.18,1178.68,,1.787,22.935,"$1,925 ",150,1,"$1,925 ",150,3.302,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,209.26,,106.11,0.349,1.674,$599 ,125,1,$599 ,125,10.36,,,
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,82.04,,47.85,0.137,0.656,$599 ,125,1,$599 ,125,22.35,,,
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,197.85,,108.3,0.330,1.583,$599 ,125,1,$599 ,125,,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,155.65,,90.91,0.473,1.245,$329 ,125,1,$329 ,125,9.92,,,
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,77.28,,44.91,0.235,0.618,$329 ,125,1,$329 ,125,22.93,,,
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,172.54,,95.94,0.524,1.380,$329 ,125,1,$329 ,125,,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,112.297,,64.06,0.171,0.681,$658 ,165,1,$658 ,165,11.8265,,,
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,73.766,,38.742,0.112,0.447,$658 ,165,1,$658 ,165,21.403,,,
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,128.735,,76.62,0.196,0.780,$658 ,165,1,$658 ,165,,,,
efficientdet-d0,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,3.812,,2.565,0.112,0.401,$34 ,9.5,1,$34 ,9.5,274.5947,,,
efficientdet-d0,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,7.248,,5.17,0.108,0.604,$67 ,12,1,$67 ,12,143.7999,,,
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,22.697,,15.635,0.339,1.891,$67 ,12,1,$67 ,12,59.0651,,,
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,26.855,,17.296,0.401,2.238,$67 ,12,1,$67 ,12,,,,
efficientdet-d0,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,15.949,,10.417,0.135,1.063,$118 ,15,1,$118 ,15,62.2765,,,
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,25.936,,14.073,0.220,1.729,$118 ,15,1,$118 ,15,54.0166,,,
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,32.767,,20.733,0.278,2.184,$118 ,15,1,$118 ,15,,,,
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,424.388,,256.503,0.091,1.147,"$4,678 ",370,2,"$2,339 ",185,,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,36.666,,24.041,0.313,0.564,$117 ,65,1,$117 ,65,30.2521,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,45.331,,26.92,0.212,1.295,$214 ,35,1,$214 ,35,32.8946,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,44.921,,32.357,0.234,0.691,$192 ,65,1,$192 ,65,19.7048,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,62.749,,37.807,0.207,1.793,$303 ,35,1,$303 ,35,19.901,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,50.35,,29.935,0.103,1.439,$488 ,35,1,$488 ,35,24.2916,,,
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,94.981,,36.434,0.160,0.760,$594 ,125,1,$594 ,125,12.658,,,
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,35.831,,27.306,0.144,0.505,$249 ,71,1,$249 ,71,30.9469,,,
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,239.06,,161.224,0.076,1.138,"$3,144 ",210,2,"$1,572 ",105,13.9735,,,
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,875.53,495.04,492.93,0.122,1.946,"$7,166 ",450,2,"$3,583 ",225,5.07,,,
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,471.02,,300.291,0.028,1.149,"$16,954 ",410,2,"$8,477 ",205,9.3866,,,
efficientdet-d0,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,231.873,,156.285,0.116,0.927,"$2,004 ",250,2,"$1,002 ",125,14.1605,,,
efficientdet-d0,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,71.482,,41.123,0.152,2.553,$469 ,28,1,$469 ,28,16.6952,,,
efficientdet-d0,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,92.52,,50.538,0.197,3.304,$469 ,28,1,$469 ,28,17.295,,,
efficientdet-d0,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,107.688,,56.901,0.230,3.846,$469 ,28,1,$469 ,28,,,,
efficientdet-d0,OV-2022.3-8991,accel,Intel® Flex-170 GPU,463.67,295.13,,0.241,3.091,"$1,925 ",150,1,"$1,925 ",150,5.603,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,5.94,,2.41,0.010,0.048,$599 ,125,1,$599 ,125,270.57,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,2.3,,0.71,0.004,0.018,$599 ,125,1,$599 ,125,437.94,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,6.45,,2.25,0.011,0.052,$599 ,125,1,$599 ,125,,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,4.55,,1.88,0.014,0.036,$329 ,125,1,$329 ,125,310.58,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,2.17,,0.67,0.007,0.017,$329 ,125,1,$329 ,125,465.03,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,5.3,,2.01,0.016,0.042,$329 ,125,1,$329 ,125,,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,12.921,,4.016,0.020,0.078,$658 ,165,1,$658 ,165,89.8929,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,6.802,,1.82,0.010,0.041,$658 ,165,1,$658 ,165,149.7396,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,15.679,,4.499,0.024,0.095,$658 ,165,1,$658 ,165,,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.32,,0.131,0.009,0.034,$34 ,9.5,1,$34 ,9.5,3206.1652,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.592,,0.242,0.009,0.049,$67 ,12,1,$67 ,12,1727.27,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,1.301,,0.728,0.019,0.108,$67 ,12,1,$67 ,12,776.0692,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,1.683,,0.865,0.025,0.140,$67 ,12,1,$67 ,12,,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,1.563,,0.417,0.013,0.104,$118 ,15,1,$118 ,15,640.0005,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,2.616,,0.725,0.022,0.174,$118 ,15,1,$118 ,15,389.3563,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,4.056,,1.107,0.034,0.270,$118 ,15,1,$118 ,15,,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,74.93,,19.965,0.016,0.203,"$4,678 ",370,2,"$2,339 ",185,65.5753,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,2.988,,1.473,0.026,0.046,$117 ,65,1,$117 ,65,340.7313,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,3.633,,1.926,0.017,0.104,$214 ,35,1,$214 ,35,430.1967,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,3.852,,1.982,0.020,0.059,$192 ,65,1,$192 ,65,241.5513,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,4.999,,2.648,0.016,0.143,$303 ,35,1,$303 ,35,260.2284,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,3.71,,2.005,0.008,0.106,$488 ,35,1,$488 ,35,280.1493,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,8.977,,4.542,0.015,0.072,$594 ,125,1,$594 ,125,137.1747,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,2.867,,1.464,0.012,0.040,$249 ,71,1,$249 ,71,353.2042,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,29.332,,8.19,0.009,0.140,"$3,144 ",210,2,"$1,572 ",105,78.1722,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,19.71,18.01,18.15,0.003,0.044,"$7,166 ",450,2,"$3,583 ",225,129.2,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,85.213,,22.066,0.005,0.208,"$16,954 ",410,2,"$8,477 ",205,30.4317,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,27.847,,7.786,0.014,0.111,"$2,004 ",250,2,"$1,002 ",125,78.6604,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,7.027,,1.855,0.015,0.251,$469 ,28,1,$469 ,28,151.8783,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,13.823,,3.545,0.029,0.494,$469 ,28,1,$469 ,28,70.7933,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,16.898,,4.191,0.036,0.604,$469 ,28,1,$469 ,28,,,,
faster_rcnn_resnet50_coco,OV-2022.3-8991,accel,Intel® Flex-170 GPU,216.3,23.42,,0.112,1.442,"$1,925 ",150,1,"$1,925 ",150,9.137,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,219.06,,71.15,0.366,1.752,$599 ,125,1,$599 ,125,10.19,,,
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,65.91,,18.1,0.110,0.527,$599 ,125,1,$599 ,125,16.55,,,
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,279.58,,78.65,0.467,2.237,$599 ,125,1,$599 ,125,,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,171.19,,45.8,0.520,1.370,$329 ,125,1,$329 ,125,9.14,,,
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,62.45,,17.02,0.190,0.500,$329 ,125,1,$329 ,125,17.48,,,
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,219.02,,52.56,0.666,1.752,$329 ,125,1,$329 ,125,,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,121.813,,39.391,0.185,0.738,$658 ,165,1,$658 ,165,11.0425,,,
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,71.229,,17.755,0.108,0.432,$658 ,165,1,$658 ,165,19.7132,,,
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,175.049,,44.894,0.266,1.061,$658 ,165,1,$658 ,165,,,,
Inception-V4,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,3.12,,1.344,0.092,0.328,$34 ,9.5,1,$34 ,9.5,335.3712,,,
Inception-V4,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,5.677,,2.364,0.085,0.473,$67 ,12,1,$67 ,12,181.8897,,,
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,17.009,,8.302,0.254,1.417,$67 ,12,1,$67 ,12,78.0548,,,
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,21.713,,10.37,0.324,1.809,$67 ,12,1,$67 ,12,,,,
Inception-V4,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,15.576,,4.073,0.132,1.038,$118 ,15,1,$118 ,15,65.7272,,,
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,28.105,,6.681,0.238,1.874,$118 ,15,1,$118 ,15,46.2616,,,
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,41.918,,10.163,0.355,2.795,$118 ,15,1,$118 ,15,,,,
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,881.403,,205.296,0.188,2.382,"$4,678 ",370,2,"$2,339 ",185,4.7029,,,
Inception-V4,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,30.004,,15.51,0.256,0.462,$117 ,65,1,$117 ,65,35.0513,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,35.882,,19.222,0.168,1.025,$214 ,35,1,$214 ,35,37.6472,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,37.987,,19.998,0.198,0.584,$192 ,65,1,$192 ,65,21.5144,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,48.903,,26.356,0.161,1.397,$303 ,35,1,$303 ,35,22.2402,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,37.301,,19.475,0.076,1.066,$488 ,35,1,$488 ,35,28.572,,,
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,92.646,,44.966,0.156,0.741,$594 ,125,1,$594 ,125,12.3153,,,
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,28.537,,15.13,0.115,0.402,$249 ,71,1,$249 ,71,36.8888,,,
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,301.215,,77.005,0.096,1.434,"$3,144 ",210,2,"$1,572 ",105,10.5711,,,
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,3406.7,1879.09,1867.99,0.475,7.570,"$7,166 ",450,2,"$3,583 ",225,3.23,,,
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,937.139,,225.776,0.055,2.286,"$16,954 ",410,2,"$8,477 ",205,5.6984,,,
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,287.767,,73.617,0.144,1.151,"$2,004 ",250,2,"$1,002 ",125,11.1114,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,71.295,,18.482,0.152,2.546,$469 ,28,1,$469 ,28,15.8294,,,
Inception-V4,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,158.282,,36.884,0.337,5.653,$469 ,28,1,$469 ,28,10.6245,,,
Inception-V4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,182.132,,44.198,0.388,6.505,$469 ,28,1,$469 ,28,,,,
Inception-V4,OV-2022.3-8991,accel,Intel® Flex-170 GPU,2986.91,298.6,,1.552,19.913,"$1,925 ",150,1,"$1,925 ",150,3.968,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,1754.44,,664.82,2.929,14.036,$599 ,125,1,$599 ,125,1.4,,,
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,528.03,,168.57,0.882,4.224,$599 ,125,1,$599 ,125,2.35,,,
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,1568.83,,665.79,2.619,12.551,$599 ,125,1,$599 ,125,,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,1240.01,,437.11,3.769,9.920,$329 ,125,1,$329 ,125,1.47,,,
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,493.18,,157.94,1.499,3.945,$329 ,125,1,$329 ,125,2.43,,,
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,1063.27,,454.21,3.232,8.506,$329 ,125,1,$329 ,125,,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,1054.462,,346.546,1.603,6.391,$658 ,165,1,$658 ,165,1.4898,,,
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,493.088,,145.503,0.749,2.988,$658 ,165,1,$658 ,165,2.472,,,
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,1056.241,,361.472,1.605,6.401,$658 ,165,1,$658 ,165,,,,
mobilenet-ssd ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,28.032,,12.691,0.824,2.951,$34 ,9.5,1,$34 ,9.5,38.1991,,,
mobilenet-ssd ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,50.352,,23.675,0.752,4.196,$67 ,12,1,$67 ,12,20.8993,,,
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,100.377,,60.647,1.498,8.365,$67 ,12,1,$67 ,12,11.6812,,,
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,148.458,,79.575,2.216,12.372,$67 ,12,1,$67 ,12,,,,
mobilenet-ssd ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,123.806,,38.981,1.049,8.254,$118 ,15,1,$118 ,15,8.4121,,,
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,168.473,,53.272,1.428,11.232,$118 ,15,1,$118 ,15,7.8961,,,
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,238.511,,83.721,2.021,15.901,$118 ,15,1,$118 ,15,,,,
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,6730.68,,1634.937,1.439,18.191,"$4,678 ",370,2,"$2,339 ",185,0.7265,,,
mobilenet-ssd ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,244.598,,140.009,2.091,3.763,$117 ,65,1,$117 ,65,4.4481,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,303.938,,173.562,1.420,8.684,$214 ,35,1,$214 ,35,4.7946,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,304.802,,185.045,1.588,4.689,$192 ,65,1,$192 ,65,2.8007,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,412.261,,241.37,1.361,11.779,$303 ,35,1,$303 ,35,2.8749,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,315.107,,152.342,0.646,9.003,$488 ,35,1,$488 ,35,3.5896,,,
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,774.346,,345.309,1.304,6.195,$594 ,125,1,$594 ,125,1.5452,,,
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,233.43,,147.098,0.937,3.288,$249 ,71,1,$249 ,71,4.5879,,,
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,2331.207,,691.743,0.741,11.101,"$3,144 ",210,2,"$1,572 ",105,1.4852,,,
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,16445.75,8733.64,8626.42,2.295,36.546,"$7,166 ",450,2,"$3,583 ",225,0.65,,,
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,6691.915,,1796.357,0.395,16.322,"$16,954 ",410,2,"$8,477 ",205,1.0518,,,
mobilenet-ssd ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,2225.935,,667.692,1.111,8.904,"$2,004 ",250,2,"$1,002 ",125,1.5444,,,
mobilenet-ssd ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,579.307,,166.959,1.235,20.690,$469 ,28,1,$469 ,28,2.0215,,,
mobilenet-ssd ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,582.636,,243.945,1.242,20.808,$469 ,28,1,$469 ,28,2.548,,,
mobilenet-ssd ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,744.231,,292.071,1.587,26.580,$469 ,28,1,$469 ,28,,,,
mobilenet-ssd ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,3548.98,1412.68,,1.844,23.66,"$1,925 ",150,1,"$1,925 ",150,1.344,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,4041.77,,2123.33,6.748,32.334,$599 ,125,1,$599 ,125,0.66,,,
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,978.61,,424.34,1.634,7.829,$599 ,125,1,$599 ,125,1.21,,,
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,4630.44,,1944.62,7.730,37.044,$599 ,125,1,$599 ,125,,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,3306.92,,1403.57,10.051,26.455,$329 ,125,1,$329 ,125,0.65,,,
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,919.85,,384.42,2.796,7.359,$329 ,125,1,$329 ,125,1.36,,,
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,3556.06,,1332.32,10.809,28.448,$329 ,125,1,$329 ,125,,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,2446.221,,1003.129,3.718,14.826,$658 ,165,1,$658 ,165,0.7182,,,
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,1265.969,,389.894,1.924,7.673,$658 ,165,1,$658 ,165,1.3894,,,
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,2680.458,,1013.049,4.074,16.245,$658 ,165,1,$658 ,165,,,,
mobilenet-v2 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,81.572,,45.013,2.399,8.587,$34 ,9.5,1,$34 ,9.5,13.4692,,,
mobilenet-v2 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,143.134,,81.991,2.136,11.928,$67 ,12,1,$67 ,12,7.609,,,
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,,,164.945,0.000,0.000,$67 ,12,1,$67 ,12,7.0306,,,
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,227.898,,202.181,3.401,18.992,$67 ,12,1,$67 ,12,,,,
mobilenet-v2 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,316.763,,124.654,2.684,21.118,$118 ,15,1,$118 ,15,3.391,,,
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,525.084,,141.61,4.450,35.006,$118 ,15,1,$118 ,15,4.9197,,,
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,807.397,,255.964,6.842,53.826,$118 ,15,1,$118 ,15,,,,
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,14679.84,,4065.139,3.138,39.675,"$4,678 ",370,2,"$2,339 ",185,0.4828,,,
mobilenet-v2 ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,619.507,,452.366,5.295,9.531,$117 ,65,1,$117 ,65,1.8067,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,806.314,,441.904,3.768,23.038,$214 ,35,1,$214 ,35,2.1078,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,766.072,,558.975,3.990,11.786,$192 ,65,1,$192 ,65,1.2307,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,1081.253,,664.108,3.568,30.893,$303 ,35,1,$303 ,35,1.2788,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,825.071,,413.091,1.691,23.573,$488 ,35,1,$488 ,35,1.6818,,,
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,2067.162,,868.25,3.480,16.537,$594 ,125,1,$594 ,125,0.7363,,,
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,594.283,,479.567,2.387,8.370,$249 ,71,1,$249 ,71,1.8531,,,
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,5882.455,,1895.498,1.871,28.012,"$3,144 ",210,2,"$1,572 ",105,1.3871,,,
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,28383.76,16212.74,16065.38,3.961,63.075,"$7,166 ",450,2,"$3,583 ",225,0.55,,,
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,15616.083,,4308.927,0.921,38.088,"$16,954 ",410,2,"$8,477 ",205,0.8685,,,
mobilenet-v2 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,5616.283,,1835.686,2.803,22.465,"$2,004 ",250,2,"$1,002 ",125,1.404,,,
mobilenet-v2 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,1463.21,,538.597,3.120,52.258,$469 ,28,1,$469 ,28,0.8864,,,
mobilenet-v2 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,2076.015,,544.641,4.426,74.143,$469 ,28,1,$469 ,28,1.7212,,,
mobilenet-v2 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,2677.374,,698.942,5.709,95.621,$469 ,28,1,$469 ,28,,,,
mobilenet-v2 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,18371.95,4738.33,,9.544,122.48,"$1,925 ",150,1,"$1,925 ",150,1.15,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,1495.77,,415.82,2.497,11.966,$599 ,125,1,$599 ,125,1.38,,,
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,497.54,,150.99,0.831,3.980,$599 ,125,1,$599 ,125,2.19,,,
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,1821.4,,615.14,3.041,14.571,$599 ,125,1,$599 ,125,,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,1169.04,,336.09,3.553,9.352,$329 ,125,1,$329 ,125,1.5,,,
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,467.43,,141.76,1.421,3.739,$329 ,125,1,$329 ,125,2.36,,,
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,1443.42,,445.25,4.387,11.547,$329 ,125,1,$329 ,125,,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,804.771,,212.574,1.223,4.877,$658 ,165,1,$658 ,165,1.3886,,,
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,491.337,,146.839,0.747,2.978,$658 ,165,1,$658 ,165,2.2655,,,
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,1180.984,,365.777,1.795,7.157,$658 ,165,1,$658 ,165,,,,
resnet-18 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,22.96,,9.564,0.675,2.417,$34 ,9.5,1,$34 ,9.5,44.5491,,,
resnet-18 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,40.944,,16.158,0.611,3.412,$67 ,12,1,$67 ,12,25.1377,,,
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,161.381,,60.863,2.409,13.448,$67 ,12,1,$67 ,12,10.983,,,
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,197.477,,73.927,2.947,16.456,$67 ,12,1,$67 ,12,,,,
resnet-18 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,105.574,,28.914,0.895,7.038,$118 ,15,1,$118 ,15,9.6165,,,
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,194.694,,55.341,1.650,12.980,$118 ,15,1,$118 ,15,6.643,,,
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,291.441,,82.105,2.470,19.429,$118 ,15,1,$118 ,15,,,,
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,5761.961,,1431.864,1.232,15.573,"$4,678 ",370,2,"$2,339 ",185,0.6512,,,
resnet-18 ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,208.65,,103.699,1.783,3.210,$117 ,65,1,$117 ,65,5.0231,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,252.987,,127.448,1.182,7.228,$214 ,35,1,$214 ,35,5.2921,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,262.785,,134.002,1.369,4.043,$192 ,65,1,$192 ,65,3.0597,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,344.219,,175.433,1.136,9.835,$303 ,35,1,$303 ,35,3.1665,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,265.351,,130.166,0.544,7.581,$488 ,35,1,$488 ,35,4.0471,,,
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,654.533,,307.741,1.102,5.236,$594 ,125,1,$594 ,125,1.6723,,,
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,198.189,,101.399,0.796,2.791,$249 ,71,1,$249 ,71,5.2039,,,
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,2017.368,,547.47,0.642,9.607,"$3,144 ",210,2,"$1,572 ",105,1.2913,,,
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,27331.02,16095.24,16009.04,3.814,60.736,"$7,166 ",450,2,"$3,583 ",225,0.38,,,
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,6320.391,,1582.817,0.373,15.416,"$16,954 ",410,2,"$8,477 ",205,0.667,,,
resnet-18 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,1940.935,,522.654,0.969,7.764,"$2,004 ",250,2,"$1,002 ",125,1.3451,,,
resnet-18 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,480.992,,126.244,1.026,17.178,$469 ,28,1,$469 ,28,2.242,,,
resnet-18 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,1061.591,,297.705,2.264,37.914,$469 ,28,1,$469 ,28,1.793,,,
resnet-18 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,1237.94,,342.513,2.640,44.212,$469 ,28,1,$469 ,28,,,,
resnet-18 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,27454.08,2264.67,,14.262,183.027,"$1,925 ",150,1,"$1,925 ",150,0.946,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,729.93,,240.59,1.219,5.839,$599 ,125,1,$599 ,125,2.91,,,
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,238.44,,68.18,0.398,1.908,$599 ,125,1,$599 ,125,4.74,,,
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,895.28,,255.91,1.495,7.162,$599 ,125,1,$599 ,125,,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,576.86,,153.71,1.753,4.615,$329 ,125,1,$329 ,125,3.04,,,
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,216.97,,64.36,0.659,1.736,$329 ,125,1,$329 ,125,5.3,,,
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,717.91,,188.59,2.182,5.743,$329 ,125,1,$329 ,125,,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,400.118,,133.834,0.608,2.425,$658 ,165,1,$658 ,165,3.0384,,,
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,229.863,,66.122,0.349,1.393,$658 ,165,1,$658 ,165,5.2538,,,
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,574.341,,155.749,0.873,3.481,$658 ,165,1,$658 ,165,,,,
resnet-50,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,11.094,,4.516,0.326,1.168,$34 ,9.5,1,$34 ,9.5,92.6182,,,
resnet-50,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,20.114,,8.254,0.300,1.676,$67 ,12,1,$67 ,12,51.0598,,,
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,66.119,,29.408,0.987,5.510,$67 ,12,1,$67 ,12,21.6857,,,
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,82.95,,35.81,1.238,6.913,$67 ,12,1,$67 ,12,,,,
resnet-50,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,52.004,,14.152,0.441,3.467,$118 ,15,1,$118 ,15,19.6053,,,
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,90.685,,24.633,0.769,6.046,$118 ,15,1,$118 ,15,14.6415,,,
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,140.062,,37.864,1.187,9.337,$118 ,15,1,$118 ,15,,,,
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,2793.997,,691.079,0.597,7.551,"$4,678 ",370,2,"$2,339 ",185,1.3288,,,
resnet-50,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,102.328,,52.896,0.875,1.574,$117 ,65,1,$117 ,65,10.4475,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,123.574,,63.836,0.577,3.531,$214 ,35,1,$214 ,35,11.6252,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,129.174,,68.242,0.673,1.987,$192 ,65,1,$192 ,65,6.8498,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,168.016,,88.675,0.555,4.800,$303 ,35,1,$303 ,35,6.9723,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,129.371,,56.366,0.265,3.696,$488 ,35,1,$488 ,35,8.7659,,,
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,317.744,,149.441,0.535,2.542,$594 ,125,1,$594 ,125,3.6469,,,
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,97.606,,52.17,0.392,1.375,$249 ,71,1,$249 ,71,10.851,,,
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,980.813,,268.009,0.312,4.671,"$3,144 ",210,2,"$1,572 ",105,2.9838,,,
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,2905.803,,748.583,0.405,6.457,"$16,954 ",410,2,"$8,477 ",205,1.475,,,
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,11359.88,5494.15,5497.22,0.670,27.707,"$7,166 ",450,2,"$3,583 ",225,0.94,,,
resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,937.572,,255.866,0.468,3.750,"$2,004 ",250,2,"$1,002 ",125,3.0985,,,
resnet-50,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,235.061,,63.241,0.501,8.395,$469 ,28,1,$469 ,28,4.7975,,,
resnet-50,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,504.247,,125.407,1.075,18.009,$469 ,28,1,$469 ,28,4.7975,,,
resnet-50,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,595.133,,150.024,1.269,21.255,$469 ,28,1,$469 ,28,4.7975,,,
resnet-50,OV-2022.3-8991,accel,Intel® Flex-170 GPU,10810.92,1005.16,,5.616,72.073,"$1,925 ",150,1,"$1,925 ",150,1.624,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,11.75,,4.24,0.020,0.094,$599 ,125,1,$599 ,125,162.07,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,4.5,,1.45,0.008,0.036,$599 ,125,1,$599 ,125,226.99,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,11.63,,4.24,0.019,0.093,$599 ,125,1,$599 ,125,,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,8.21,,2.7,0.025,0.066,$329 ,125,1,$329 ,125,147.53,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,4.22,,1.36,0.013,0.034,$329 ,125,1,$329 ,125,241.92,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,8,,2.7,0.024,0.064,$329 ,125,1,$329 ,125,,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,6.712,,2.394,0.010,0.041,$658 ,165,1,$658 ,165,175.7493,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,4.228,,1.262,0.006,0.026,$658 ,165,1,$658 ,165,241.7838,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,6.666,,2.393,0.010,0.040,$658 ,165,1,$658 ,165,,,,
ssd-resnet34-1200 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.171,,0.081,0.005,0.018,$34 ,9.5,1,$34 ,9.5,5985.7525,,,
ssd-resnet34-1200 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.31,,0.133,0.005,0.026,$67 ,12,1,$67 ,12,3246.0878,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,0.965,,0.615,0.014,0.080,$67 ,12,1,$67 ,12,1053.0078,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,0.31,,0.133,0.005,0.026,$67 ,12,1,$67 ,12,,,,
ssd-resnet34-1200 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,0.806,,0.23,0.007,0.054,$118 ,15,1,$118 ,15,1240.6212,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,1.582,,0.486,0.013,0.105,$118 ,15,1,$118 ,15,649.3806,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,0.806,,0.231,0.007,0.054,$118 ,15,1,$118 ,15,,,,
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,41.52,,12.672,0.009,0.112,"$4,678 ",370,2,"$2,339 ",185,79.0111,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,1.606,,0.959,0.014,0.025,$117 ,65,1,$117 ,65,644.0626,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,1.932,,1.177,0.009,0.055,$214 ,35,1,$214 ,35,712.3677,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,2.067,,1.248,0.011,0.032,$192 ,65,1,$192 ,65,401.8765,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,2.66,,1.606,0.009,0.076,$303 ,35,1,$303 ,35,434.9877,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,2.046,,1.242,0.004,0.058,$488 ,35,1,$488 ,35,485.4343,,,
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,4.871,,2.935,0.008,0.039,$594 ,125,1,$594 ,125,239.8346,,,
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,1.55,,0.919,0.006,0.022,$249 ,71,1,$249 ,71,665.2714,,,
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,15.706,,4.572,0.005,0.075,"$3,144 ",210,2,"$1,572 ",105,132.0319,,,
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,152.74,144.16,144.02,0.021,0.339,"$7,166 ",450,2,"$3,583 ",225,14.48,,,
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,47.365,,14.722,0.003,0.116,"$16,954 ",410,2,"$8,477 ",205,44.387,,,
ssd-resnet34-1200 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,14.966,,4.35,0.007,0.060,"$2,004 ",250,2,"$1,002 ",125,138.9625,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,3.556,,1.015,0.008,0.127,$469 ,28,1,$469 ,28,284.2379,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,8.239,,2.545,0.018,0.294,$469 ,28,1,$469 ,28,122.4561,,,
ssd-resnet34-1200 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,3.565,,1.01,0.008,0.127,$469 ,28,1,$469 ,28,,,,
ssd-resnet34-1200 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,132.44,18.19,,0.069,0.883,"$1,925 ",150,1,"$1,925 ",150,19.933,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,18.79,,6.86,0.031,0.150,$599 ,125,1,$599 ,125,99.01,,,
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,7.59,,2.3,0.013,0.061,$599 ,125,1,$599 ,125,132.32,,,
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,18.14,,7,0.030,0.145,$599 ,125,1,$599 ,125,,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,12.91,,4.36,0.039,0.103,$329 ,125,1,$329 ,125,95.92,,,
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,7.13,,2.16,0.022,0.057,$329 ,125,1,$329 ,125,140.88,,,
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,16.63,,5.72,0.051,0.133,$329 ,125,1,$329 ,125,,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,10.652,,3.873,0.016,0.065,$658 ,165,1,$658 ,165,111.0757,,,
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,7.059,,2.154,0.011,0.043,$658 ,165,1,$658 ,165,142.0745,,,
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,14.933,,4.935,0.023,0.091,$658 ,165,1,$658 ,165,,,,
unet-camvid--0001 ,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.258,,0.039,0.008,0.027,$34 ,9.5,1,$34 ,9.5,3959.594,,,
unet-camvid--0001 ,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.482,,0.061,0.007,0.040,$67 ,12,1,$67 ,12,2094.2569,,,
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,1.994,,0.989,0.030,0.166,$67 ,12,1,$67 ,12,502.6095,,,
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,2.242,,0.597,0.033,0.187,$67 ,12,1,$67 ,12,,,,
unet-camvid--0001 ,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,1.471,,0.374,0.012,0.098,$118 ,15,1,$118 ,15,678.4977,,,
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,2.715,,0.802,0.023,0.181,$118 ,15,1,$118 ,15,368.8973,,,
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,4.12,,1.142,0.035,0.275,$118 ,15,1,$118 ,15,,,,
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,81.838,,19.314,0.017,0.221,"$4,678 ",370,2,"$2,339 ",185,41.506,,,
unet-camvid--0001 ,OV-2022.3-8991,core ,Intel® Core™ i3-8100 CPU-only,2.482,,1.54,0.021,0.038,$117 ,65,1,$117 ,65,412.1291,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,3.031,,1.9,0.014,0.087,$214 ,35,1,$214 ,35,457.5992,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,3.227,,2.018,0.017,0.050,$192 ,65,1,$192 ,65,256.5479,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,4.155,,2.6,0.014,0.119,$303 ,35,1,$303 ,35,277.7416,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,2.907,,2.004,0.006,0.083,$488 ,35,1,$488 ,35,319.7667,,,
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,7.413,,4.615,0.012,0.059,$594 ,125,1,$594 ,125,157.3622,,,
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,2.386,,1.481,0.010,0.034,$249 ,71,1,$249 ,71,422.1157,,,
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,29.251,,7.301,0.009,0.139,"$3,144 ",210,2,"$1,572 ",105,69.3596,,,
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,381.85,151.97,151.98,0.053,0.849,"$7,166 ",450,2,"$3,583 ",225,7.95,,,
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,93.081,,21.382,0.005,0.227,"$16,954 ",410,2,"$8,477 ",205,22.9476,,,
unet-camvid--0001 ,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,27.814,,6.966,0.014,0.111,"$2,004 ",250,2,"$1,002 ",125,72.9773,,,
unet-camvid--0001 ,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,6.54,,1.677,0.014,0.234,$469 ,28,1,$469 ,28,152.602,,,
unet-camvid--0001 ,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,15.391,,4.571,0.033,0.550,$469 ,28,1,$469 ,28,61.6002,,,
unet-camvid--0001 ,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,17.962,,4.848,0.038,0.642,$469 ,28,1,$469 ,28,,,,
unet-camvid--0001 ,OV-2022.3-8991,accel,Intel® Flex-170 GPU,218.12,35.2,,0.113,1.454,"$1,925 ",150,1,"$1,925 ",150,7.149,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,802.63,,252.57,1.340,6.421,$599 ,125,1,$599 ,125,2.69,,,
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,249.5,,86.81,0.417,1.996,$599 ,125,1,$599 ,125,4.79,,,
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,795.31,,247.17,1.328,6.362,$599 ,125,1,$599 ,125,,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,638.25,,206.62,1.940,5.106,$329 ,125,1,$329 ,125,2.59,,,
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,229.22,,81.49,0.697,1.834,$329 ,125,1,$329 ,125,5.22,,,
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,631.71,,205.81,1.920,5.054,$329 ,125,1,$329 ,125,,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,428.506,,162.077,0.651,2.597,$658 ,165,1,$658 ,165,2.4778,,,
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,245.738,,84.457,0.373,1.489,$658 ,165,1,$658 ,165,3.8792,,,
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,598.947,,195.608,0.910,3.630,$658 ,165,1,$658 ,165,,,,
yolo_v3_tiny,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,12.406,,6.124,0.365,1.306,$34 ,9.5,1,$34 ,9.5,83.8614,,,
yolo_v3_tiny,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,22.94,,10.395,0.342,1.912,$67 ,12,1,$67 ,12,44.6243,,,
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,66.641,,38.178,0.995,5.553,$67 ,12,1,$67 ,12,15.7687,,,
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,86.38,,45.819,1.289,7.198,$67 ,12,1,$67 ,12,,,,
yolo_v3_tiny,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,55.629,,18.246,0.471,3.709,$118 ,15,1,$118 ,15,18.2291,,,
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,106.588,,31.376,0.903,7.106,$118 ,15,1,$118 ,15,10.8727,,,
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,153.471,,46.125,1.301,10.231,$118 ,15,1,$118 ,15,,,,
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,2733.627,,761.534,0.584,7.388,"$4,678 ",370,2,"$2,339 ",185,1.1267,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,114.701,,66.266,0.980,1.765,$117 ,65,1,$117 ,65,8.9295,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,141.001,,79.694,0.659,4.029,$214 ,35,1,$214 ,35,9.9196,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,145.659,,85.158,0.759,2.241,$192 ,65,1,$192 ,65,5.465,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,191.931,,109.625,0.633,5.484,$303 ,35,1,$303 ,35,5.5981,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,147.041,,84.448,0.301,4.201,$488 ,35,1,$488 ,35,7.0171,,,
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,359.61,,173.635,0.605,2.877,$594 ,125,1,$594 ,125,2.9037,,,
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,109.066,,64.87,0.438,1.536,$249 ,71,1,$249 ,71,9.3792,,,
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,1058.322,,337.035,0.337,5.040,"$3,144 ",210,2,"$1,572 ",105,2.4971,,,
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,7344.88,5212.9,5236.28,1.025,16.322,"$7,166 ",450,2,"$3,583 ",225,1.06,,,
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,2931.242,,901.832,0.173,7.149,"$16,954 ",410,2,"$8,477 ",205,1.215,,,
yolo_v3_tiny,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,1015.77,,321.263,0.507,4.063,"$2,004 ",250,2,"$1,002 ",125,2.6076,,,
yolo_v3_tiny,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,258.05,,79.963,0.550,9.216,$469 ,28,1,$469 ,28,4.1833,,,
yolo_v3_tiny,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,492.645,,157.98,1.050,17.594,$469 ,28,1,$469 ,28,2.5788,,,
yolo_v3_tiny,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,606.117,,186.339,1.292,21.647,$469 ,28,1,$469 ,28,,,,
yolo_v3_tiny,OV-2022.3-8991,accel,Intel® Flex-170 GPU,3634.16,1209.67,,1.888,24.228,"$1,925 ",150,1,"$1,925 ",150,1.293,,,
end_rec,,,,,,,,,,,,,,,,,
begin_rec,,,,,,,,,,,,,,,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i9-13900K CPU-only,37.15,,13.03,0.062,0.297,$599 ,125,1,$599 ,125,55.96,,,
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-13900K iGPU-only,12.92,,4.26,0.022,0.103,$599 ,125,1,$599 ,125,78.73,,,
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-13900K CPU+iGPU,37.16,,13.54,0.062,0.297,$599 ,125,1,$599 ,125,,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i5-13600K CPU-only,25.5,,8.36,0.078,0.204,$329 ,125,1,$329 ,125,53.79,,,
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i5-13600K iGPU-only,12.15,,4,0.037,0.097,$329 ,125,1,$329 ,125,83.64,,,
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i5-13600K CPU+iGPU,31.99,,10.82,0.097,0.256,$329 ,125,1,$329 ,125,,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i9-12900K CPU-only,21.833,,7.096,0.033,0.132,$658 ,165,1,$658 ,165,58.4745,,,
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i9-12900K iGPU-only,11.956,,3.869,0.018,0.072,$658 ,165,1,$658 ,165,85.1633,,,
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i9-12900K CPU+iGPU,26.693,,8.644,0.041,0.162,$658 ,165,1,$658 ,165,,,,
yolo_v4,OV-2022.3-8991,atom,Intel® Atom™ x5-E3940 CPU-only,0.522,,0.248,0.015,0.055,$34 ,9.5,1,$34 ,9.5,1900.0218,,,
yolo_v4,OV-2022.3-8991,atom,Intel® Atom™ x6425E CPU-only,0.99,,0.43,0.015,0.083,$67 ,12,1,$67 ,12,1019.82,,,
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Atom™ x6425E iGPU-only,3.413,,1.752,0.051,0.284,$67 ,12,1,$67 ,12,295.7702,,,
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Atom™ x6425E CPU+iGPU,3.999,,2.087,0.060,0.333,$67 ,12,1,$67 ,12,,,,
yolo_v4,OV-2022.3-8991,atom,Intel® Celeron™ 6305E CPU-only,2.453,,0.748,0.021,0.164,$118 ,15,1,$118 ,15,407.2474,,,
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Celeron™ 6305E iGPU-only,4.758,,1.434,0.040,0.317,$118 ,15,1,$118 ,15,212.7987,,,
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Celeron™ 6305E CPU+iGPU,7.048,,2.122,0.060,0.470,$118 ,15,1,$118 ,15,,,,
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6336Y CPU-only,126.954,,35.481,0.027,0.343,"$4,678 ",370,2,"$2,339 ",185,37.8189,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i3-8100 CPU-only,4.971,,2.885,0.042,0.076,$117 ,65,1,$117 ,65,203.4163,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i5-10500TE CPU-only,6.182,,3.532,0.029,0.177,$214 ,35,1,$214 ,35,227.5786,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,6.356,,3.757,0.033,0.098,$192 ,65,1,$192 ,65,123.3181,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,8.44,,4.868,0.028,0.241,$303 ,35,1,$303 ,35,135.9719,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i9-10900TE CPU-only,6.399,,3.765,0.013,0.183,$488 ,35,1,$488 ,35,155.642,,,
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® W1290P CPU-only,15.614,,7.925,0.026,0.125,$594 ,125,1,$594 ,125,71.631,,,
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® E-2124G CPU-only,4.674,,2.804,0.019,0.066,$249 ,71,1,$249 ,71,214.0957,,,
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 5218T CPU-only,47.338,,14.464,0.015,0.225,"$3,144 ",210,2,"$1,572 ",105,45.7699,,,
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6448Y CPU-only,252.03,228.55,228.67,0.035,0.560,"$7,166 ",450,2,"$3,583 ",225,15.01,,,
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Platinum 8270 CPU-only,131.466,,41.001,0.008,0.321,"$16,954 ",410,2,"$8,477 ",205,19.2807,,,
yolo_v4,OV-2022.3-8991,xeon,Intel® Xeon® Silver 4216R CPU-only,45.047,,13.741,0.022,0.180,"$2,004 ",250,2,"$1,002 ",125,48.0344,,,
yolo_v4,OV-2022.3-8991,core,Intel® Core™ i7-1165G7 CPU-only,11.067,,3.259,0.024,0.395,$469 ,28,1,$469 ,28,92.2912,,,
yolo_v4,OV-2022.3-8991,core-iGPU,Intel® Core™ i7-1165G7 iGPU-only,25.048,,7.384,0.053,0.895,$469 ,28,1,$469 ,28,39.1492,,,
yolo_v4,OV-2022.3-8991,core-CPU+iGPU,Intel® Core™ i7-1165G7 CPU+iGPU,29.658,,8.32,0.063,1.059,$469 ,28,1,$469 ,28,,,,
yolo_v4,OV-2022.3-8991,accel,Intel® Flex-170 GPU,454.49,56.78,,0.236,3.03,"$1,925 ",150,1,"$1,925 ",150,6.969,,,
end_rec,,,,,,,,,,,,,,,,,
127 efficientdet-d0 densenet-121 OV-2022.3-8991 core xeon Intel® Core™ i5-10500TE CPU-only Intel® Xeon® W1290P CPU-only 45.331 360.501 26.92 182.543 0.212 0.607 1.295 2.884 $214 $594 35 125 1 $214 $594 35 125 32.8946 3.6046
128 efficientdet-d0 densenet-121 OV-2022.3-8991 core xeon Intel® Core™ i5-8500 CPU-only Intel® Xeon® E-2124G CPU-only 44.921 114.844 32.357 67.188 0.234 0.461 0.691 1.618 $192 $249 65 71 1 $192 $249 65 71 19.7048 9.7609
129 efficientdet-d0 densenet-121 OV-2022.3-8991 core xeon Intel® Core™ i7-8700T CPU-only Intel® Xeon® Gold 5218T CPU-only 62.749 1116.372 37.807 295.952 0.207 0.355 1.793 5.316 $303 $3,144 35 210 1 2 $303 $1,572 35 105 19.901 3.9606
130 efficientdet-d0 densenet-121 OV-2022.3-8991 core xeon Intel® Core™ i9-10900TE CPU-only Intel® Xeon® Gold 6448Y CPU-only 50.35 8279.14 4856.54 29.935 4862.51 0.103 1.155 1.439 18.398 $488 $7,166 35 450 1 2 $488 $3,583 35 225 24.2916 2.39
131 efficientdet-d0 densenet-121 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only Intel® Xeon® Platinum 8270 CPU-only 94.981 3155.106 36.434 815.725 0.16 0.186 0.76 7.695 $594 $16,954 125 410 1 2 $594 $8,477 125 205 12.658 2.8831
132 efficientdet-d0 densenet-121 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only Intel® Xeon® Silver 4216R CPU-only 35.831 1064.824 27.306 283.423 0.144 0.531 0.505 4.259 $249 $2,004 71 250 1 2 $249 $1,002 71 125 30.9469 4.0689
133 efficientdet-d0 densenet-121 OV-2022.3-8991 xeon core Intel® Xeon® Gold 5218T CPU-only Intel® Core™ i7-1165G7 CPU-only 239.06 265.167 161.224 74.501 0.076 0.565 1.138 9.470 $3,144 $469 210 28 2 1 $1,572 $469 105 28 13.9735 4.7413
134 efficientdet-d0 densenet-121 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Gold 6448Y CPU-only Intel® Core™ i7-1165G7 iGPU-only 875.53 391.185 560.48 123.519 0.122 0.834 1.946 13.971 $7,166 $469 450 28 2 1 $3,583 $469 225 28 5.07 6.5259
135 efficientdet-d0 densenet-121 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Platinum 8270 CPU-only Intel® Core™ i7-1165G7 CPU+iGPU 471.02 526.12 300.291 150.35 0.028 1.122 1.149 18.790 $16,954 $469 410 28 2 1 $8,477 $469 205 28 9.3866
136 efficientdet-d0 densenet-121 OV-2022.3-8991 xeon accel Intel® Xeon® Silver 4216R CPU-only Intel® Flex-170 GPU 231.873 3440.18 1178.68 156.285 0.116 1.787 0.927 22.935 $2,004 $1,925 250 150 2 1 $1,002 $1,925 125 150 14.1605 3.302
137 efficientdet-d0 end_rec OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 71.482 41.123 0.152 2.553 $469 28 1 $469 28 16.6952
138 efficientdet-d0 begin_rec OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 92.52 50.538 0.197 3.304 $469 28 1 $469 28 17.295
139 efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU core Intel® Core™ i7-1165G7 CPU+iGPU Intel® Core™ i9-13900K CPU-only 107.688 209.26 56.901 106.11 0.23 0.349 3.846 1.674 $469 $599 28 125 1 $469 $599 28 125 10.36
140 efficientdet-d0 OV-2022.3-8991 accel core-iGPU Intel® Flex-170 GPU Intel® Core™ i9-13900K iGPU-only 463.67 82.04 295.13 47.85 0.241 0.137 3.091 0.656 $1,925 $599 150 125 1 $1,925 $599 150 125 5.603 22.35
141 end_rec efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 197.85 108.3 0.330 1.583 $599 125 1 $599 125
142 begin_rec efficientdet-d0 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 155.65 90.91 0.473 1.245 $329 125 1 $329 125 9.92
143 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core core-iGPU Intel® Core™ i9-12900K CPU-only Intel® Core™ i5-13600K iGPU-only 12.921 77.28 4.016 44.91 0.02 0.235 0.078 0.618 $658 $329 165 125 1 $658 $329 165 125 89.8929 22.93
144 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core-iGPU core-CPU+iGPU Intel® Core™ i9-12900K iGPU-only Intel® Core™ i5-13600K CPU+iGPU 6.802 172.54 1.82 95.94 0.01 0.524 0.041 1.380 $658 $329 165 125 1 $658 $329 165 125 149.7396
145 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU core Intel® Core™ i9-12900K CPU+iGPU Intel® Core™ i9-12900K CPU-only 15.679 112.297 4.499 64.06 0.024 0.171 0.095 0.681 $658 165 1 $658 165 11.8265
146 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 atom core-iGPU Intel® Atom™ x5-E3940 CPU-only Intel® Core™ i9-12900K iGPU-only 0.32 73.766 0.131 38.742 0.009 0.112 0.034 0.447 $34 $658 9.5 165 1 $34 $658 9.5 165 3206.1652 21.403
147 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 atom core-CPU+iGPU Intel® Atom™ x6425E CPU-only Intel® Core™ i9-12900K CPU+iGPU 0.592 128.735 0.242 76.62 0.009 0.196 0.049 0.780 $67 $658 12 165 1 $67 $658 12 165 1727.27
148 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core-iGPU atom Intel® Atom™ x6425E iGPU-only Intel® Atom™ x5-E3940 CPU-only 1.301 3.812 0.728 2.565 0.019 0.112 0.108 0.401 $67 $34 12 9.5 1 $67 $34 12 9.5 776.0692 274.5947
149 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU atom Intel® Atom™ x6425E CPU+iGPU Intel® Atom™ x6425E CPU-only 1.683 7.248 0.865 5.17 0.025 0.108 0.14 0.604 $67 12 1 $67 12 143.7999
150 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 atom core-iGPU Intel® Celeron™ 6305E CPU-only Intel® Atom™ x6425E iGPU-only 1.563 22.697 0.417 15.635 0.013 0.339 0.104 1.891 $118 $67 15 12 1 $118 $67 15 12 640.0005 59.0651
151 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core-iGPU core-CPU+iGPU Intel® Celeron™ 6305E iGPU-only Intel® Atom™ x6425E CPU+iGPU 2.616 26.855 0.725 17.296 0.022 0.401 0.174 2.238 $118 $67 15 12 1 $118 $67 15 12 389.3563
152 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU atom Intel® Celeron™ 6305E CPU+iGPU Intel® Celeron™ 6305E CPU-only 4.056 15.949 1.107 10.417 0.034 0.135 0.27 1.063 $118 15 1 $118 15 62.2765
153 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Gold 6336Y CPU-only Intel® Celeron™ 6305E iGPU-only 74.93 25.936 19.965 14.073 0.016 0.220 0.203 1.729 $4,678 $118 370 15 2 1 $2,339 $118 185 15 65.5753 54.0166
154 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i3-8100 CPU-only Intel® Celeron™ 6305E CPU+iGPU 2.988 32.767 1.473 20.733 0.026 0.278 0.046 2.184 $117 $118 65 15 1 $117 $118 65 15 340.7313
155 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core xeon Intel® Core™ i5-10500TE CPU-only Intel® Xeon® Gold 6336Y CPU-only 3.633 424.388 1.926 256.503 0.017 0.091 0.104 1.147 $214 $4,678 35 370 1 2 $214 $2,339 35 185 430.1967
156 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core core Intel® Core™ i5-8500 CPU-only Intel® Core™ i3-8100 CPU-only 3.852 36.666 1.982 24.041 0.02 0.313 0.059 0.564 $192 $117 65 1 $192 $117 65 241.5513 30.2521
157 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core Intel® Core™ i7-8700T CPU-only Intel® Core™ i5-10500TE CPU-only 4.999 45.331 2.648 26.92 0.016 0.212 0.143 1.295 $303 $214 35 1 $303 $214 35 260.2284 32.8946
158 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core Intel® Core™ i9-10900TE CPU-only Intel® Core™ i5-8500 CPU-only 3.71 44.921 2.005 32.357 0.008 0.234 0.106 0.691 $488 $192 35 65 1 $488 $192 35 65 280.1493 19.7048
159 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 xeon core Intel® Xeon® W1290P CPU-only Intel® Core™ i7-8700T CPU-only 8.977 62.749 4.542 37.807 0.015 0.207 0.072 1.793 $594 $303 125 35 1 $594 $303 125 35 137.1747 19.901
160 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 xeon core Intel® Xeon® E-2124G CPU-only Intel® Core™ i9-10900TE CPU-only 2.867 50.35 1.464 29.935 0.012 0.103 0.04 1.439 $249 $488 71 35 1 $249 $488 71 35 353.2042 24.2916
161 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only Intel® Xeon® W1290P CPU-only 29.332 94.981 8.19 36.434 0.009 0.160 0.14 0.760 $3,144 $594 210 125 2 1 $1,572 $594 105 125 78.1722 12.658
162 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only Intel® Xeon® E-2124G CPU-only 282.45 35.831 32.43 27.306 0.003 0.144 0.044 0.505 $7,166 $249 450 71 2 1 $3,583 $249 225 71 12.03 30.9469
163 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only Intel® Xeon® Gold 5218T CPU-only 85.213 239.06 22.066 161.224 0.005 0.076 0.208 1.138 $16,954 $3,144 410 210 2 $8,477 $1,572 205 105 30.4317 13.9735
164 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only Intel® Xeon® Gold 6448Y CPU-only 27.847 875.53 495.04 7.786 492.93 0.014 0.122 0.111 1.946 $2,004 $7,166 250 450 2 $1,002 $3,583 125 225 78.6604 5.07
165 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core xeon Intel® Core™ i7-1165G7 CPU-only Intel® Xeon® Platinum 8270 CPU-only 7.027 471.02 1.855 300.291 0.015 0.028 0.251 1.149 $469 $16,954 28 410 1 2 $469 $8,477 28 205 151.8783 9.3866
166 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core-iGPU xeon Intel® Core™ i7-1165G7 iGPU-only Intel® Xeon® Silver 4216R CPU-only 13.823 231.873 3.545 156.285 0.029 0.116 0.494 0.927 $469 $2,004 28 250 1 2 $469 $1,002 28 125 70.7933 14.1605
167 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU core Intel® Core™ i7-1165G7 CPU+iGPU Intel® Core™ i7-1165G7 CPU-only 16.898 71.482 4.191 41.123 0.036 0.152 0.604 2.553 $469 28 1 $469 28 16.6952
168 faster_rcnn_resnet50_coco efficientdet-d0 OV-2022.3-8991 accel core-iGPU Intel® Flex-170 GPU Intel® Core™ i7-1165G7 iGPU-only 216.3 92.52 23.42 50.538 0.112 0.197 1.442 3.304 $1,925 $469 150 28 1 $1,925 $469 150 28 9.137 17.295
169 end_rec efficientdet-d0 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 107.688 56.901 0.230 3.846 $469 28 1 $469 28
170 begin_rec efficientdet-d0 OV-2022.3-8991 accel Intel® Flex-170 GPU 463.67 295.13 0.241 3.091 $1,925 150 1 $1,925 150 5.603
171 Inception-V4 end_rec OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 121.813 39.391 0.185 0.738 $658 165 1 $658 165 11.0425
172 Inception-V4 begin_rec OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 71.229 17.755 0.108 0.432 $658 165 1 $658 165 19.7132
173 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core-CPU+iGPU core Intel® Core™ i9-12900K CPU+iGPU Intel® Core™ i9-13900K CPU-only 175.049 5.94 44.894 2.41 0.266 0.010 1.061 0.048 $658 $599 165 125 1 $658 $599 165 125 270.57
174 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 atom core-iGPU Intel® Atom™ x5-E3940 CPU-only Intel® Core™ i9-13900K iGPU-only 3.12 2.3 1.344 0.71 0.092 0.004 0.328 0.018 $34 $599 9.5 125 1 $34 $599 9.5 125 335.3712 437.94
175 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 atom core-CPU+iGPU Intel® Atom™ x6425E CPU-only Intel® Core™ i9-13900K CPU+iGPU 5.677 6.45 2.364 2.25 0.085 0.011 0.473 0.052 $67 $599 12 125 1 $67 $599 12 125 181.8897
176 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core-iGPU core Intel® Atom™ x6425E iGPU-only Intel® Core™ i5-13600K CPU-only 17.009 4.55 8.302 1.88 0.254 0.014 1.417 0.036 $67 $329 12 125 1 $67 $329 12 125 78.0548 310.58
177 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core-CPU+iGPU core-iGPU Intel® Atom™ x6425E CPU+iGPU Intel® Core™ i5-13600K iGPU-only 21.713 2.17 10.37 0.67 0.324 0.007 1.809 0.017 $67 $329 12 125 1 $67 $329 12 125 465.03
178 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 atom core-CPU+iGPU Intel® Celeron™ 6305E CPU-only Intel® Core™ i5-13600K CPU+iGPU 15.576 5.3 4.073 2.01 0.132 0.016 1.038 0.042 $118 $329 15 125 1 $118 $329 15 125 65.7272
179 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core-iGPU core Intel® Celeron™ 6305E iGPU-only Intel® Core™ i9-12900K CPU-only 28.105 12.921 6.681 4.016 0.238 0.020 1.874 0.078 $118 $658 15 165 1 $118 $658 15 165 46.2616 89.8929
180 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core-CPU+iGPU core-iGPU Intel® Celeron™ 6305E CPU+iGPU Intel® Core™ i9-12900K iGPU-only 41.918 6.802 10.163 1.82 0.355 0.010 2.795 0.041 $118 $658 15 165 1 $118 $658 15 165 149.7396
181 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Gold 6336Y CPU-only Intel® Core™ i9-12900K CPU+iGPU 881.403 15.679 205.296 4.499 0.188 0.024 2.382 0.095 $4,678 $658 370 165 2 1 $2,339 $658 185 165 4.7029
182 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core atom Intel® Core™ i3-8100 CPU-only Intel® Atom™ x5-E3940 CPU-only 30.004 0.32 15.51 0.131 0.256 0.009 0.462 0.034 $117 $34 65 9.5 1 $117 $34 65 9.5 35.0513 3206.1652
183 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core atom Intel® Core™ i5-10500TE CPU-only Intel® Atom™ x6425E CPU-only 35.882 0.592 19.222 0.242 0.168 0.009 1.025 0.049 $214 $67 35 12 1 $214 $67 35 12 37.6472 1727.27
184 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core core-iGPU Intel® Core™ i5-8500 CPU-only Intel® Atom™ x6425E iGPU-only 37.987 1.301 19.998 0.728 0.198 0.019 0.584 0.108 $192 $67 65 12 1 $192 $67 65 12 21.5144 776.0692
185 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i7-8700T CPU-only Intel® Atom™ x6425E CPU+iGPU 48.903 1.683 26.356 0.865 0.161 0.025 1.397 0.140 $303 $67 35 12 1 $303 $67 35 12 22.2402
186 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core atom Intel® Core™ i9-10900TE CPU-only Intel® Celeron™ 6305E CPU-only 37.301 1.563 19.475 0.417 0.076 0.013 1.066 0.104 $488 $118 35 15 1 $488 $118 35 15 28.572 640.0005
187 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 xeon core-iGPU Intel® Xeon® W1290P CPU-only Intel® Celeron™ 6305E iGPU-only 92.646 2.616 44.966 0.725 0.156 0.022 0.741 0.174 $594 $118 125 15 1 $594 $118 125 15 12.3153 389.3563
188 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® E-2124G CPU-only Intel® Celeron™ 6305E CPU+iGPU 28.537 4.056 15.13 1.107 0.115 0.034 0.402 0.270 $249 $118 71 15 1 $249 $118 71 15 36.8888
189 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only Intel® Xeon® Gold 6336Y CPU-only 301.215 74.93 77.005 19.965 0.096 0.016 1.434 0.203 $3,144 $4,678 210 370 2 $1,572 $2,339 105 185 10.5711 65.5753
190 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 xeon core Intel® Xeon® Gold 6448Y CPU-only Intel® Core™ i3-8100 CPU-only 3406.7 2.988 331.56 1.473 0.475 0.026 7.57 0.046 $7,166 $117 450 65 2 1 $3,583 $117 225 65 3.23 340.7313
191 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 xeon core Intel® Xeon® Platinum 8270 CPU-only Intel® Core™ i5-10500TE CPU-only 937.139 3.633 225.776 1.926 0.055 0.017 2.286 0.104 $16,954 $214 410 35 2 1 $8,477 $214 205 35 5.6984 430.1967
192 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 xeon core Intel® Xeon® Silver 4216R CPU-only Intel® Core™ i5-8500 CPU-only 287.767 3.852 73.617 1.982 0.144 0.020 1.151 0.059 $2,004 $192 250 65 2 1 $1,002 $192 125 65 11.1114 241.5513
193 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only Intel® Core™ i7-8700T CPU-only 71.295 4.999 18.482 2.648 0.152 0.016 2.546 0.143 $469 $303 28 35 1 $469 $303 28 35 15.8294 260.2284
194 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core-iGPU core Intel® Core™ i7-1165G7 iGPU-only Intel® Core™ i9-10900TE CPU-only 158.282 3.71 36.884 2.005 0.337 0.008 5.653 0.106 $469 $488 28 35 1 $469 $488 28 35 10.6245 280.1493
195 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 core-CPU+iGPU xeon Intel® Core™ i7-1165G7 CPU+iGPU Intel® Xeon® W1290P CPU-only 182.132 8.977 44.198 4.542 0.388 0.015 6.505 0.072 $469 $594 28 125 1 $469 $594 28 125 137.1747
196 Inception-V4 faster_rcnn_resnet50_coco OV-2022.3-8991 accel xeon Intel® Flex-170 GPU Intel® Xeon® E-2124G CPU-only 2986.91 2.867 298.6 1.464 1.552 0.012 19.913 0.040 $1,925 $249 150 71 1 $1,925 $249 150 71 3.968 353.2042
197 end_rec faster_rcnn_resnet50_coco OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 29.332 8.19 0.009 0.140 $3,144 210 2 $1,572 105 78.1722
198 begin_rec faster_rcnn_resnet50_coco OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 19.71 18.01 18.15 0.003 0.044 $7,166 450 2 $3,583 225 129.2
199 mobilenet-ssd faster_rcnn_resnet50_coco OV-2022.3-8991 core xeon Intel® Core™ i9-12900K CPU-only Intel® Xeon® Platinum 8270 CPU-only 1054.462 85.213 346.546 22.066 1.603 0.005 6.391 0.208 $658 $16,954 165 410 1 2 $658 $8,477 165 205 1.4898 30.4317
200 mobilenet-ssd faster_rcnn_resnet50_coco OV-2022.3-8991 core-iGPU xeon Intel® Core™ i9-12900K iGPU-only Intel® Xeon® Silver 4216R CPU-only 493.088 27.847 145.503 7.786 0.749 0.014 2.988 0.111 $658 $2,004 165 250 1 2 $658 $1,002 165 125 2.472 78.6604
201 mobilenet-ssd faster_rcnn_resnet50_coco OV-2022.3-8991 core-CPU+iGPU core Intel® Core™ i9-12900K CPU+iGPU Intel® Core™ i7-1165G7 CPU-only 1056.241 7.027 361.472 1.855 1.605 0.015 6.401 0.251 $658 $469 165 28 1 $658 $469 165 28 151.8783
202 mobilenet-ssd faster_rcnn_resnet50_coco OV-2022.3-8991 atom core-iGPU Intel® Atom™ x5-E3940 CPU-only Intel® Core™ i7-1165G7 iGPU-only 28.032 13.823 12.691 3.545 0.824 0.029 2.951 0.494 $34 $469 9.5 28 1 $34 $469 9.5 28 38.1991 70.7933
203 mobilenet-ssd faster_rcnn_resnet50_coco OV-2022.3-8991 atom core-CPU+iGPU Intel® Atom™ x6425E CPU-only Intel® Core™ i7-1165G7 CPU+iGPU 50.352 16.898 23.675 4.191 0.752 0.036 4.196 0.604 $67 $469 12 28 1 $67 $469 12 28 20.8993
204 mobilenet-ssd faster_rcnn_resnet50_coco OV-2022.3-8991 core-iGPU accel Intel® Atom™ x6425E iGPU-only Intel® Flex-170 GPU 100.377 216.3 23.42 60.647 1.498 0.112 8.365 1.442 $67 $1,925 12 150 1 $67 $1,925 12 150 11.6812 9.137
205 mobilenet-ssd end_rec OV-2022.3-8991 core-CPU+iGPU Intel® Atom™ x6425E CPU+iGPU 148.458 79.575 2.216 12.372 $67 12 1 $67 12
206 mobilenet-ssd begin_rec OV-2022.3-8991 atom Intel® Celeron™ 6305E CPU-only 123.806 38.981 1.049 8.254 $118 15 1 $118 15 8.4121
207 mobilenet-ssd Inception-V4 OV-2022.3-8991 core-iGPU core Intel® Celeron™ 6305E iGPU-only Intel® Core™ i9-13900K CPU-only 168.473 219.06 53.272 71.15 1.428 0.366 11.232 1.752 $118 $599 15 125 1 $118 $599 15 125 7.8961 10.19
208 mobilenet-ssd Inception-V4 OV-2022.3-8991 core-CPU+iGPU core-iGPU Intel® Celeron™ 6305E CPU+iGPU Intel® Core™ i9-13900K iGPU-only 238.511 65.91 83.721 18.1 2.021 0.110 15.901 0.527 $118 $599 15 125 1 $118 $599 15 125 16.55
209 mobilenet-ssd Inception-V4 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Gold 6336Y CPU-only Intel® Core™ i9-13900K CPU+iGPU 6730.68 279.58 1634.937 78.65 1.439 0.467 18.191 2.237 $4,678 $599 370 125 2 1 $2,339 $599 185 125 0.7265
210 mobilenet-ssd Inception-V4 OV-2022.3-8991 core core Intel® Core™ i3-8100 CPU-only Intel® Core™ i5-13600K CPU-only 244.598 171.19 140.009 45.8 2.091 0.520 3.763 1.370 $117 $329 65 125 1 $117 $329 65 125 4.4481 9.14
211 mobilenet-ssd Inception-V4 OV-2022.3-8991 core core-iGPU Intel® Core™ i5-10500TE CPU-only Intel® Core™ i5-13600K iGPU-only 303.938 62.45 173.562 17.02 1.42 0.190 8.684 0.500 $214 $329 35 125 1 $214 $329 35 125 4.7946 17.48
212 mobilenet-ssd Inception-V4 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i5-8500 CPU-only Intel® Core™ i5-13600K CPU+iGPU 304.802 219.02 185.045 52.56 1.588 0.666 4.689 1.752 $192 $329 65 125 1 $192 $329 65 125 2.8007
213 mobilenet-ssd Inception-V4 OV-2022.3-8991 core Intel® Core™ i7-8700T CPU-only Intel® Core™ i9-12900K CPU-only 412.261 121.813 241.37 39.391 1.361 0.185 11.779 0.738 $303 $658 35 165 1 $303 $658 35 165 2.8749 11.0425
214 mobilenet-ssd Inception-V4 OV-2022.3-8991 core core-iGPU Intel® Core™ i9-10900TE CPU-only Intel® Core™ i9-12900K iGPU-only 315.107 71.229 152.342 17.755 0.646 0.108 9.003 0.432 $488 $658 35 165 1 $488 $658 35 165 3.5896 19.7132
215 mobilenet-ssd Inception-V4 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® W1290P CPU-only Intel® Core™ i9-12900K CPU+iGPU 774.346 175.049 345.309 44.894 1.304 0.266 6.195 1.061 $594 $658 125 165 1 $594 $658 125 165 1.5452
216 mobilenet-ssd Inception-V4 OV-2022.3-8991 xeon atom Intel® Xeon® E-2124G CPU-only Intel® Atom™ x5-E3940 CPU-only 233.43 3.12 147.098 1.344 0.937 0.092 3.288 0.328 $249 $34 71 9.5 1 $249 $34 71 9.5 4.5879 335.3712
217 mobilenet-ssd Inception-V4 OV-2022.3-8991 xeon atom Intel® Xeon® Gold 5218T CPU-only Intel® Atom™ x6425E CPU-only 2331.207 5.677 691.743 2.364 0.741 0.085 11.101 0.473 $3,144 $67 210 12 2 1 $1,572 $67 105 12 1.4852 181.8897
218 mobilenet-ssd Inception-V4 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Gold 6448Y CPU-only Intel® Atom™ x6425E iGPU-only 16445.75 17.009 2736.2 8.302 2.295 0.254 36.546 1.417 $7,166 $67 450 12 2 1 $3,583 $67 225 12 0.65 78.0548
219 mobilenet-ssd Inception-V4 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Platinum 8270 CPU-only Intel® Atom™ x6425E CPU+iGPU 6691.915 21.713 1796.357 10.37 0.395 0.324 16.322 1.809 $16,954 $67 410 12 2 1 $8,477 $67 205 12 1.0518
220 mobilenet-ssd Inception-V4 OV-2022.3-8991 xeon atom Intel® Xeon® Silver 4216R CPU-only Intel® Celeron™ 6305E CPU-only 2225.935 15.576 667.692 4.073 1.111 0.132 8.904 1.038 $2,004 $118 250 15 2 1 $1,002 $118 125 15 1.5444 65.7272
221 mobilenet-ssd Inception-V4 OV-2022.3-8991 core core-iGPU Intel® Core™ i7-1165G7 CPU-only Intel® Celeron™ 6305E iGPU-only 579.307 28.105 166.959 6.681 1.235 0.238 20.69 1.874 $469 $118 28 15 1 $469 $118 28 15 2.0215 46.2616
222 mobilenet-ssd Inception-V4 OV-2022.3-8991 core-iGPU core-CPU+iGPU Intel® Core™ i7-1165G7 iGPU-only Intel® Celeron™ 6305E CPU+iGPU 582.636 41.918 243.945 10.163 1.242 0.355 20.808 2.795 $469 $118 28 15 1 $469 $118 28 15 2.548
223 mobilenet-ssd Inception-V4 OV-2022.3-8991 core-CPU+iGPU xeon Intel® Core™ i7-1165G7 CPU+iGPU Intel® Xeon® Gold 6336Y CPU-only 744.231 881.403 292.071 205.296 1.587 0.188 26.58 2.382 $469 $4,678 28 370 1 2 $469 $2,339 28 185 4.7029
224 mobilenet-ssd Inception-V4 OV-2022.3-8991 accel core Intel® Flex-170 GPU Intel® Core™ i3-8100 CPU-only 3548.98 30.004 1412.68 15.51 1.844 0.256 23.66 0.462 $1,925 $117 150 65 1 $1,925 $117 150 65 1.344 35.0513
225 end_rec Inception-V4 OV-2022.3-8991 core Intel® Core™ i5-10500TE CPU-only 35.882 19.222 0.168 1.025 $214 35 1 $214 35 37.6472
226 begin_rec Inception-V4 OV-2022.3-8991 core Intel® Core™ i5-8500 CPU-only 37.987 19.998 0.198 0.584 $192 65 1 $192 65 21.5144
227 mobilenet-v2 Inception-V4 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only Intel® Core™ i7-8700T CPU-only 2446.221 48.903 1003.129 26.356 3.718 0.161 14.826 1.397 $658 $303 165 35 1 $658 $303 165 35 0.7182 22.2402
228 mobilenet-v2 Inception-V4 OV-2022.3-8991 core-iGPU core Intel® Core™ i9-12900K iGPU-only Intel® Core™ i9-10900TE CPU-only 1265.969 37.301 389.894 19.475 1.924 0.076 7.673 1.066 $658 $488 165 35 1 $658 $488 165 35 1.3894 28.572
229 mobilenet-v2 Inception-V4 OV-2022.3-8991 core-CPU+iGPU xeon Intel® Core™ i9-12900K CPU+iGPU Intel® Xeon® W1290P CPU-only 2680.458 92.646 1013.049 44.966 4.074 0.156 16.245 0.741 $658 $594 165 125 1 $658 $594 165 125 12.3153
230 mobilenet-v2 Inception-V4 OV-2022.3-8991 atom xeon Intel® Atom™ x5-E3940 CPU-only Intel® Xeon® E-2124G CPU-only 81.572 28.537 45.013 15.13 2.399 0.115 8.587 0.402 $34 $249 9.5 71 1 $34 $249 9.5 71 13.4692 36.8888
231 mobilenet-v2 Inception-V4 OV-2022.3-8991 atom xeon Intel® Atom™ x6425E CPU-only Intel® Xeon® Gold 5218T CPU-only 143.134 301.215 81.991 77.005 2.136 0.096 11.928 1.434 $67 $3,144 12 210 1 2 $67 $1,572 12 105 7.609 10.5711
232 mobilenet-v2 Inception-V4 OV-2022.3-8991 core-iGPU xeon Intel® Atom™ x6425E iGPU-only Intel® Xeon® Gold 6448Y CPU-only 3406.7 1879.09 164.945 1867.99 0 0.475 0 7.570 $67 $7,166 12 450 1 2 $67 $3,583 12 225 7.0306 3.23
233 mobilenet-v2 Inception-V4 OV-2022.3-8991 core-CPU+iGPU xeon Intel® Atom™ x6425E CPU+iGPU Intel® Xeon® Platinum 8270 CPU-only 227.898 937.139 202.181 225.776 3.401 0.055 18.992 2.286 $67 $16,954 12 410 1 2 $67 $8,477 12 205 5.6984
234 mobilenet-v2 Inception-V4 OV-2022.3-8991 atom xeon Intel® Celeron™ 6305E CPU-only Intel® Xeon® Silver 4216R CPU-only 316.763 287.767 124.654 73.617 2.684 0.144 21.118 1.151 $118 $2,004 15 250 1 2 $118 $1,002 15 125 3.391 11.1114
235 mobilenet-v2 Inception-V4 OV-2022.3-8991 core-iGPU core Intel® Celeron™ 6305E iGPU-only Intel® Core™ i7-1165G7 CPU-only 525.084 71.295 141.61 18.482 4.45 0.152 35.006 2.546 $118 $469 15 28 1 $118 $469 15 28 4.9197 15.8294
236 mobilenet-v2 Inception-V4 OV-2022.3-8991 core-CPU+iGPU core-iGPU Intel® Celeron™ 6305E CPU+iGPU Intel® Core™ i7-1165G7 iGPU-only 807.397 158.282 255.964 36.884 6.842 0.337 53.826 5.653 $118 $469 15 28 1 $118 $469 15 28 10.6245
237 mobilenet-v2 Inception-V4 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Gold 6336Y CPU-only Intel® Core™ i7-1165G7 CPU+iGPU 14679.84 182.132 4065.139 44.198 3.138 0.388 39.675 6.505 $4,678 $469 370 28 2 1 $2,339 $469 185 28 0.4828
238 mobilenet-v2 Inception-V4 OV-2022.3-8991 core accel Intel® Core™ i3-8100 CPU-only Intel® Flex-170 GPU 619.507 2986.91 298.6 452.366 5.295 1.552 9.531 19.913 $117 $1,925 65 150 1 $117 $1,925 65 150 1.8067 3.968
239 mobilenet-v2 end_rec OV-2022.3-8991 core Intel® Core™ i5-10500TE CPU-only 806.314 441.904 3.768 23.038 $214 35 1 $214 35 2.1078
240 mobilenet-v2 begin_rec OV-2022.3-8991 core Intel® Core™ i5-8500 CPU-only 766.072 558.975 3.99 11.786 $192 65 1 $192 65 1.2307
241 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 core Intel® Core™ i7-8700T CPU-only Intel® Core™ i9-13900K CPU-only 1081.253 1754.44 664.108 664.82 3.568 2.929 30.893 14.036 $303 $599 35 125 1 $303 $599 35 125 1.2788 1.4
242 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 core core-iGPU Intel® Core™ i9-10900TE CPU-only Intel® Core™ i9-13900K iGPU-only 825.071 528.03 413.091 168.57 1.691 0.882 23.573 4.224 $488 $599 35 125 1 $488 $599 35 125 1.6818 2.35
243 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® W1290P CPU-only Intel® Core™ i9-13900K CPU+iGPU 2067.162 1568.83 868.25 665.79 3.48 2.619 16.537 12.551 $594 $599 125 1 $594 $599 125 0.7363
244 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 xeon core Intel® Xeon® E-2124G CPU-only Intel® Core™ i5-13600K CPU-only 594.283 1240.01 479.567 437.11 2.387 3.769 8.37 9.920 $249 $329 71 125 1 $249 $329 71 125 1.8531 1.47
245 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Gold 5218T CPU-only Intel® Core™ i5-13600K iGPU-only 5882.455 493.18 1895.498 157.94 1.871 1.499 28.012 3.945 $3,144 $329 210 125 2 1 $1,572 $329 105 125 1.3871 2.43
246 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Gold 6448Y CPU-only Intel® Core™ i5-13600K CPU+iGPU 28383.76 1063.27 7254.28 454.21 3.961 3.232 63.075 8.506 $7,166 $329 450 125 2 1 $3,583 $329 225 125 0.55
247 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 xeon core Intel® Xeon® Platinum 8270 CPU-only Intel® Core™ i9-12900K CPU-only 15616.083 1054.462 4308.927 346.546 0.921 1.603 38.088 6.391 $16,954 $658 410 165 2 1 $8,477 $658 205 165 0.8685 1.4898
248 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Silver 4216R CPU-only Intel® Core™ i9-12900K iGPU-only 5616.283 493.088 1835.686 145.503 2.803 0.749 22.465 2.988 $2,004 $658 250 165 2 1 $1,002 $658 125 165 1.404 2.472
249 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i7-1165G7 CPU-only Intel® Core™ i9-12900K CPU+iGPU 1463.21 1056.241 538.597 361.472 3.12 1.605 52.258 6.401 $469 $658 28 165 1 $469 $658 28 165 0.8864
250 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 core-iGPU atom Intel® Core™ i7-1165G7 iGPU-only Intel® Atom™ x5-E3940 CPU-only 2076.015 28.032 544.641 12.691 4.426 0.824 74.143 2.951 $469 $34 28 9.5 1 $469 $34 28 9.5 1.7212 38.1991
251 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 core-CPU+iGPU atom Intel® Core™ i7-1165G7 CPU+iGPU Intel® Atom™ x6425E CPU-only 2677.374 50.352 698.942 23.675 5.709 0.752 95.621 4.196 $469 $67 28 12 1 $469 $67 28 12 20.8993
252 mobilenet-v2 mobilenet-ssd OV-2022.3-8991 accel core-iGPU Intel® Flex-170 GPU Intel® Atom™ x6425E iGPU-only 18371.95 100.377 4738.33 60.647 9.544 1.498 122.48 8.365 $1,925 $67 150 12 1 $1,925 $67 150 12 1.15 11.6812
253 end_rec mobilenet-ssd OV-2022.3-8991 core-CPU+iGPU Intel® Atom™ x6425E CPU+iGPU 148.458 79.575 2.216 12.372 $67 12 1 $67 12
254 begin_rec mobilenet-ssd OV-2022.3-8991 atom Intel® Celeron™ 6305E CPU-only 123.806 38.981 1.049 8.254 $118 15 1 $118 15 8.4121
255 resnet-18 mobilenet-ssd OV-2022.3-8991 core core-iGPU Intel® Core™ i9-12900K CPU-only Intel® Celeron™ 6305E iGPU-only 804.771 168.473 212.574 53.272 1.223 1.428 4.877 11.232 $658 $118 165 15 1 $658 $118 165 15 1.3886 7.8961
256 resnet-18 mobilenet-ssd OV-2022.3-8991 core-iGPU core-CPU+iGPU Intel® Core™ i9-12900K iGPU-only Intel® Celeron™ 6305E CPU+iGPU 491.337 238.511 146.839 83.721 0.747 2.021 2.978 15.901 $658 $118 165 15 1 $658 $118 165 15 2.2655
257 resnet-18 mobilenet-ssd OV-2022.3-8991 core-CPU+iGPU xeon Intel® Core™ i9-12900K CPU+iGPU Intel® Xeon® Gold 6336Y CPU-only 1180.984 6730.68 365.777 1634.937 1.795 1.439 7.157 18.191 $658 $4,678 165 370 1 2 $658 $2,339 165 185 0.7265
258 resnet-18 mobilenet-ssd OV-2022.3-8991 atom core Intel® Atom™ x5-E3940 CPU-only Intel® Core™ i3-8100 CPU-only 22.96 244.598 9.564 140.009 0.675 2.091 2.417 3.763 $34 $117 9.5 65 1 $34 $117 9.5 65 44.5491 4.4481
259 resnet-18 mobilenet-ssd OV-2022.3-8991 atom core Intel® Atom™ x6425E CPU-only Intel® Core™ i5-10500TE CPU-only 40.944 303.938 16.158 173.562 0.611 1.420 3.412 8.684 $67 $214 12 35 1 $67 $214 12 35 25.1377 4.7946
260 resnet-18 mobilenet-ssd OV-2022.3-8991 core-iGPU core Intel® Atom™ x6425E iGPU-only Intel® Core™ i5-8500 CPU-only 161.381 304.802 60.863 185.045 2.409 1.588 13.448 4.689 $67 $192 12 65 1 $67 $192 12 65 10.983 2.8007
261 resnet-18 mobilenet-ssd OV-2022.3-8991 core-CPU+iGPU core Intel® Atom™ x6425E CPU+iGPU Intel® Core™ i7-8700T CPU-only 197.477 412.261 73.927 241.37 2.947 1.361 16.456 11.779 $67 $303 12 35 1 $67 $303 12 35 2.8749
262 resnet-18 mobilenet-ssd OV-2022.3-8991 atom core Intel® Celeron™ 6305E CPU-only Intel® Core™ i9-10900TE CPU-only 105.574 315.107 28.914 152.342 0.895 0.646 7.038 9.003 $118 $488 15 35 1 $118 $488 15 35 9.6165 3.5896
263 resnet-18 mobilenet-ssd OV-2022.3-8991 core-iGPU xeon Intel® Celeron™ 6305E iGPU-only Intel® Xeon® W1290P CPU-only 194.694 774.346 55.341 345.309 1.65 1.304 12.98 6.195 $118 $594 15 125 1 $118 $594 15 125 6.643 1.5452
264 resnet-18 mobilenet-ssd OV-2022.3-8991 core-CPU+iGPU xeon Intel® Celeron™ 6305E CPU+iGPU Intel® Xeon® E-2124G CPU-only 291.441 233.43 82.105 147.098 2.47 0.937 19.429 3.288 $118 $249 15 71 1 $118 $249 15 71 4.5879
265 resnet-18 mobilenet-ssd OV-2022.3-8991 xeon Intel® Xeon® Gold 6336Y CPU-only Intel® Xeon® Gold 5218T CPU-only 5761.961 2331.207 1431.864 691.743 1.232 0.741 15.573 11.101 $4,678 $3,144 370 210 2 $2,339 $1,572 185 105 0.6512 1.4852
266 resnet-18 mobilenet-ssd OV-2022.3-8991 core xeon Intel® Core™ i3-8100 CPU-only Intel® Xeon® Gold 6448Y CPU-only 208.65 16445.75 8733.64 103.699 8626.42 1.783 2.295 3.21 36.546 $117 $7,166 65 450 1 2 $117 $3,583 65 225 5.0231 0.65
267 resnet-18 mobilenet-ssd OV-2022.3-8991 core xeon Intel® Core™ i5-10500TE CPU-only Intel® Xeon® Platinum 8270 CPU-only 252.987 6691.915 127.448 1796.357 1.182 0.395 7.228 16.322 $214 $16,954 35 410 1 2 $214 $8,477 35 205 5.2921 1.0518
268 resnet-18 mobilenet-ssd OV-2022.3-8991 core xeon Intel® Core™ i5-8500 CPU-only Intel® Xeon® Silver 4216R CPU-only 262.785 2225.935 134.002 667.692 1.369 1.111 4.043 8.904 $192 $2,004 65 250 1 2 $192 $1,002 65 125 3.0597 1.5444
269 resnet-18 mobilenet-ssd OV-2022.3-8991 core Intel® Core™ i7-8700T CPU-only Intel® Core™ i7-1165G7 CPU-only 344.219 579.307 175.433 166.959 1.136 1.235 9.835 20.690 $303 $469 35 28 1 $303 $469 35 28 3.1665 2.0215
270 resnet-18 mobilenet-ssd OV-2022.3-8991 core core-iGPU Intel® Core™ i9-10900TE CPU-only Intel® Core™ i7-1165G7 iGPU-only 265.351 582.636 130.166 243.945 0.544 1.242 7.581 20.808 $488 $469 35 28 1 $488 $469 35 28 4.0471 2.548
271 resnet-18 mobilenet-ssd OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® W1290P CPU-only Intel® Core™ i7-1165G7 CPU+iGPU 654.533 744.231 307.741 292.071 1.102 1.587 5.236 26.580 $594 $469 125 28 1 $594 $469 125 28 1.6723
272 resnet-18 mobilenet-ssd OV-2022.3-8991 xeon accel Intel® Xeon® E-2124G CPU-only Intel® Flex-170 GPU 198.189 3548.98 1412.68 101.399 0.796 1.844 2.791 23.66 $249 $1,925 71 150 1 $249 $1,925 71 150 5.2039 1.344
273 resnet-18 end_rec OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 2017.368 547.47 0.642 9.607 $3,144 210 2 $1,572 105 1.2913
274 resnet-18 begin_rec OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 27331.02 2329.12 3.814 60.736 $7,166 450 2 $3,583 225 0.38
275 resnet-18 mobilenet-v2 OV-2022.3-8991 xeon core Intel® Xeon® Platinum 8270 CPU-only Intel® Core™ i9-13900K CPU-only 6320.391 4041.77 1582.817 2123.33 0.373 6.748 15.416 32.334 $16,954 $599 410 125 2 1 $8,477 $599 205 125 0.667 0.66
276 resnet-18 mobilenet-v2 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Silver 4216R CPU-only Intel® Core™ i9-13900K iGPU-only 1940.935 978.61 522.654 424.34 0.969 1.634 7.764 7.829 $2,004 $599 250 125 2 1 $1,002 $599 125 1.3451 1.21
277 resnet-18 mobilenet-v2 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i7-1165G7 CPU-only Intel® Core™ i9-13900K CPU+iGPU 480.992 4630.44 126.244 1944.62 1.026 7.730 17.178 37.044 $469 $599 28 125 1 $469 $599 28 125 2.242
278 resnet-18 mobilenet-v2 OV-2022.3-8991 core-iGPU core Intel® Core™ i7-1165G7 iGPU-only Intel® Core™ i5-13600K CPU-only 1061.591 3306.92 297.705 1403.57 2.264 10.051 37.914 26.455 $469 $329 28 125 1 $469 $329 28 125 1.793 0.65
279 resnet-18 mobilenet-v2 OV-2022.3-8991 core-CPU+iGPU core-iGPU Intel® Core™ i7-1165G7 CPU+iGPU Intel® Core™ i5-13600K iGPU-only 1237.94 919.85 342.513 384.42 2.64 2.796 44.212 7.359 $469 $329 28 125 1 $469 $329 28 125 1.36
280 resnet-18 mobilenet-v2 OV-2022.3-8991 accel core-CPU+iGPU Intel® Flex-170 GPU Intel® Core™ i5-13600K CPU+iGPU 27454.08 3556.06 2264.67 1332.32 14.262 10.809 183.027 28.448 $1,925 $329 150 125 1 $1,925 $329 150 125 0.946
281 end_rec mobilenet-v2 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 2446.221 1003.129 3.718 14.826 $658 165 1 $658 165 0.7182
282 begin_rec mobilenet-v2 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 1265.969 389.894 1.924 7.673 $658 165 1 $658 165 1.3894
283 resnet-50 mobilenet-v2 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i9-12900K CPU-only Intel® Core™ i9-12900K CPU+iGPU 400.118 2680.458 133.834 1013.049 0.608 4.074 2.425 16.245 $658 165 1 $658 165 3.0384
284 resnet-50 mobilenet-v2 OV-2022.3-8991 core-iGPU atom Intel® Core™ i9-12900K iGPU-only Intel® Atom™ x5-E3940 CPU-only 229.863 81.572 66.122 45.013 0.349 2.399 1.393 8.587 $658 $34 165 9.5 1 $658 $34 165 9.5 5.2538 13.4692
285 resnet-50 mobilenet-v2 OV-2022.3-8991 core-CPU+iGPU atom Intel® Core™ i9-12900K CPU+iGPU Intel® Atom™ x6425E CPU-only 574.341 143.134 155.749 81.991 0.873 2.136 3.481 11.928 $658 $67 165 12 1 $658 $67 165 12 7.609
286 resnet-50 mobilenet-v2 OV-2022.3-8991 atom core-iGPU Intel® Atom™ x5-E3940 CPU-only Intel® Atom™ x6425E iGPU-only 11.094 4.516 164.945 0.326 0.000 1.168 0.000 $34 $67 9.5 12 1 $34 $67 9.5 12 92.6182 7.0306
287 resnet-50 mobilenet-v2 OV-2022.3-8991 atom core-CPU+iGPU Intel® Atom™ x6425E CPU-only Intel® Atom™ x6425E CPU+iGPU 20.114 227.898 8.254 202.181 0.3 3.401 1.676 18.992 $67 12 1 $67 12 51.0598
288 resnet-50 mobilenet-v2 OV-2022.3-8991 core-iGPU atom Intel® Atom™ x6425E iGPU-only Intel® Celeron™ 6305E CPU-only 66.119 316.763 29.408 124.654 0.987 2.684 5.51 21.118 $67 $118 12 15 1 $67 $118 12 15 21.6857 3.391
289 resnet-50 mobilenet-v2 OV-2022.3-8991 core-CPU+iGPU core-iGPU Intel® Atom™ x6425E CPU+iGPU Intel® Celeron™ 6305E iGPU-only 82.95 525.084 35.81 141.61 1.238 4.450 6.913 35.006 $67 $118 12 15 1 $67 $118 12 15 4.9197
290 resnet-50 mobilenet-v2 OV-2022.3-8991 atom core-CPU+iGPU Intel® Celeron™ 6305E CPU-only Intel® Celeron™ 6305E CPU+iGPU 52.004 807.397 14.152 255.964 0.441 6.842 3.467 53.826 $118 15 1 $118 15 19.6053
291 resnet-50 mobilenet-v2 OV-2022.3-8991 core-iGPU xeon Intel® Celeron™ 6305E iGPU-only Intel® Xeon® Gold 6336Y CPU-only 90.685 14679.84 24.633 4065.139 0.769 3.138 6.046 39.675 $118 $4,678 15 370 1 2 $118 $2,339 15 185 14.6415 0.4828
292 resnet-50 mobilenet-v2 OV-2022.3-8991 core-CPU+iGPU core Intel® Celeron™ 6305E CPU+iGPU Intel® Core™ i3-8100 CPU-only 140.062 619.507 37.864 452.366 1.187 5.295 9.337 9.531 $118 $117 15 65 1 $118 $117 15 65 - 1.8067
293 resnet-50 mobilenet-v2 OV-2022.3-8991 xeon core Intel® Xeon® Gold 6336Y CPU-only Intel® Core™ i5-10500TE CPU-only 2793.997 806.314 691.079 441.904 0.597 3.768 7.551 23.038 $4,678 $214 370 35 2 1 $2,339 $214 185 35 1.3288 2.1078
294 resnet-50 mobilenet-v2 OV-2022.3-8991 core core Intel® Core™ i3-8100 CPU-only Intel® Core™ i5-8500 CPU-only 102.328 766.072 52.896 558.975 0.875 3.990 1.574 11.786 $117 $192 65 1 $117 $192 65 10.4475 1.2307
295 resnet-50 mobilenet-v2 OV-2022.3-8991 core Intel® Core™ i5-10500TE CPU-only Intel® Core™ i7-8700T CPU-only 123.574 1081.253 63.836 664.108 0.577 3.568 3.531 30.893 $214 $303 35 1 $214 $303 35 11.6252 1.2788
296 resnet-50 mobilenet-v2 OV-2022.3-8991 core Intel® Core™ i5-8500 CPU-only Intel® Core™ i9-10900TE CPU-only 129.174 825.071 68.242 413.091 0.673 1.691 1.987 23.573 $192 $488 65 35 1 $192 $488 65 35 6.8498 1.6818
297 resnet-50 mobilenet-v2 OV-2022.3-8991 core xeon Intel® Core™ i7-8700T CPU-only Intel® Xeon® W1290P CPU-only 168.016 2067.162 88.675 868.25 0.555 3.480 4.8 16.537 $303 $594 35 125 1 $303 $594 35 125 6.9723 0.7363
298 resnet-50 mobilenet-v2 OV-2022.3-8991 core xeon Intel® Core™ i9-10900TE CPU-only Intel® Xeon® E-2124G CPU-only 129.371 594.283 56.366 479.567 0.265 2.387 3.696 8.370 $488 $249 35 71 1 $488 $249 35 71 8.7659 1.8531
299 resnet-50 mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only Intel® Xeon® Gold 5218T CPU-only 317.744 5882.455 149.441 1895.498 0.535 1.871 2.542 28.012 $594 $3,144 125 210 1 2 $594 $1,572 125 105 3.6469 1.3871
300 resnet-50 mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only Intel® Xeon® Gold 6448Y CPU-only 97.606 28383.76 16212.74 52.17 16065.38 0.392 3.961 1.375 63.075 $249 $7,166 71 450 1 2 $249 $3,583 71 225 10.851 0.55
301 resnet-50 mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only Intel® Xeon® Platinum 8270 CPU-only 980.813 15616.083 268.009 4308.927 0.312 0.921 4.671 38.088 $3,144 $16,954 210 410 2 $1,572 $8,477 105 205 2.9838 0.8685
302 resnet-50 mobilenet-v2 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only Intel® Xeon® Silver 4216R CPU-only 2905.803 5616.283 748.583 1835.686 0.405 2.803 6.457 22.465 $7,166 $2,004 450 250 2 $3,583 $1,002 225 125 1.475 1.404
303 resnet-50 mobilenet-v2 OV-2022.3-8991 xeon core Intel® Xeon® Gold 6448Y CPU-only Intel® Core™ i7-1165G7 CPU-only 11359.88 1463.21 1118.97 538.597 0.67 3.120 27.707 52.258 $16,954 $469 410 28 2 1 $8,477 $469 205 28 0.94 0.8864
304 resnet-50 mobilenet-v2 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Silver 4216R CPU-only Intel® Core™ i7-1165G7 iGPU-only 937.572 2076.015 255.866 544.641 0.468 4.426 3.75 74.143 $2,004 $469 250 28 2 1 $1,002 $469 125 28 3.0985 1.7212
305 resnet-50 mobilenet-v2 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i7-1165G7 CPU-only Intel® Core™ i7-1165G7 CPU+iGPU 235.061 2677.374 63.241 698.942 0.501 5.709 8.395 95.621 $469 28 1 $469 28 4.7975
306 resnet-50 mobilenet-v2 OV-2022.3-8991 core-iGPU accel Intel® Core™ i7-1165G7 iGPU-only Intel® Flex-170 GPU 235.061 18371.95 4738.33 63.241 0.501 9.544 8.395 122.48 $469 $1,925 28 150 1 $469 $1,925 28 150 4.7975 1.15
307 resnet-50 end_rec OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 235.061 63.241 0.501 8.395 $469 28 1 $469 28 4.7975
308 resnet-50 begin_rec OV-2022.3-8991 accel Intel® Flex-170 GPU 10810.92 1005.16 5.616 72.073 $1,925 150 1 $1,925 150 1.624
309 end_rec resnet-18 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 1495.77 415.82 2.497 11.966 $599 125 1 $599 125 1.38
310 begin_rec resnet-18 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 497.54 150.99 0.831 3.980 $599 125 1 $599 125 2.19
311 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i9-12900K CPU-only Intel® Core™ i9-13900K CPU+iGPU 6.712 1821.4 2.394 615.14 0.01 3.041 0.041 14.571 $658 $599 165 125 1 $658 $599 165 125 175.7493
312 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core-iGPU core Intel® Core™ i9-12900K iGPU-only Intel® Core™ i5-13600K CPU-only 4.228 1169.04 1.262 336.09 0.006 3.553 0.026 9.352 $658 $329 165 125 1 $658 $329 165 125 241.7838 1.5
313 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core-CPU+iGPU core-iGPU Intel® Core™ i9-12900K CPU+iGPU Intel® Core™ i5-13600K iGPU-only 6.666 467.43 2.393 141.76 0.01 1.421 0.04 3.739 $658 $329 165 125 1 $658 $329 165 125 2.36
314 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 atom core-CPU+iGPU Intel® Atom™ x5-E3940 CPU-only Intel® Core™ i5-13600K CPU+iGPU 0.171 1443.42 0.081 445.25 0.005 4.387 0.018 11.547 $34 $329 9.5 125 1 $34 $329 9.5 125 5985.7525
315 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 atom core Intel® Atom™ x6425E CPU-only Intel® Core™ i9-12900K CPU-only 0.31 804.771 0.133 212.574 0.005 1.223 0.026 4.877 $67 $658 12 165 1 $67 $658 12 165 3246.0878 1.3886
316 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core-iGPU Intel® Atom™ x6425E iGPU-only Intel® Core™ i9-12900K iGPU-only 0.965 491.337 0.615 146.839 0.014 0.747 0.08 2.978 $67 $658 12 165 1 $67 $658 12 165 1053.0078 2.2655
317 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core-CPU+iGPU Intel® Atom™ x6425E CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 0.31 1180.984 0.133 365.777 0.005 1.795 0.026 7.157 $67 $658 12 165 1 $67 $658 12 165
318 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 atom Intel® Celeron™ 6305E CPU-only Intel® Atom™ x5-E3940 CPU-only 0.806 22.96 0.23 9.564 0.007 0.675 0.054 2.417 $118 $34 15 9.5 1 $118 $34 15 9.5 1240.6212 44.5491
319 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core-iGPU atom Intel® Celeron™ 6305E iGPU-only Intel® Atom™ x6425E CPU-only 1.582 40.944 0.486 16.158 0.013 0.611 0.105 3.412 $118 $67 15 12 1 $118 $67 15 12 649.3806 25.1377
320 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core-CPU+iGPU core-iGPU Intel® Celeron™ 6305E CPU+iGPU Intel® Atom™ x6425E iGPU-only 0.806 161.381 0.231 60.863 0.007 2.409 0.054 13.448 $118 $67 15 12 1 $118 $67 15 12 10.983
321 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Gold 6336Y CPU-only Intel® Atom™ x6425E CPU+iGPU 41.52 197.477 12.672 73.927 0.009 2.947 0.112 16.456 $4,678 $67 370 12 2 1 $2,339 $67 185 12 79.0111
322 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core atom Intel® Core™ i3-8100 CPU-only Intel® Celeron™ 6305E CPU-only 1.606 105.574 0.959 28.914 0.014 0.895 0.025 7.038 $117 $118 65 15 1 $117 $118 65 15 644.0626 9.6165
323 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core core-iGPU Intel® Core™ i5-10500TE CPU-only Intel® Celeron™ 6305E iGPU-only 1.932 194.694 1.177 55.341 0.009 1.650 0.055 12.980 $214 $118 35 15 1 $214 $118 35 15 712.3677 6.643
324 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i5-8500 CPU-only Intel® Celeron™ 6305E CPU+iGPU 2.067 291.441 1.248 82.105 0.011 2.470 0.032 19.429 $192 $118 65 15 1 $192 $118 65 15 401.8765
325 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core xeon Intel® Core™ i7-8700T CPU-only Intel® Xeon® Gold 6336Y CPU-only 2.66 5761.961 1.606 1431.864 0.009 1.232 0.076 15.573 $303 $4,678 35 370 1 2 $303 $2,339 35 185 434.9877 0.6512
326 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core core Intel® Core™ i9-10900TE CPU-only Intel® Core™ i3-8100 CPU-only 2.046 208.65 1.242 103.699 0.004 1.783 0.058 3.210 $488 $117 35 65 1 $488 $117 35 65 485.4343 5.0231
327 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 xeon core Intel® Xeon® W1290P CPU-only Intel® Core™ i5-10500TE CPU-only 4.871 252.987 2.935 127.448 0.008 1.182 0.039 7.228 $594 $214 125 35 1 $594 $214 125 35 239.8346 5.2921
328 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 xeon core Intel® Xeon® E-2124G CPU-only Intel® Core™ i5-8500 CPU-only 1.55 262.785 0.919 134.002 0.006 1.369 0.022 4.043 $249 $192 71 65 1 $249 $192 71 65 665.2714 3.0597
329 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 xeon core Intel® Xeon® Gold 5218T CPU-only Intel® Core™ i7-8700T CPU-only 15.706 344.219 4.572 175.433 0.005 1.136 0.075 9.835 $3,144 $303 210 35 2 1 $1,572 $303 105 35 132.0319 3.1665
330 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 xeon core Intel® Xeon® Gold 6448Y CPU-only Intel® Core™ i9-10900TE CPU-only 152.74 265.351 20.32 130.166 0.021 0.544 0.339 7.581 $7,166 $488 450 35 2 1 $3,583 $488 225 35 14.48 4.0471
331 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only Intel® Xeon® W1290P CPU-only 47.365 654.533 14.722 307.741 0.003 1.102 0.116 5.236 $16,954 $594 410 125 2 1 $8,477 $594 205 125 44.387 1.6723
332 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only Intel® Xeon® E-2124G CPU-only 14.966 198.189 4.35 101.399 0.007 0.796 0.06 2.791 $2,004 $249 250 71 2 1 $1,002 $249 125 71 138.9625 5.2039
333 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core xeon Intel® Core™ i7-1165G7 CPU-only Intel® Xeon® Gold 5218T CPU-only 3.556 2017.368 1.015 547.47 0.008 0.642 0.127 9.607 $469 $3,144 28 210 1 2 $469 $1,572 28 105 284.2379 1.2913
334 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core-iGPU xeon Intel® Core™ i7-1165G7 iGPU-only Intel® Xeon® Gold 6448Y CPU-only 8.239 27331.02 16095.24 2.545 16009.04 0.018 3.814 0.294 60.736 $469 $7,166 28 450 1 2 $469 $3,583 28 225 122.4561 0.38
335 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 core-CPU+iGPU xeon Intel® Core™ i7-1165G7 CPU+iGPU Intel® Xeon® Platinum 8270 CPU-only 3.565 6320.391 1.01 1582.817 0.008 0.373 0.127 15.416 $469 $16,954 28 410 1 2 $469 $8,477 28 205 0.667
336 ssd-resnet34-1200 resnet-18 OV-2022.3-8991 accel xeon Intel® Flex-170 GPU Intel® Xeon® Silver 4216R CPU-only 132.44 1940.935 18.19 522.654 0.069 0.969 0.883 7.764 $1,925 $2,004 150 250 1 2 $1,925 $1,002 150 125 19.933 1.3451
337 end_rec resnet-18 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 480.992 126.244 1.026 17.178 $469 28 1 $469 28 2.242
338 begin_rec resnet-18 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 1061.591 297.705 2.264 37.914 $469 28 1 $469 28 1.793
339 unet-camvid--0001 resnet-18 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i9-12900K CPU-only Intel® Core™ i7-1165G7 CPU+iGPU 10.652 1237.94 3.873 342.513 0.016 2.640 0.065 44.212 $658 $469 165 28 1 $658 $469 165 28 111.0757
340 unet-camvid--0001 resnet-18 OV-2022.3-8991 core-iGPU accel Intel® Core™ i9-12900K iGPU-only Intel® Flex-170 GPU 7.059 27454.08 2264.67 2.154 0.011 14.262 0.043 183.027 $658 $1,925 165 150 1 $658 $1,925 165 150 142.0745 0.946
341 unet-camvid--0001 end_rec OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 14.933 4.935 0.023 0.091 $658 165 1 $658 165
342 unet-camvid--0001 begin_rec OV-2022.3-8991 atom Intel® Atom™ x5-E3940 CPU-only 0.258 0.039 0.008 0.027 $34 9.5 1 $34 9.5 3959.594
343 unet-camvid--0001 resnet-50 OV-2022.3-8991 atom core Intel® Atom™ x6425E CPU-only Intel® Core™ i9-13900K CPU-only 0.482 729.93 0.061 240.59 0.007 1.219 0.04 5.839 $67 $599 12 125 1 $67 $599 12 125 2094.2569 2.91
344 unet-camvid--0001 resnet-50 OV-2022.3-8991 core-iGPU Intel® Atom™ x6425E iGPU-only Intel® Core™ i9-13900K iGPU-only 1.994 238.44 0.989 68.18 0.03 0.398 0.166 1.908 $67 $599 12 125 1 $67 $599 12 125 502.6095 4.74
345 unet-camvid--0001 resnet-50 OV-2022.3-8991 core-CPU+iGPU Intel® Atom™ x6425E CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 2.242 895.28 0.597 255.91 0.033 1.495 0.187 7.162 $67 $599 12 125 1 $67 $599 12 125
346 unet-camvid--0001 resnet-50 OV-2022.3-8991 atom core Intel® Celeron™ 6305E CPU-only Intel® Core™ i5-13600K CPU-only 1.471 576.86 0.374 153.71 0.012 1.753 0.098 4.615 $118 $329 15 125 1 $118 $329 15 125 678.4977 3.04
347 unet-camvid--0001 resnet-50 OV-2022.3-8991 core-iGPU Intel® Celeron™ 6305E iGPU-only Intel® Core™ i5-13600K iGPU-only 2.715 216.97 0.802 64.36 0.023 0.659 0.181 1.736 $118 $329 15 125 1 $118 $329 15 125 368.8973 5.3
348 unet-camvid--0001 resnet-50 OV-2022.3-8991 core-CPU+iGPU Intel® Celeron™ 6305E CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 4.12 717.91 1.142 188.59 0.035 2.182 0.275 5.743 $118 $329 15 125 1 $118 $329 15 125
349 unet-camvid--0001 resnet-50 OV-2022.3-8991 xeon core Intel® Xeon® Gold 6336Y CPU-only Intel® Core™ i9-12900K CPU-only 81.838 400.118 19.314 133.834 0.017 0.608 0.221 2.425 $4,678 $658 370 165 2 1 $2,339 $658 185 165 41.506 3.0384
350 unet-camvid--0001 resnet-50 OV-2022.3-8991 core core-iGPU Intel® Core™ i3-8100 CPU-only Intel® Core™ i9-12900K iGPU-only 2.482 229.863 1.54 66.122 0.021 0.349 0.038 1.393 $117 $658 65 165 1 $117 $658 65 165 412.1291 5.2538
351 unet-camvid--0001 resnet-50 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i5-10500TE CPU-only Intel® Core™ i9-12900K CPU+iGPU 3.031 574.341 1.9 155.749 0.014 0.873 0.087 3.481 $214 $658 35 165 1 $214 $658 35 165 457.5992
352 unet-camvid--0001 resnet-50 OV-2022.3-8991 core atom Intel® Core™ i5-8500 CPU-only Intel® Atom™ x5-E3940 CPU-only 3.227 11.094 2.018 4.516 0.017 0.326 0.05 1.168 $192 $34 65 9.5 1 $192 $34 65 9.5 256.5479 92.6182
353 unet-camvid--0001 resnet-50 OV-2022.3-8991 core atom Intel® Core™ i7-8700T CPU-only Intel® Atom™ x6425E CPU-only 4.155 20.114 2.6 8.254 0.014 0.300 0.119 1.676 $303 $67 35 12 1 $303 $67 35 12 277.7416 51.0598
354 unet-camvid--0001 resnet-50 OV-2022.3-8991 core core-iGPU Intel® Core™ i9-10900TE CPU-only Intel® Atom™ x6425E iGPU-only 2.907 66.119 2.004 29.408 0.006 0.987 0.083 5.510 $488 $67 35 12 1 $488 $67 35 12 319.7667 21.6857
355 unet-camvid--0001 resnet-50 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® W1290P CPU-only Intel® Atom™ x6425E CPU+iGPU 7.413 82.95 4.615 35.81 0.012 1.238 0.059 6.913 $594 $67 125 12 1 $594 $67 125 12 157.3622
356 unet-camvid--0001 resnet-50 OV-2022.3-8991 xeon atom Intel® Xeon® E-2124G CPU-only Intel® Celeron™ 6305E CPU-only 2.386 52.004 1.481 14.152 0.01 0.441 0.034 3.467 $249 $118 71 15 1 $249 $118 71 15 422.1157 19.6053
357 unet-camvid--0001 resnet-50 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Gold 5218T CPU-only Intel® Celeron™ 6305E iGPU-only 29.251 90.685 7.301 24.633 0.009 0.769 0.139 6.046 $3,144 $118 210 15 2 1 $1,572 $118 105 15 69.3596 14.6415
358 unet-camvid--0001 resnet-50 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Gold 6448Y CPU-only Intel® Celeron™ 6305E CPU+iGPU 381.85 140.062 30.96 37.864 0.053 1.187 0.849 9.337 $7,166 $118 450 15 2 1 $3,583 $118 225 15 7.95 -
359 unet-camvid--0001 resnet-50 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only Intel® Xeon® Gold 6336Y CPU-only 93.081 2793.997 21.382 691.079 0.005 0.597 0.227 7.551 $16,954 $4,678 410 370 2 $8,477 $2,339 205 185 22.9476 1.3288
360 unet-camvid--0001 resnet-50 OV-2022.3-8991 xeon core Intel® Xeon® Silver 4216R CPU-only Intel® Core™ i3-8100 CPU-only 27.814 102.328 6.966 52.896 0.014 0.875 0.111 1.574 $2,004 $117 250 65 2 1 $1,002 $117 125 65 72.9773 10.4475
361 unet-camvid--0001 resnet-50 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only Intel® Core™ i5-10500TE CPU-only 6.54 123.574 1.677 63.836 0.014 0.577 0.234 3.531 $469 $214 28 35 1 $469 $214 28 35 152.602 11.6252
362 unet-camvid--0001 resnet-50 OV-2022.3-8991 core-iGPU core Intel® Core™ i7-1165G7 iGPU-only Intel® Core™ i5-8500 CPU-only 15.391 129.174 4.571 68.242 0.033 0.673 0.55 1.987 $469 $192 28 65 1 $469 $192 28 65 61.6002 6.8498
363 unet-camvid--0001 resnet-50 OV-2022.3-8991 core-CPU+iGPU core Intel® Core™ i7-1165G7 CPU+iGPU Intel® Core™ i7-8700T CPU-only 17.962 168.016 4.848 88.675 0.038 0.555 0.642 4.800 $469 $303 28 35 1 $469 $303 28 35 6.9723
364 unet-camvid--0001 resnet-50 OV-2022.3-8991 accel core Intel® Flex-170 GPU Intel® Core™ i9-10900TE CPU-only 218.12 129.371 35.2 56.366 0.113 0.265 1.454 3.696 $1,925 $488 150 35 1 $1,925 $488 150 35 7.149 8.7659
365 end_rec resnet-50 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 317.744 149.441 0.535 2.542 $594 125 1 $594 125 3.6469
366 begin_rec resnet-50 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 97.606 52.17 0.392 1.375 $249 71 1 $249 71 10.851
367 yolo_v3_tiny resnet-50 OV-2022.3-8991 core xeon Intel® Core™ i9-12900K CPU-only Intel® Xeon® Gold 5218T CPU-only 428.506 980.813 162.077 268.009 0.651 0.312 2.597 4.671 $658 $3,144 165 210 1 2 $658 $1,572 165 105 2.4778 2.9838
368 yolo_v3_tiny resnet-50 OV-2022.3-8991 core-iGPU xeon Intel® Core™ i9-12900K iGPU-only Intel® Xeon® Platinum 8270 CPU-only 245.738 2905.803 84.457 748.583 0.373 0.405 1.489 6.457 $658 $7,166 165 450 1 2 $658 $3,583 165 225 3.8792 1.475
369 yolo_v3_tiny resnet-50 OV-2022.3-8991 core-CPU+iGPU xeon Intel® Core™ i9-12900K CPU+iGPU Intel® Xeon® Gold 6448Y CPU-only 598.947 11359.88 5494.15 195.608 5497.22 0.91 0.670 3.63 27.707 $658 $16,954 165 410 1 2 $658 $8,477 165 205 0.94
370 yolo_v3_tiny resnet-50 OV-2022.3-8991 atom xeon Intel® Atom™ x5-E3940 CPU-only Intel® Xeon® Silver 4216R CPU-only 12.406 937.572 6.124 255.866 0.365 0.468 1.306 3.750 $34 $2,004 9.5 250 1 2 $34 $1,002 9.5 125 83.8614 3.0985
371 yolo_v3_tiny resnet-50 OV-2022.3-8991 atom core Intel® Atom™ x6425E CPU-only Intel® Core™ i7-1165G7 CPU-only 22.94 235.061 10.395 63.241 0.342 0.501 1.912 8.395 $67 $469 12 28 1 $67 $469 12 28 44.6243 4.7975
372 yolo_v3_tiny resnet-50 OV-2022.3-8991 core-iGPU Intel® Atom™ x6425E iGPU-only Intel® Core™ i7-1165G7 iGPU-only 66.641 504.247 38.178 125.407 0.995 1.075 5.553 18.009 $67 $469 12 28 1 $67 $469 12 28 15.7687 4.7975
373 yolo_v3_tiny resnet-50 OV-2022.3-8991 core-CPU+iGPU Intel® Atom™ x6425E CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 86.38 595.133 45.819 150.024 1.289 1.269 7.198 21.255 $67 $469 12 28 1 $67 $469 12 28 4.7975
374 yolo_v3_tiny resnet-50 OV-2022.3-8991 atom accel Intel® Celeron™ 6305E CPU-only Intel® Flex-170 GPU 55.629 10810.92 1005.16 18.246 0.471 5.616 3.709 72.073 $118 $1,925 15 150 1 $118 $1,925 15 150 18.2291 1.624
375 yolo_v3_tiny end_rec OV-2022.3-8991 core-iGPU Intel® Celeron™ 6305E iGPU-only 106.588 31.376 0.903 7.106 $118 15 1 $118 15 10.8727
376 yolo_v3_tiny begin_rec OV-2022.3-8991 core-CPU+iGPU Intel® Celeron™ 6305E CPU+iGPU 153.471 46.125 1.301 10.231 $118 15 1 $118 15
377 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 xeon core Intel® Xeon® Gold 6336Y CPU-only Intel® Core™ i9-13900K CPU-only 2733.627 11.75 761.534 4.24 0.584 0.020 7.388 0.094 $4,678 $599 370 125 2 1 $2,339 $599 185 125 1.1267 162.07
378 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 core core-iGPU Intel® Core™ i3-8100 CPU-only Intel® Core™ i9-13900K iGPU-only 114.701 4.5 66.266 1.45 0.98 0.008 1.765 0.036 $117 $599 65 125 1 $117 $599 65 125 8.9295 226.99
379 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i5-10500TE CPU-only Intel® Core™ i9-13900K CPU+iGPU 141.001 11.63 79.694 4.24 0.659 0.019 4.029 0.093 $214 $599 35 125 1 $214 $599 35 125 9.9196
380 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 core Intel® Core™ i5-8500 CPU-only Intel® Core™ i5-13600K CPU-only 145.659 8.21 85.158 2.7 0.759 0.025 2.241 0.066 $192 $329 65 125 1 $192 $329 65 125 5.465 147.53
381 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 core core-iGPU Intel® Core™ i7-8700T CPU-only Intel® Core™ i5-13600K iGPU-only 191.931 4.22 109.625 1.36 0.633 0.013 5.484 0.034 $303 $329 35 125 1 $303 $329 35 125 5.5981 241.92
382 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i9-10900TE CPU-only Intel® Core™ i5-13600K CPU+iGPU 147.041 8 84.448 2.7 0.301 0.024 4.201 0.064 $488 $329 35 125 1 $488 $329 35 125 7.0171
383 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 xeon core Intel® Xeon® W1290P CPU-only Intel® Core™ i9-12900K CPU-only 359.61 6.712 173.635 2.394 0.605 0.010 2.877 0.041 $594 $658 125 165 1 $594 $658 125 165 2.9037 175.7493
384 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® E-2124G CPU-only Intel® Core™ i9-12900K iGPU-only 109.066 4.228 64.87 1.262 0.438 0.006 1.536 0.026 $249 $658 71 165 1 $249 $658 71 165 9.3792 241.7838
385 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Gold 5218T CPU-only Intel® Core™ i9-12900K CPU+iGPU 1058.322 6.666 337.035 2.393 0.337 0.010 5.04 0.040 $3,144 $658 210 165 2 1 $1,572 $658 105 165 2.4971
386 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 xeon atom Intel® Xeon® Gold 6448Y CPU-only Intel® Atom™ x5-E3940 CPU-only 7344.88 0.171 1405.51 0.081 1.025 0.005 16.322 0.018 $7,166 $34 450 9.5 2 1 $3,583 $34 225 9.5 1.06 5985.7525
387 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 xeon atom Intel® Xeon® Platinum 8270 CPU-only Intel® Atom™ x6425E CPU-only 2931.242 0.31 901.832 0.133 0.173 0.005 7.149 0.026 $16,954 $67 410 12 2 1 $8,477 $67 205 12 1.215 3246.0878
388 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Silver 4216R CPU-only Intel® Atom™ x6425E iGPU-only 1015.77 0.965 321.263 0.615 0.507 0.014 4.063 0.080 $2,004 $67 250 12 2 1 $1,002 $67 125 12 2.6076 1053.0078
389 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i7-1165G7 CPU-only Intel® Atom™ x6425E CPU+iGPU 258.05 0.31 79.963 0.133 0.55 0.005 9.216 0.026 $469 $67 28 12 1 $469 $67 28 12 4.1833
390 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 core-iGPU atom Intel® Core™ i7-1165G7 iGPU-only Intel® Celeron™ 6305E CPU-only 492.645 0.806 157.98 0.23 1.05 0.007 17.594 0.054 $469 $118 28 15 1 $469 $118 28 15 2.5788 1240.6212
391 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 core-CPU+iGPU core-iGPU Intel® Core™ i7-1165G7 CPU+iGPU Intel® Celeron™ 6305E iGPU-only 606.117 1.582 186.339 0.486 1.292 0.013 21.647 0.105 $469 $118 28 15 1 $469 $118 28 15 649.3806
392 yolo_v3_tiny ssd-resnet34-1200 OV-2022.3-8991 accel core-CPU+iGPU Intel® Flex-170 GPU Intel® Celeron™ 6305E CPU+iGPU 3634.16 0.806 1209.67 0.231 1.888 0.007 24.228 0.054 $1,925 $118 150 15 1 $1,925 $118 150 15 1.293
393 end_rec ssd-resnet34-1200 OV-2022.3-8991 xeon Intel® Xeon® Gold 6336Y CPU-only 41.52 12.672 0.009 0.112 $4,678 370 2 $2,339 185 79.0111
394 begin_rec ssd-resnet34-1200 OV-2022.3-8991 core Intel® Core™ i3-8100 CPU-only 1.606 0.959 0.014 0.025 $117 65 1 $117 65 644.0626
395 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only Intel® Core™ i5-10500TE CPU-only 21.833 1.932 7.096 1.177 0.033 0.009 0.132 0.055 $658 $214 165 35 1 $658 $214 165 35 58.4745 712.3677
396 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core-iGPU core Intel® Core™ i9-12900K iGPU-only Intel® Core™ i5-8500 CPU-only 11.956 2.067 3.869 1.248 0.018 0.011 0.072 0.032 $658 $192 165 65 1 $658 $192 165 65 85.1633 401.8765
397 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core-CPU+iGPU core Intel® Core™ i9-12900K CPU+iGPU Intel® Core™ i7-8700T CPU-only 26.693 2.66 8.644 1.606 0.041 0.009 0.162 0.076 $658 $303 165 35 1 $658 $303 165 35 434.9877
398 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 atom core Intel® Atom™ x5-E3940 CPU-only Intel® Core™ i9-10900TE CPU-only 0.522 2.046 0.248 1.242 0.015 0.004 0.055 0.058 $34 $488 9.5 35 1 $34 $488 9.5 35 1900.0218 485.4343
399 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 atom xeon Intel® Atom™ x6425E CPU-only Intel® Xeon® W1290P CPU-only 0.99 4.871 0.43 2.935 0.015 0.008 0.083 0.039 $67 $594 12 125 1 $67 $594 12 125 1019.82 239.8346
400 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core-iGPU xeon Intel® Atom™ x6425E iGPU-only Intel® Xeon® E-2124G CPU-only 3.413 1.55 1.752 0.919 0.051 0.006 0.284 0.022 $67 $249 12 71 1 $67 $249 12 71 295.7702 665.2714
401 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core-CPU+iGPU xeon Intel® Atom™ x6425E CPU+iGPU Intel® Xeon® Gold 5218T CPU-only 3.999 15.706 2.087 4.572 0.06 0.005 0.333 0.075 $67 $3,144 12 210 1 2 $67 $1,572 12 105 132.0319
402 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 atom xeon Intel® Celeron™ 6305E CPU-only Intel® Xeon® Gold 6448Y CPU-only 2.453 152.74 144.16 0.748 144.02 0.021 0.164 0.339 $118 $7,166 15 450 1 2 $118 $3,583 15 225 407.2474 14.48
403 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core-iGPU xeon Intel® Celeron™ 6305E iGPU-only Intel® Xeon® Platinum 8270 CPU-only 4.758 47.365 1.434 14.722 0.04 0.003 0.317 0.116 $118 $16,954 15 410 1 2 $118 $8,477 15 205 212.7987 44.387
404 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core-CPU+iGPU xeon Intel® Celeron™ 6305E CPU+iGPU Intel® Xeon® Silver 4216R CPU-only 7.048 14.966 2.122 4.35 0.06 0.007 0.47 0.060 $118 $2,004 15 250 1 2 $118 $1,002 15 125 138.9625
405 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 xeon core Intel® Xeon® Gold 6336Y CPU-only Intel® Core™ i7-1165G7 CPU-only 126.954 3.556 35.481 1.015 0.027 0.008 0.343 0.127 $4,678 $469 370 28 2 1 $2,339 $469 185 28 37.8189 284.2379
406 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core core-iGPU Intel® Core™ i3-8100 CPU-only Intel® Core™ i7-1165G7 iGPU-only 4.971 8.239 2.885 2.545 0.042 0.018 0.076 0.294 $117 $469 65 28 1 $117 $469 65 28 203.4163 122.4561
407 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core core-CPU+iGPU Intel® Core™ i5-10500TE CPU-only Intel® Core™ i7-1165G7 CPU+iGPU 6.182 3.565 3.532 1.01 0.029 0.008 0.177 0.127 $214 $469 35 28 1 $214 $469 35 28 227.5786
408 yolo_v4 ssd-resnet34-1200 OV-2022.3-8991 core accel Intel® Core™ i5-8500 CPU-only Intel® Flex-170 GPU 6.356 132.44 18.19 3.757 0.033 0.069 0.098 0.883 $192 $1,925 65 150 1 $192 $1,925 65 150 123.3181 19.933
409 yolo_v4 end_rec OV-2022.3-8991 core Intel® Core™ i7-8700T CPU-only 8.44 4.868 0.028 0.241 $303 35 1 $303 35 135.9719
410 yolo_v4 begin_rec OV-2022.3-8991 core Intel® Core™ i9-10900TE CPU-only 6.399 3.765 0.013 0.183 $488 35 1 $488 35 155.642
411 yolo_v4 unet-camvid--0001 OV-2022.3-8991 xeon core Intel® Xeon® W1290P CPU-only Intel® Core™ i9-13900K CPU-only 15.614 18.79 7.925 6.86 0.026 0.031 0.125 0.150 $594 $599 125 1 $594 $599 125 71.631 99.01
412 yolo_v4 unet-camvid--0001 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® E-2124G CPU-only Intel® Core™ i9-13900K iGPU-only 4.674 7.59 2.804 2.3 0.019 0.013 0.066 0.061 $249 $599 71 125 1 $249 $599 71 125 214.0957 132.32
413 yolo_v4 unet-camvid--0001 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Gold 5218T CPU-only Intel® Core™ i9-13900K CPU+iGPU 47.338 18.14 14.464 7 0.015 0.030 0.225 0.145 $3,144 $599 210 125 2 1 $1,572 $599 105 125 45.7699
414 yolo_v4 unet-camvid--0001 OV-2022.3-8991 xeon core Intel® Xeon® Gold 6448Y CPU-only Intel® Core™ i5-13600K CPU-only 252.03 12.91 58.12 4.36 0.035 0.039 0.56 0.103 $7,166 $329 450 125 2 1 $3,583 $329 225 125 15.01 95.92
415 yolo_v4 unet-camvid--0001 OV-2022.3-8991 xeon core-iGPU Intel® Xeon® Platinum 8270 CPU-only Intel® Core™ i5-13600K iGPU-only 131.466 7.13 41.001 2.16 0.008 0.022 0.321 0.057 $16,954 $329 410 125 2 1 $8,477 $329 205 125 19.2807 140.88
416 yolo_v4 unet-camvid--0001 OV-2022.3-8991 xeon core-CPU+iGPU Intel® Xeon® Silver 4216R CPU-only Intel® Core™ i5-13600K CPU+iGPU 45.047 16.63 13.741 5.72 0.022 0.051 0.18 0.133 $2,004 $329 250 125 2 1 $1,002 $329 125 48.0344
417 yolo_v4 unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only Intel® Core™ i9-12900K CPU-only 11.067 10.652 3.259 3.873 0.024 0.016 0.395 0.065 $469 $658 28 165 1 $469 $658 28 165 92.2912 111.0757
418 yolo_v4 unet-camvid--0001 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only Intel® Core™ i9-12900K iGPU-only 25.048 7.059 7.384 2.154 0.053 0.011 0.895 0.043 $469 $658 28 165 1 $469 $658 28 165 39.1492 142.0745
419 yolo_v4 unet-camvid--0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 29.658 14.933 8.32 4.935 0.063 0.023 1.059 0.091 $469 $658 28 165 1 $469 $658 28 165
420 yolo_v4 unet-camvid--0001 OV-2022.3-8991 accel atom Intel® Flex-170 GPU Intel® Atom™ x5-E3940 CPU-only 454.49 0.258 56.78 0.039 0.236 0.008 3.03 0.027 $1,925 $34 150 9.5 1 $1,925 $34 150 9.5 6.969 3959.594
421 end_rec unet-camvid--0001 OV-2022.3-8991 atom Intel® Atom™ x6425E CPU-only 0.482 0.061 0.007 0.040 $67 12 1 $67 12 2094.2569
422 unet-camvid--0001 OV-2022.3-8991 core-iGPU Intel® Atom™ x6425E iGPU-only 1.994 0.989 0.030 0.166 $67 12 1 $67 12 502.6095
423 unet-camvid--0001 OV-2022.3-8991 core-CPU+iGPU Intel® Atom™ x6425E CPU+iGPU 2.242 0.597 0.033 0.187 $67 12 1 $67 12
424 unet-camvid--0001 OV-2022.3-8991 atom Intel® Celeron™ 6305E CPU-only 1.471 0.374 0.012 0.098 $118 15 1 $118 15 678.4977
425 unet-camvid--0001 OV-2022.3-8991 core-iGPU Intel® Celeron™ 6305E iGPU-only 2.715 0.802 0.023 0.181 $118 15 1 $118 15 368.8973
426 unet-camvid--0001 OV-2022.3-8991 core-CPU+iGPU Intel® Celeron™ 6305E CPU+iGPU 4.12 1.142 0.035 0.275 $118 15 1 $118 15
427 unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® Gold 6336Y CPU-only 81.838 19.314 0.017 0.221 $4,678 370 2 $2,339 185 41.506
428 unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i3-8100 CPU-only 2.482 1.54 0.021 0.038 $117 65 1 $117 65 412.1291
429 unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i5-10500TE CPU-only 3.031 1.9 0.014 0.087 $214 35 1 $214 35 457.5992
430 unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i5-8500 CPU-only 3.227 2.018 0.017 0.050 $192 65 1 $192 65 256.5479
431 unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i7-8700T CPU-only 4.155 2.6 0.014 0.119 $303 35 1 $303 35 277.7416
432 unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i9-10900TE CPU-only 2.907 2.004 0.006 0.083 $488 35 1 $488 35 319.7667
433 unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 7.413 4.615 0.012 0.059 $594 125 1 $594 125 157.3622
434 unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 2.386 1.481 0.010 0.034 $249 71 1 $249 71 422.1157
435 unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 29.251 7.301 0.009 0.139 $3,144 210 2 $1,572 105 69.3596
436 unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 381.85 151.97 151.98 0.053 0.849 $7,166 450 2 $3,583 225 7.95
437 unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 93.081 21.382 0.005 0.227 $16,954 410 2 $8,477 205 22.9476
438 unet-camvid--0001 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 27.814 6.966 0.014 0.111 $2,004 250 2 $1,002 125 72.9773
439 unet-camvid--0001 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 6.54 1.677 0.014 0.234 $469 28 1 $469 28 152.602
440 unet-camvid--0001 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 15.391 4.571 0.033 0.550 $469 28 1 $469 28 61.6002
441 unet-camvid--0001 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 17.962 4.848 0.038 0.642 $469 28 1 $469 28
442 unet-camvid--0001 OV-2022.3-8991 accel Intel® Flex-170 GPU 218.12 35.2 0.113 1.454 $1,925 150 1 $1,925 150 7.149
443 end_rec
444 begin_rec
445 yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 802.63 252.57 1.340 6.421 $599 125 1 $599 125 2.69
446 yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 249.5 86.81 0.417 1.996 $599 125 1 $599 125 4.79
447 yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 795.31 247.17 1.328 6.362 $599 125 1 $599 125
448 yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 638.25 206.62 1.940 5.106 $329 125 1 $329 125 2.59
449 yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 229.22 81.49 0.697 1.834 $329 125 1 $329 125 5.22
450 yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 631.71 205.81 1.920 5.054 $329 125 1 $329 125
451 yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 428.506 162.077 0.651 2.597 $658 165 1 $658 165 2.4778
452 yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 245.738 84.457 0.373 1.489 $658 165 1 $658 165 3.8792
453 yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 598.947 195.608 0.910 3.630 $658 165 1 $658 165
454 yolo_v3_tiny OV-2022.3-8991 atom Intel® Atom™ x5-E3940 CPU-only 12.406 6.124 0.365 1.306 $34 9.5 1 $34 9.5 83.8614
455 yolo_v3_tiny OV-2022.3-8991 atom Intel® Atom™ x6425E CPU-only 22.94 10.395 0.342 1.912 $67 12 1 $67 12 44.6243
456 yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Atom™ x6425E iGPU-only 66.641 38.178 0.995 5.553 $67 12 1 $67 12 15.7687
457 yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Atom™ x6425E CPU+iGPU 86.38 45.819 1.289 7.198 $67 12 1 $67 12
458 yolo_v3_tiny OV-2022.3-8991 atom Intel® Celeron™ 6305E CPU-only 55.629 18.246 0.471 3.709 $118 15 1 $118 15 18.2291
459 yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Celeron™ 6305E iGPU-only 106.588 31.376 0.903 7.106 $118 15 1 $118 15 10.8727
460 yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Celeron™ 6305E CPU+iGPU 153.471 46.125 1.301 10.231 $118 15 1 $118 15
461 yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® Gold 6336Y CPU-only 2733.627 761.534 0.584 7.388 $4,678 370 2 $2,339 185 1.1267
462 yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i3-8100 CPU-only 114.701 66.266 0.980 1.765 $117 65 1 $117 65 8.9295
463 yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i5-10500TE CPU-only 141.001 79.694 0.659 4.029 $214 35 1 $214 35 9.9196
464 yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i5-8500 CPU-only 145.659 85.158 0.759 2.241 $192 65 1 $192 65 5.465
465 yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i7-8700T CPU-only 191.931 109.625 0.633 5.484 $303 35 1 $303 35 5.5981
466 yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i9-10900TE CPU-only 147.041 84.448 0.301 4.201 $488 35 1 $488 35 7.0171
467 yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 359.61 173.635 0.605 2.877 $594 125 1 $594 125 2.9037
468 yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 109.066 64.87 0.438 1.536 $249 71 1 $249 71 9.3792
469 yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 1058.322 337.035 0.337 5.040 $3,144 210 2 $1,572 105 2.4971
470 yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 7344.88 5212.9 5236.28 1.025 16.322 $7,166 450 2 $3,583 225 1.06
471 yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 2931.242 901.832 0.173 7.149 $16,954 410 2 $8,477 205 1.215
472 yolo_v3_tiny OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 1015.77 321.263 0.507 4.063 $2,004 250 2 $1,002 125 2.6076
473 yolo_v3_tiny OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 258.05 79.963 0.550 9.216 $469 28 1 $469 28 4.1833
474 yolo_v3_tiny OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 492.645 157.98 1.050 17.594 $469 28 1 $469 28 2.5788
475 yolo_v3_tiny OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 606.117 186.339 1.292 21.647 $469 28 1 $469 28
476 yolo_v3_tiny OV-2022.3-8991 accel Intel® Flex-170 GPU 3634.16 1209.67 1.888 24.228 $1,925 150 1 $1,925 150 1.293
477 end_rec
478 begin_rec
479 yolo_v4 OV-2022.3-8991 core Intel® Core™ i9-13900K CPU-only 37.15 13.03 0.062 0.297 $599 125 1 $599 125 55.96
480 yolo_v4 OV-2022.3-8991 core-iGPU Intel® Core™ i9-13900K iGPU-only 12.92 4.26 0.022 0.103 $599 125 1 $599 125 78.73
481 yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-13900K CPU+iGPU 37.16 13.54 0.062 0.297 $599 125 1 $599 125
482 yolo_v4 OV-2022.3-8991 core Intel® Core™ i5-13600K CPU-only 25.5 8.36 0.078 0.204 $329 125 1 $329 125 53.79
483 yolo_v4 OV-2022.3-8991 core-iGPU Intel® Core™ i5-13600K iGPU-only 12.15 4 0.037 0.097 $329 125 1 $329 125 83.64
484 yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i5-13600K CPU+iGPU 31.99 10.82 0.097 0.256 $329 125 1 $329 125
485 yolo_v4 OV-2022.3-8991 core Intel® Core™ i9-12900K CPU-only 21.833 7.096 0.033 0.132 $658 165 1 $658 165 58.4745
486 yolo_v4 OV-2022.3-8991 core-iGPU Intel® Core™ i9-12900K iGPU-only 11.956 3.869 0.018 0.072 $658 165 1 $658 165 85.1633
487 yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i9-12900K CPU+iGPU 26.693 8.644 0.041 0.162 $658 165 1 $658 165
488 yolo_v4 OV-2022.3-8991 atom Intel® Atom™ x5-E3940 CPU-only 0.522 0.248 0.015 0.055 $34 9.5 1 $34 9.5 1900.0218
489 yolo_v4 OV-2022.3-8991 atom Intel® Atom™ x6425E CPU-only 0.99 0.43 0.015 0.083 $67 12 1 $67 12 1019.82
490 yolo_v4 OV-2022.3-8991 core-iGPU Intel® Atom™ x6425E iGPU-only 3.413 1.752 0.051 0.284 $67 12 1 $67 12 295.7702
491 yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Atom™ x6425E CPU+iGPU 3.999 2.087 0.060 0.333 $67 12 1 $67 12
492 yolo_v4 OV-2022.3-8991 atom Intel® Celeron™ 6305E CPU-only 2.453 0.748 0.021 0.164 $118 15 1 $118 15 407.2474
493 yolo_v4 OV-2022.3-8991 core-iGPU Intel® Celeron™ 6305E iGPU-only 4.758 1.434 0.040 0.317 $118 15 1 $118 15 212.7987
494 yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Celeron™ 6305E CPU+iGPU 7.048 2.122 0.060 0.470 $118 15 1 $118 15
495 yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® Gold 6336Y CPU-only 126.954 35.481 0.027 0.343 $4,678 370 2 $2,339 185 37.8189
496 yolo_v4 OV-2022.3-8991 core Intel® Core™ i3-8100 CPU-only 4.971 2.885 0.042 0.076 $117 65 1 $117 65 203.4163
497 yolo_v4 OV-2022.3-8991 core Intel® Core™ i5-10500TE CPU-only 6.182 3.532 0.029 0.177 $214 35 1 $214 35 227.5786
498 yolo_v4 OV-2022.3-8991 core Intel® Core™ i5-8500 CPU-only 6.356 3.757 0.033 0.098 $192 65 1 $192 65 123.3181
499 yolo_v4 OV-2022.3-8991 core Intel® Core™ i7-8700T CPU-only 8.44 4.868 0.028 0.241 $303 35 1 $303 35 135.9719
500 yolo_v4 OV-2022.3-8991 core Intel® Core™ i9-10900TE CPU-only 6.399 3.765 0.013 0.183 $488 35 1 $488 35 155.642
501 yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® W1290P CPU-only 15.614 7.925 0.026 0.125 $594 125 1 $594 125 71.631
502 yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® E-2124G CPU-only 4.674 2.804 0.019 0.066 $249 71 1 $249 71 214.0957
503 yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® Gold 5218T CPU-only 47.338 14.464 0.015 0.225 $3,144 210 2 $1,572 105 45.7699
504 yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® Gold 6448Y CPU-only 252.03 228.55 228.67 0.035 0.560 $7,166 450 2 $3,583 225 15.01
505 yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® Platinum 8270 CPU-only 131.466 41.001 0.008 0.321 $16,954 410 2 $8,477 205 19.2807
506 yolo_v4 OV-2022.3-8991 xeon Intel® Xeon® Silver 4216R CPU-only 45.047 13.741 0.022 0.180 $2,004 250 2 $1,002 125 48.0344
507 yolo_v4 OV-2022.3-8991 core Intel® Core™ i7-1165G7 CPU-only 11.067 3.259 0.024 0.395 $469 28 1 $469 28 92.2912
508 yolo_v4 OV-2022.3-8991 core-iGPU Intel® Core™ i7-1165G7 iGPU-only 25.048 7.384 0.053 0.895 $469 28 1 $469 28 39.1492
509 yolo_v4 OV-2022.3-8991 core-CPU+iGPU Intel® Core™ i7-1165G7 CPU+iGPU 29.658 8.32 0.063 1.059 $469 28 1 $469 28
510 yolo_v4 OV-2022.3-8991 accel Intel® Flex-170 GPU 454.49 56.78 0.236 3.03 $1,925 150 1 $1,925 150 6.969
511 end_rec

Network model,Release,IE-Type,Platform name,Throughput-OVMS-INT8,Throughput-OV-INT8,Throughput-OVMS-FP32,Throughput-OV-FP32
begin_rec,,,,,,,
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,19.05,19.24,12.84,13.02
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,21.75,22.97,17.16,17.32
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,18.00,18.33,11.91,12.06
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,81.48,87.59,46.81,48.37
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,207.39,231.10,104.07,125.89
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,282.09,287.81,159.05,162.28
end_rec,,,,,,,
begin_rec,,,,,,,
DeeplabV3,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,28.29,31.56,15.94,16.90
DeeplabV3,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,37.92,40.93,19.35,20.38
DeeplabV3,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,26.10,27.99,15.33,15.78
DeeplabV3,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,118.32,142.36,26.18,27.37
DeeplabV3,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,347.24,391.34,53.95,73.45
DeeplabV3,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,425.70,538.96,125.09,132.23
end_rec,,,,,,,
begin_rec,,,,,,,
Densenet-121,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,117.68,123.85,68.41,71.42
Densenet-121,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,151.83,161.15,90.37,94.03
Densenet-121,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,97.49,101.95,61.08,62.79
Densenet-121,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,765.57,857.26,205.00,225.97
Densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,2039.41,2205.00,582.14,600.78
Densenet-121,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,2316.39,2501.85,662.25,686.40
end_rec,,,,,,,
begin_rec,,,,,,,
Efficientdet-D0,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,42.26,43.69,25.09,26.62
Efficientdet-D0,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,49.48,50.11,29.37,30.93
Efficientdet-D0,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,37.48,38.96,26.29,27.90
Efficientdet-D0,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,125.90,143.68,51.04,55.33
Efficientdet-D0,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,302.06,335.20,168.52,177.62
Efficientdet-D0,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,362.66,415.28,244.88,254.03
end_rec,,,,,,,
begin_rec,,,,,,,
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,29.95,33.16,16.58,17.08
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,43.60,44.77,22.21,22.39
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,27.76,28.08,14.16,14.41
Inception-V4,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,253.30,275.06,60.19,63.55
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,656.23,690.46,158.05,161.39
Inception-V4,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,747.08,782.74,185.16,187.21
end_rec,,,,,,,
begin_rec,,,,,,,
Mobilenet-SSD,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,247.50,275.77,133.42,148.03
Mobilenet-SSD,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,311.96,358.32,176.63,199.53
Mobilenet-SSD,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,213.07,237.43,128.63,138.09
Mobilenet-SSD,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,1382.37,1935.88,391.43,484.28
Mobilenet-SSD,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,3578.49,4790.04,1062.88,1141.50
Mobilenet-SSD,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,4131.44,5693.82,1319.32,1494.70
end_rec,,,,,,,
begin_rec,,,,,,,
Mobilenet-V2,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,470.51,546.68,286.64,336.47
Mobilenet-V2,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,567.21,690.80,378.24,462.46
Mobilenet-V2,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,399.15,470.87,283.32,318.23
Mobilenet-V2,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,2493.12,3426.14,765.45,941.54
Mobilenet-V2,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,6679.14,9143.29,2302.78,2511.31
Mobilenet-V2,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,7371.67,10494.29,2672.91,3192.44
end_rec,,,,,,,
begin_rec,,,,,,,
Resnet-18,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,210.80,228.46,106.61,116.30
Resnet-18,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,279.43,303.27,142.79,151.45
Resnet-18,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,184.06,194.48,91.60,94.53
Resnet-18,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,1490.65,1809.32,409.17,464.62
Resnet-18,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,3918.52,4568.67,1138.07,1166.20
Resnet-18,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,4477.09,5192.77,1294.96,1309.89
end_rec,,,,,,,
begin_rec,,,,,,,
Resnet-50,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,108.35,114.48,55.15,57.62
Resnet-50,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,142.74,149.99,73.33,75.63
Resnet-50,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,98.10,100.62,47.21,48.40
Resnet-50,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,786.06,893.37,182.61,200.00
Resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,2066.51,2231.60,464.01,518.88
Resnet-50,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,2336.42,2508.77,613.40,632.38
end_rec,,,,,,,
begin_rec,,,,,,,
SSD-Resnet34-1200,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,1.74,1.83,0.89,1.05
SSD-Resnet34-1200,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,2.46,2.48,1.37,1.42
SSD-Resnet34-1200,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,1.41,1.58,0.66,0.88
SSD-Resnet34-1200,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,14.59,15.29,3.97,4.03
SSD-Resnet34-1200,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,35.42,36.77,10.14,10.46
SSD-Resnet34-1200,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,41.35,43.93,11.73,12.20
end_rec,,,,,,,
begin_rec,,,,,,,
Unet-Camvid--0001,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,2.57,2.78,1.62,1.70
Unet-Camvid--0001,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,3.68,3.71,2.15,2.29
Unet-Camvid--0001,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,2.25,2.38,1.36,1.45
Unet-Camvid--0001,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,25.52,26.93,5.57,5.69
Unet-Camvid--0001,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,60.15,65.11,15.01,15.15
Unet-Camvid--0001,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,69.58,76.46,17.16,17.97
end_rec,,,,,,,
begin_rec,,,,,,,
Yolo_V3_Tiny,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,114.02,127.37,67.06,72.20
Yolo_V3_Tiny,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,148.72,168.41,85.62,91.66
Yolo_V3_Tiny,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,98.44,107.53,56.42,60.41
Yolo_V3_Tiny,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,592.92,850.58,207.96,240.90
Yolo_V3_Tiny,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,1631.49,2031.46,534.51,611.68
Yolo_V3_Tiny,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,1774.41,2428.00,691.96,725.60
end_rec,,,,,,,
begin_rec,,,,,,,
Yolo_V4,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,5.44,5.66,3.17,3.25
Yolo_V4,OV-2022.3-8991,core,Intel® Core™ i5-8500 CPU-only,7.24,7.40,4.19,4.21
Yolo_V4,OV-2022.3-8991,core,Intel® Core™ i7-8700T CPU-only,4.60,4.71,2.45,2.68
Yolo_V4,OV-2022.3-8991,core,Intel® Core™ i9-10920X CPU-only,36.33,40.21,10.52,10.90
Yolo_V4,OV-2022.3-8991,xeon,Intel® Xeon® Gold 6238 CPU-only,81.88,95.46,26.43,27.57
Yolo_V4,OV-2022.3-8991,xeon,Intel® Xeon® 8260 CPU-only,96.58,111.93,30.48,34.50
end_rec,,,,,,,
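The CSV above delimits each network model's rows with `begin_rec`/`end_rec` marker rows, padded with empty fields to match the header width. A minimal sketch of how such a file could be grouped into per-model records and a derived metric computed (the `parse_benchmark_csv` helper and the speedup calculation are illustrative, not part of the repository):

```python
import csv
import io

def parse_benchmark_csv(text):
    """Group rows between begin_rec/end_rec markers; each data row
    becomes a dict keyed by the header columns."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    groups, current = [], None
    for row in reader:
        key = row[0].strip()
        if key == "begin_rec":
            current = []
        elif key == "end_rec":
            if current is not None:
                groups.append(current)
            current = None
        elif current is not None:
            current.append(dict(zip(header, row)))
    return groups

# Small excerpt of the file above, for demonstration.
sample = """Network model,Release,IE-Type,Platform name,Throughput-OVMS-INT8,Throughput-OV-INT8,Throughput-OVMS-FP32,Throughput-OV-FP32
begin_rec,,,,,,,
bert-small-uncased-whole-word-masking-squad-0002,OV-2022.3-8991,core,Intel® Core™ i3-10100 CPU-only,19.05,19.24,12.84,13.02
end_rec,,,,,,,
"""

groups = parse_benchmark_csv(sample)
row = groups[0][0]
# INT8-vs-FP32 speedup for the plain OpenVINO throughput columns.
speedup = float(row["Throughput-OV-INT8"]) / float(row["Throughput-OV-FP32"])
print(round(speedup, 2))  # → 1.48
```

The same grouping works for the full file, since every model block follows the identical marker convention.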

The diff also adds a number of binary assets as Git LFS pointer files; the first pointer reads:

version https://git-lfs.github.com/spec/v1
oid sha256:4b14b03ebb6a00b5f52a8404282f83d4ad214c8d04aea74738027a775c4ef545
size 100581
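Each of these blobs follows the Git LFS pointer-file format: a `version` spec URL, a sha256 `oid`, and a byte `size`, one key-value pair per line. A small illustrative parser (the `parse_lfs_pointer` name is hypothetical):

```python
def parse_lfs_pointer(text):
    """Split each line of a Git LFS pointer file at the first space
    into a key and value; coerce size to int."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])
    return fields

# The first pointer from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4b14b03ebb6a00b5f52a8404282f83d4ad214c8d04aea74738027a775c4ef545
size 100581
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # → 100581
```

The actual file contents live in LFS storage; only these small pointers are committed to the repository.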

The remaining additions are further Git LFS pointer files for vendored images under docs/_static/images/, including fq.common.svg (15681 bytes) and step2_markup1.svg through step2_markup5.svg (62139 to 68218 bytes), plus eight pointers whose file names are not shown (18108 to 92819 bytes). Some files were not shown because too many files have changed in this diff.