Feature/azaytsev/docs 2021 4 (#6447)

* Added benchmark page changes

* Make the picture smaller

* Added Intel® Iris® Xe MAX Graphics

* Changed the TIP about DL WB

* Added Note on the driver for Intel® Iris® Xe MAX Graphics

* Fixed formatting

* Added the link to Intel® software for general purpose GPU capabilities

* OVSA ovsa_get_started updates

* Fixed link
This commit is contained in:
Andrey Zaytsev
2021-06-29 20:38:51 +03:00
committed by GitHub
parent af2fec9a00
commit a220a0a7af
23 changed files with 544 additions and 393 deletions


@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c8ae479880ab43cdb12eeb2fbaaf3b7861f786413c583eeba906c5fdf4b66730
size 30696
oid sha256:e8a86ea362473121a266c0ec1257c8d428a4bb6438fecdc9d4a4f1ff5cfc9047
size 26220


@@ -19,31 +19,34 @@ All of the performance benchmarks were generated using the open-sourced tool wit
#### 6. What image sizes are used for the classification network models?
The image size used for inference depends on the network being benchmarked. The following table lists the input size for each network model.
| **Model** | **Public Network** | **Task** | **Input Size** (Height x Width) |
|------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|-----------------------------|-----------------------------------|
| [bert-large-uncased-whole-word-masking-squad](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/bert-large-uncased-whole-word-masking-squad-int8-0001) | BERT-large | question / answer | 384 |
| [deeplabv3-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/deeplabv3) | DeepLab v3 Tf |semantic segmentation | 513x513 |
| [densenet-121-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/densenet-121-tf) | Densenet-121 Tf |classification | 224x224 |
| [facenet-20180408-102900-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/facenet-20180408-102900) | FaceNet TF | face recognition | 160x160 |
| [faster_rcnn_resnet50_coco-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/faster_rcnn_resnet50_coco) | Faster RCNN Tf | object detection | 600x1024 |
| [googlenet-v1-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/googlenet-v1-tf) | GoogLeNet_ILSVRC-2012 | classification | 224x224 |
| [inception-v3-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/googlenet-v3) | Inception v3 Tf | classification | 299x299 |
| [mobilenet-ssd-CF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-ssd) | SSD (MobileNet)_COCO-2017_Caffe | object detection | 300x300 |
| [mobilenet-v1-1.0-224-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v1-1.0-224-tf) | MobileNet v1 Tf | classification | 224x224 |
| [mobilenet-v2-1.0-224-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-1.0-224) | MobileNet v2 Tf | classification | 224x224 |
| [mobilenet-v2-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-pytorch) | Mobilenet V2 PyTorch | classification | 224x224 |
| [resnet-18-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-18-pytorch) | ResNet-18 PyTorch | classification | 224x224 |
| [resnet-50-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-pytorch) | ResNet-50 v1 PyTorch | classification | 224x224 |
| [resnet-50-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) | ResNet-50_v1_ILSVRC-2012 | classification | 224x224 |
| [se-resnext-50-CF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/se-resnext-50) | Se-ResNext-50_ILSVRC-2012_Caffe | classification | 224x224 |
| [squeezenet1.1-CF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/squeezenet1.1) | SqueezeNet_v1.1_ILSVRC-2012_Caffe | classification | 227x227 |
| [ssd300-CF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssd300) | SSD (VGG-16)_VOC-2007_Caffe | object detection | 300x300 |
| [yolo_v3-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf) | TF Keras YOLO v3 Modelset | object detection | 300x300 |
| [yolo_v4-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v4-tf) | Yolo-V4 TF | object detection | 608x608 |
| [ssd_mobilenet_v1_coco-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssd_mobilenet_v1_coco) | ssd_mobilenet_v1_coco | object detection | 300x300 |
| [ssdlite_mobilenet_v2-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) | ssd_mobilenet_v2 | object detection | 300x300 |
| [unet-camvid-onnx-0001](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/unet-camvid-onnx-0001/description/unet-camvid-onnx-0001.md) | U-Net | semantic segmentation | 368x480 |
| **Model** | **Public Network** | **Task** | **Input Size** (Height x Width) |
|------------------------------------------------------------------------------------------------------------------------------------|------------------------------------|-----------------------------|-----------------------------------|
| [bert-large-uncased-whole-word-masking-squad](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/bert-large-uncased-whole-word-masking-squad-int8-0001) | BERT-large | question / answer | 384 |
| [brain-tumor-segmentation-0001-MXNET](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/brain-tumor-segmentation-0001) | brain-tumor-segmentation-0001 | semantic segmentation | 128x128x128 |
| [brain-tumor-segmentation-0002-CF2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/brain-tumor-segmentation-0002) | brain-tumor-segmentation-0002 | semantic segmentation | 128x128x128 |
| [deeplabv3-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/deeplabv3) | DeepLab v3 Tf | semantic segmentation | 513x513 |
| [densenet-121-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/densenet-121-tf) | Densenet-121 Tf | classification | 224x224 |
| [facenet-20180408-102900-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/facenet-20180408-102900) | FaceNet TF | face recognition | 160x160 |
| [faster_rcnn_resnet50_coco-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/faster_rcnn_resnet50_coco) | Faster RCNN Tf | object detection | 600x1024 |
| [inception-v4-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/public/googlenet-v4-tf) | Inception v4 Tf (aka GoogleNet-V4) | classification | 299x299 |
| [inception-v3-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/googlenet-v3) | Inception v3 Tf | classification | 299x299 |
| [mobilenet-ssd-CF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-ssd) | SSD (MobileNet)_COCO-2017_Caffe | object detection | 300x300 |
| [mobilenet-v2-1.0-224-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-1.0-224) | MobileNet v2 Tf | classification | 224x224 |
| [mobilenet-v2-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-pytorch) | Mobilenet V2 PyTorch | classification | 224x224 |
| [resnet-18-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-18-pytorch) | ResNet-18 PyTorch | classification | 224x224 |
| [resnet-50-pytorch](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-pytorch) | ResNet-50 v1 PyTorch | classification | 224x224 |
| [resnet-50-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) | ResNet-50_v1_ILSVRC-2012 | classification | 224x224 |
| [se-resnext-50-CF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/se-resnext-50) | Se-ResNext-50_ILSVRC-2012_Caffe | classification | 224x224 |
| [squeezenet1.1-CF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/squeezenet1.1) | SqueezeNet_v1.1_ILSVRC-2012_Caffe | classification | 227x227 |
| [ssd300-CF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssd300) | SSD (VGG-16)_VOC-2007_Caffe | object detection | 300x300 |
| [yolo_v4-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v4-tf) | Yolo-V4 TF | object detection | 608x608 |
| [ssd_mobilenet_v1_coco-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssd_mobilenet_v1_coco) | ssd_mobilenet_v1_coco | object detection | 300x300 |
| [ssdlite_mobilenet_v2-TF](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) | ssdlite_mobilenet_v2 | object detection | 300x300 |
| [unet-camvid-onnx-0001](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/unet-camvid-onnx-0001/description/unet-camvid-onnx-0001.md) | U-Net | semantic segmentation | 368x480 |
| [yolo-v3-tiny-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/public/yolo-v3-tiny-tf) | YOLO v3 Tiny | object detection | 416x416 |
| [ssd-resnet34-1200-onnx](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/public/ssd-resnet34-1200-onnx) | ssd-resnet34 onnx model | object detection | 1200x1200 |
| [vgg19-caffe](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/vgg19-caffe2) | VGG-19 | classification | 224x224 |
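The **Input Size** column uses an `HxW` (or `HxWxD`) convention, with a single number for sequence-length models such as BERT. When scripting over this table, those cells can be parsed with a small helper; this is an illustrative sketch (the function name is hypothetical, not part of any OpenVINO API):

```python
def parse_input_size(size: str) -> tuple:
    """Parse an input-size cell such as '224x224' or '128x128x128'.

    Single-number entries (e.g. '384' for the BERT sequence length)
    come back as a one-element tuple.
    """
    return tuple(int(dim) for dim in size.strip().split("x"))

# Examples drawn from the table above:
print(parse_input_size("224x224"))      # (224, 224)
print(parse_input_size("128x128x128")) # (128, 128, 128)
print(parse_input_size("384"))          # (384,)
```
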
#### 7. Where can I purchase the specific hardware used in the benchmarking?
Intel partners with various vendors all over the world. For a list of equipment makers, visit the [Intel® AI: In Production Partners & Solutions Catalog](https://www.intel.com/content/www/us/en/internet-of-things/ai-in-production/partners-solutions-catalog.html), and see the [Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md) documentation. You can also remotely test and run models before purchasing any hardware by using [Intel® DevCloud for the Edge](http://devcloud.intel.com/edge/).


@@ -29,81 +29,86 @@ Measuring inference performance involves many variables and is extremely use-cas
\htmlonly
<script src="bert-large-uncased-whole-word-masking-squad-int8-0001-ov-2021-3-338-5.js" id="bert-large-uncased-whole-word-masking-squad-int8-0001-ov-2021-3-338-5"></script>
<script src="bert-large-uncased-whole-word-masking-squad-int8-0001-384-ov-2021-4-569.js" id="bert-large-uncased-whole-word-masking-squad-int8-0001-384-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="deeplabv3-tf-ov-2021-3-338-5.js" id="deeplabv3-tf-ov-2021-3-338-5"></script>
<script src="deeplabv3-tf-513x513-ov-2021-4-569.js" id="deeplabv3-tf-513x513-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="densenet-121-tf-ov-2021-3-338-5.js" id="densenet-121-tf-ov-2021-3-338-5"></script>
<script src="densenet-121-tf-224x224-ov-2021-4-569.js" id="densenet-121-tf-224x224-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="faster-rcnn-resnet50-coco-tf-ov-2021-3-338-5.js" id="faster-rcnn-resnet50-coco-tf-ov-2021-3-338-5"></script>
<script src="faster-rcnn-resnet50-coco-tf-600x1024-ov-2021-4-569.js" id="faster-rcnn-resnet50-coco-tf-600x1024-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="googlenet-v1-tf-ov-2021-3-338-5.js" id="googlenet-v1-tf-ov-2021-3-338-5"></script>
<script src="inception-v3-tf-299x299-ov-2021-4-569.js" id="inception-v3-tf-299x299-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="inception-v3-tf-ov-2021-3-338-5.js" id="inception-v3-tf-ov-2021-3-338-5"></script>
<script src="inception-v4-tf-299x299-ov-2021-4-569.js" id="inception-v4-tf-299x299-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="mobilenet-ssd-cf-ov-2021-3-338-5.js" id="mobilenet-ssd-cf-ov-2021-3-338-5"></script>
<script src="mobilenet-ssd-cf-300x300-ov-2021-4-569.js" id="mobilenet-ssd-cf-300x300-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="mobilenet-v1-1-0-224-tf-ov-2021-3-338-5.js" id="mobilenet-v1-1-0-224-tf-ov-2021-3-338-5"></script>
<script src="mobilenet-v2-pytorch-224x224-ov-2021-4-569.js" id="mobilenet-v2-pytorch-224x224-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="mobilenet-v2-pytorch-ov-2021-3-338-5.js" id="mobilenet-v2-pytorch-ov-2021-3-338-5"></script>
<script src="resnet-18-pytorch-224x224-ov-2021-4-569.js" id="resnet-18-pytorch-224x224-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="resnet-18-pytorch-ov-2021-3-338-5.js" id="resnet-18-pytorch-ov-2021-3-338-5"></script>
<script src="resnet-50-tf-224x224-ov-2021-4-569.js" id="resnet-50-tf-224x224-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="resnet-50-tf-ov-2021-3-338-5.js" id="resnet-50-tf-ov-2021-3-338-5"></script>
<script src="se-resnext-50-cf-224x224-ov-2021-4-569.js" id="se-resnext-50-cf-224x224-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="squeezenet1-1-cf-227x227-ov-2021-4-569.js" id="squeezenet1-1-cf-227x227-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="se-resnext-50-cf-ov-2021-3-338-5.js" id="se-resnext-50-cf-ov-2021-3-338-5"></script>
<script src="ssd300-cf-300x300-ov-2021-4-569.js" id="ssd300-cf-300x300-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="squeezenet1-1-cf-ov-2021-3-338-5.js" id="squeezenet1-1-cf-ov-2021-3-338-5"></script>
\endhtmlonly
\htmlonly
<script src="ssd300-cf-ov-2021-3-338-5.js" id="ssd300-cf-ov-2021-3-338-5"></script>
<script src="yolo-v3-tiny-tf-416x416-ov-2021-4-569.js" id="yolo-v3-tiny-tf-416x416-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="yolo-v3-tf-ov-2021-3-338-5.js" id="yolo-v3-tf-ov-2021-3-338-5"></script>
<script src="yolo-v4-tf-608x608-ov-2021-4-569.js" id="yolo-v4-tf-608x608-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="yolo-v4-tf-ov-2021-3-338-5.js" id="yolo-v4-tf-ov-2021-3-338-5"></script>
<script src="unet-camvid-onnx-0001-368x480-ov-2021-4-569.js" id="unet-camvid-onnx-0001-368x480-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="unet-camvid-onnx-0001-ov-2021-3-338-5.js" id="unet-camvid-onnx-0001-ov-2021-3-338-5"></script>
<script src="ssd-resnet34-1200-onnx-1200x1200-ov-2021-4-569.js" id="ssd-resnet34-1200-onnx-1200x1200-ov-2021-4-569"></script>
\endhtmlonly
\htmlonly
<script src="vgg19-caffe-224x224-ov-2021-4-569.js" id="vgg19-caffe-224x224-ov-2021-4-569"></script>
\endhtmlonly
## Platform Configurations
Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2021.3.
Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2021.4.
Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of March 15, 2021 and may not reflect all publicly available updates. See configuration disclosure for details. No product can be absolutely secure.
Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of June 18, 2021 and may not reflect all publicly available updates. See configuration disclosure for details. No product can be absolutely secure.
Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).
@@ -127,15 +132,15 @@ Testing by Intel done on: see test date for each HW platform below.
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc. | Intel Corporation |
| BIOS Version | 0904 | 607 | SE5C620.86B.02.01.<br>0009.092820190230 |
| BIOS Release | April 12, 2019 | May 29, 2020 | September 28, 2019 |
| BIOS Version | 0904 | 607 | SE5C620.86B.02.01.<br>0013.121520200651 |
| BIOS Release | April 12, 2019 | May 29, 2020 | December 15, 2020 |
| BIOS Settings | Select optimized default settings, <br>save & exit | Select optimized default settings, <br>save & exit | Select optimized default settings, <br>change power policy <br>to "performance", <br>save & exit |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests | 4 | 5 | 32 |
| Test Date | March 15, 2021 | March 15, 2021 | March 15, 2021 |
| Power dissipation, TDP in Watt | [71](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html#tab-blade-1-0-1) | [125](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) | [125](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) |
| CPU Price on March 15, 2021, USD<br>Prices may vary | [213](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html) | [539](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) | [1,002](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html) |
| Test Date | June 18, 2021 | June 18, 2021 | June 18, 2021 |
| Rated maximum TDP/socket in Watt | [71](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html#tab-blade-1-0-1) | [125](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) | [125](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) |
| CPU Price/socket on June 21, 2021, USD<br>Prices may vary | [213](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html) | [539](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) |[1,002](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html) |
**CPU Inference Engines (continue)**
@@ -149,84 +154,104 @@ Testing by Intel done on: see test date for each HW platform below.
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | Intel Corporation | Intel Corporation | Intel Corporation |
| BIOS Version | SE5C620.86B.02.01.<br>0009.092820190230 | SE5C620.86B.02.01.<br>0009.092820190230 | WLYDCRB1.SYS.0020.<br>P86.2103050636 |
| BIOS Release | September 28, 2019 | September 28, 2019 | March 5, 2021 |
| BIOS Version | SE5C620.86B.02.01.<br>0013.121520200651 | SE5C620.86B.02.01.<br>0013.121520200651 | WLYDCRB1.SYS.0020.<br>P86.2103050636 |
| BIOS Release | December 15, 2020 | December 15, 2020 | March 5, 2021 |
| BIOS Settings | Select optimized default settings, <br>change power policy to "performance", <br>save & exit | Select optimized default settings, <br>change power policy to "performance", <br>save & exit | Select optimized default settings, <br>change power policy to "performance", <br>save & exit |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests |32 | 52 | 80 |
| Test Date | March 15, 2021 | March 15, 2021 | March 22, 2021 |
| Power dissipation, TDP in Watt | [105](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | [205](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html#tab-blade-1-0-1) | [270](https://ark.intel.com/content/www/us/en/ark/products/212287/intel-xeon-platinum-8380-processor-60m-cache-2-30-ghz.html) |
| CPU Price, USD<br>Prices may vary | [1,349](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html) (on March 15, 2021) | [7,405](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html) (on March 15, 2021) | [8,099](https://ark.intel.com/content/www/us/en/ark/products/212287/intel-xeon-platinum-8380-processor-60m-cache-2-30-ghz.html) (on March 26, 2021) |
| Test Date | June 18, 2021 | June 18, 2021 | June 18, 2021 |
| Rated maximum TDP/socket in Watt | [105](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | [205](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html#tab-blade-1-0-1) | [270](https://ark.intel.com/content/www/us/en/ark/products/212287/intel-xeon-platinum-8380-processor-60m-cache-2-30-ghz.html) |
| CPU Price/socket on June 21, 2021, USD<br>Prices may vary | [1,349](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html) | [7,405](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html) | [8,099](https://ark.intel.com/content/www/us/en/ark/products/212287/intel-xeon-platinum-8380-processor-60m-cache-2-30-ghz.html) |
**CPU Inference Engines (continue)**
| | Intel® Core™ i7-8700T | Intel® Core™ i9-10920X | 11th Gen Intel® Core™ i7-1185G7 |
| -------------------- | ----------------------------------- |--------------------------------------| --------------------------------|
| Motherboard | GIGABYTE* Z370M DS3H-CF | ASUS* PRIME X299-A II | Intel Corporation<br>internal/Reference<br>Validation Platform |
| CPU | Intel® Core™ i7-8700T CPU @ 2.40GHz | Intel® Core™ i9-10920X CPU @ 3.50GHz | 11th Gen Intel® Core™ i7-1185G7 @ 3.00GHz |
| Hyper Threading | ON | ON | ON |
| Turbo Setting | ON | ON | ON |
| Memory | 4 x 16 GB DDR4 2400MHz | 4 x 16 GB DDR4 2666MHz | 2 x 8 GB DDR4 3200MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.8.0-05-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | Intel Corporation |
| BIOS Version | F11 | 505 | TGLSFWI1.R00.3425.<br>A00.2010162309 |
| BIOS Release | March 13, 2019 | December 17, 2019 | October 16, 2020 |
| BIOS Settings | Select optimized default settings, <br>set OS type to "other", <br>save & exit | Default Settings | Default Settings |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests |4 | 24 | 4 |
| Test Date | March 15, 2021 | March 15, 2021 | March 15, 2021 |
| Power dissipation, TDP in Watt | [35](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html#tab-blade-1-0-1) | [165](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [28](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html#tab-blade-1-0-1) |
| CPU Price on March 15, 2021, USD<br>Prices may vary | [303](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html) | [700](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [426](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html#tab-blade-1-0-0) |
| | Intel® Core™ i7-8700T | Intel® Core™ i9-10920X |
| -------------------- | ----------------------------------- |--------------------------------------|
| Motherboard | GIGABYTE* Z370M DS3H-CF | ASUS* PRIME X299-A II |
| CPU | Intel® Core™ i7-8700T CPU @ 2.40GHz | Intel® Core™ i9-10920X CPU @ 3.50GHz |
| Hyper Threading | ON | ON |
| Turbo Setting | ON | ON |
| Memory | 4 x 16 GB DDR4 2400MHz | 4 x 16 GB DDR4 2666MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* |
| BIOS Version | F14c | 1004 |
| BIOS Release | March 23, 2021 | March 19, 2021 |
| BIOS Settings | Select optimized default settings, <br>set OS type to "other", <br>save & exit | Default Settings |
| Batch size | 1 | 1 |
| Precision | INT8 | INT8 |
| Number of concurrent inference requests |4 | 24 |
| Test Date | June 18, 2021 | June 18, 2021 |
| Rated maximum TDP/socket in Watt | [35](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html#tab-blade-1-0-1) | [165](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) |
| CPU Price/socket on June 21, 2021, USD<br>Prices may vary | [303](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html) | [700](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) |
**CPU Inference Engines (continue)**
| | 11th Gen Intel® Core™ i7-1185G7 | 11th Gen Intel® Core™ i7-11850HE |
| -------------------- | --------------------------------|----------------------------------|
| Motherboard | Intel Corporation<br>internal/Reference<br>Validation Platform | Intel Corporation<br>internal/Reference<br>Validation Platform |
| CPU | 11th Gen Intel® Core™ i7-1185G7 @ 3.00GHz | 11th Gen Intel® Core™ i7-11850HE @ 2.60GHz |
| Hyper Threading | ON | ON |
| Turbo Setting | ON | ON |
| Memory | 2 x 8 GB DDR4 3200MHz | 2 x 16 GB DDR4 3200MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04.4 LTS |
| Kernel Version | 5.8.0-05-generic | 5.8.0-050800-generic |
| BIOS Vendor | Intel Corporation | Intel Corporation |
| BIOS Version | TGLSFWI1.R00.3425.<br>A00.2010162309 | TGLIFUI1.R00.4064.<br>A01.2102200132 |
| BIOS Release | October 16, 2020 | February 20, 2021 |
| BIOS Settings | Default Settings | Default Settings |
| Batch size | 1 | 1 |
| Precision | INT8 | INT8 |
| Number of concurrent inference requests |4 | 4 |
| Test Date | June 18, 2021 | June 18, 2021 |
| Rated maximum TDP/socket in Watt | [28](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html) | [45](https://ark.intel.com/content/www/us/en/ark/products/213799/intel-core-i7-11850h-processor-24m-cache-up-to-4-80-ghz.html) |
| CPU Price/socket on June 21, 2021, USD<br>Prices may vary | [426](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html) | [395](https://ark.intel.com/content/www/us/en/ark/products/213799/intel-core-i7-11850h-processor-24m-cache-up-to-4-80-ghz.html) |
**CPU Inference Engines (continue)**
| | Intel® Core™ i3-8100 | Intel® Core™ i5-8500 | Intel® Core™ i5-10500TE |
| -------------------- |----------------------------------- | ---------------------------------- | ----------------------------------- |
| Motherboard | GIGABYTE* Z390 UD | ASUS* PRIME Z370-A | GIGABYTE* Z490 AORUS PRO AX |
| CPU | Intel® Core™ i3-8100 CPU @ 3.60GHz | Intel® Core™ i5-8500 CPU @ 3.00GHz | Intel® Core™ i5-10500TE CPU @ 2.30GHz |
| Hyper Threading | OFF | OFF | ON |
| Turbo Setting | OFF | ON | ON |
| Memory | 4 x 8 GB DDR4 2400MHz | 2 x 16 GB DDR4 2666MHz | 2 x 16 GB DDR4 @ 2666MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | American Megatrends Inc.* |
| BIOS Version | F8 | 2401 | F3 |
| BIOS Release | May 24, 2019 | July 12, 2019 | March 25, 2020 |
| BIOS Settings | Select optimized default settings, <br> set OS type to "other", <br>save & exit | Select optimized default settings, <br>save & exit | Select optimized default settings, <br>set OS type to "other", <br>save & exit |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests | 4 | 3 | 4 |
| Test Date | June 18, 2021 | June 18, 2021 | June 18, 2021 |
| Rated maximum TDP/socket in Watt | [65](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html#tab-blade-1-0-1)| [65](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html#tab-blade-1-0-1)| [35](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) |
| CPU Price/socket on June 21, 2021, USD<br>Prices may vary | [117](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html) | [192](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html) | [195](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) |
**CPU Inference Engines (continue)**
| | Intel® Core™ i5-8500 | Intel® Core™ i5-10500TE |
| -------------------- | ---------------------------------- | ----------------------------------- |
| Motherboard | ASUS* PRIME Z370-A | GIGABYTE* Z490 AORUS PRO AX |
| CPU | Intel® Core™ i5-8500 CPU @ 3.00GHz | Intel® Core™ i5-10500TE CPU @ 2.30GHz |
| Hyper Threading | OFF | ON |
| Turbo Setting | ON | ON |
| Memory | 2 x 16 GB DDR4 2666MHz | 2 x 16 GB DDR4 @ 2666MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* |
| BIOS Version | 2401 | F3 |
| BIOS Release | July 12, 2019 | March 25, 2020 |
| BIOS Settings | Select optimized default settings, <br>save & exit | Select optimized default settings, <br>set OS type to "other", <br>save & exit |
| Batch size | 1 | 1 |
| Precision | INT8 | INT8 |
| Number of concurrent inference requests | 3 | 4 |
| Test Date | March 15, 2021 | March 15, 2021 |
| Power dissipation, TDP in Watt | [65](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html#tab-blade-1-0-1)| [35](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) |
| CPU Price on March 15, 2021, USD<br>Prices may vary | [192](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html) | [195](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) |
**CPU Inference Engines (continue)**
| | Intel Atom® x5-E3940 | Intel Atom® x6425RE | Intel® Core™ i3-8100 |
| -------------------- | --------------------------------------|------------------------------- |----------------------------------- |
| Motherboard | | Intel Corporation /<br>ElkhartLake LPDDR4x T3 CRB | GIGABYTE* Z390 UD |
| CPU | Intel Atom® Processor E3940 @ 1.60GHz | Intel Atom® x6425RE<br>Processor @ 1.90GHz | Intel® Core™ i3-8100 CPU @ 3.60GHz |
| Hyper Threading | OFF | OFF | OFF |
| Turbo Setting | ON | ON | OFF |
| Memory | 1 x 8 GB DDR3 1600MHz | 2 x 4GB DDR4 3200 MHz | 4 x 8 GB DDR4 2400MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS |
| Kernel Version | 5.3.0-24-generic | 5.8.0-050800-generic | 5.3.0-24-generic |
| BIOS Vendor | American Megatrends Inc.* | Intel Corporation | American Megatrends Inc.* |
| BIOS Version | 5.12 | EHLSFWI1.R00.2463.<br>A03.2011200425 | F8 |
| BIOS Release | September 6, 2017 | November 22, 2020 | May 24, 2019 |
| BIOS Settings | Default settings | Default settings | Select optimized default settings, <br> set OS type to "other", <br>save & exit |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests | 4 | 4 | 4 |
| Test Date | March 15, 2021 | March 15, 2021 | March 15, 2021 |
| Power dissipation, TDP in Watt | [9.5](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [12](https://ark.intel.com/content/www/us/en/ark/products/207899/intel-atom-x6425re-processor-1-5m-cache-1-90-ghz.html) | [65](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html#tab-blade-1-0-1)|
| CPU Price, USD<br>Prices may vary | [34](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) (on March 15th, 2021) | [59](https://ark.intel.com/content/www/us/en/ark/products/207899/intel-atom-x6425re-processor-1-5m-cache-1-90-ghz.html) (on March 26th, 2021) | [117](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html) (on March 15th, 2021) |
| | Intel Atom® x5-E3940 | Intel Atom® x6425RE | Intel® Celeron® 6305E |
| -------------------- | --------------------------------------|------------------------------- |----------------------------------|
| Motherboard | Intel Corporation<br>internal/Reference<br>Validation Platform | Intel Corporation<br>internal/Reference<br>Validation Platform | Intel Corporation<br>internal/Reference<br>Validation Platform |
| CPU | Intel Atom® Processor E3940 @ 1.60GHz | Intel Atom® x6425RE<br>Processor @ 1.90GHz | Intel® Celeron®<br>6305E @ 1.80GHz |
| Hyper Threading | OFF | OFF | OFF |
| Turbo Setting | ON | ON | ON |
| Memory | 1 x 8 GB DDR3 1600MHz | 2 x 4GB DDR4 3200MHz | 2 x 8 GB DDR4 3200MHz |
| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu 18.04.5 LTS |
| Kernel Version | 5.3.0-24-generic | 5.8.0-050800-generic | 5.8.0-050800-generic |
| BIOS Vendor | American Megatrends Inc.* | Intel Corporation | Intel Corporation |
| BIOS Version | 5.12 | EHLSFWI1.R00.2463.<br>A03.2011200425 | TGLIFUI1.R00.4064.A02.2102260133 |
| BIOS Release | September 6, 2017 | November 22, 2020 | February 26, 2021 |
| BIOS Settings | Default settings | Default settings | Default settings |
| Batch size | 1 | 1 | 1 |
| Precision | INT8 | INT8 | INT8 |
| Number of concurrent inference requests | 4 | 4 | 4|
| Test Date | June 18, 2021 | June 18, 2021 | June 18, 2021 |
| Rated maximum TDP/socket in Watt | [9.5](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [12](https://ark.intel.com/content/www/us/en/ark/products/207899/intel-atom-x6425re-processor-1-5m-cache-1-90-ghz.html) | [15](https://ark.intel.com/content/www/us/en/ark/products/208072/intel-celeron-6305e-processor-4m-cache-1-80-ghz.html)|
| CPU Price/socket on June 21, 2021, USD<br>Prices may vary | [34](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [59](https://ark.intel.com/content/www/us/en/ark/products/207899/intel-atom-x6425re-processor-1-5m-cache-1-90-ghz.html) |[107](https://ark.intel.com/content/www/us/en/ark/products/208072/intel-celeron-6305e-processor-4m-cache-1-80-ghz.html) |
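The tables above report TDP and CPU price alongside the test configuration because the companion charts plot performance per watt and per dollar. The sketch below shows how those derived metrics are computed; the FPS values are hypothetical placeholders, only the TDP and price figures come from the table above.

```python
# Value-metric sketch: fps figures are illustrative, NOT measured results;
# tdp_w and price_usd are taken from the configuration table above.
platforms = {
    "Intel Atom x5-E3940":  {"fps": 10.0, "tdp_w": 9.5,  "price_usd": 34},
    "Intel Celeron 6305E":  {"fps": 24.0, "tdp_w": 15.0, "price_usd": 107},
}

for name, p in platforms.items():
    fps_per_watt = p["fps"] / p["tdp_w"]      # efficiency metric
    fps_per_dollar = p["fps"] / p["price_usd"]  # value metric
    print(f"{name}: {fps_per_watt:.2f} FPS/W, {fps_per_dollar:.3f} FPS/$")
```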
@@ -239,8 +264,8 @@ Testing by Intel done on: see test date for each HW platform below.
| Batch size | 1 | 1 |
| Precision | FP16 | FP16 |
| Number of concurrent inference requests | 4 | 32 |
| Power dissipation, TDP in Watt | 2.5 | [30](https://www.arrow.com/en/products/mustang-v100-mx8-r10/iei-technology?gclid=Cj0KCQiA5bz-BRD-ARIsABjT4ng1v1apmxz3BVCPA-tdIsOwbEjTtqnmp_rQJGMfJ6Q2xTq6ADtf9OYaAhMUEALw_wcB) |
| CPU Price, USD<br>Prices may vary | [69](https://ark.intel.com/content/www/us/en/ark/products/140109/intel-neural-compute-stick-2.html) (from March 15, 2021) | [1180](https://www.arrow.com/en/products/mustang-v100-mx8-r10/iei-technology?gclid=Cj0KCQiA5bz-BRD-ARIsABjT4ng1v1apmxz3BVCPA-tdIsOwbEjTtqnmp_rQJGMfJ6Q2xTq6ADtf9OYaAhMUEALw_wcB) (from March 15, 2021) |
| Rated maximum TDP/socket in Watt | 2.5 | [30](https://www.arrow.com/en/products/mustang-v100-mx8-r10/iei-technology?gclid=Cj0KCQiA5bz-BRD-ARIsABjT4ng1v1apmxz3BVCPA-tdIsOwbEjTtqnmp_rQJGMfJ6Q2xTq6ADtf9OYaAhMUEALw_wcB) |
| CPU Price/socket on June 21, 2021, USD<br>Prices may vary | [69](https://ark.intel.com/content/www/us/en/ark/products/140109/intel-neural-compute-stick-2.html) | [425](https://www.arrow.com/en/products/mustang-v100-mx8-r10/iei-technology?gclid=Cj0KCQiA5bz-BRD-ARIsABjT4ng1v1apmxz3BVCPA-tdIsOwbEjTtqnmp_rQJGMfJ6Q2xTq6ADtf9OYaAhMUEALw_wcB) |
| Host Computer | Intel® Core™ i7 | Intel® Core™ i5 |
| Motherboard | ASUS* Z370-A II | Uzelinfo* / US-E1300 |
| CPU | Intel® Core™ i7-8700 CPU @ 3.20GHz | Intel® Core™ i5-6600 CPU @ 3.30GHz |
@@ -252,9 +277,9 @@ Testing by Intel done on: see test date for each HW platform below.
| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* |
| BIOS Version | 411 | 5.12 |
| BIOS Release | September 21, 2018 | September 21, 2018 |
| Test Date | March 15, 2021 | March 15, 2021 |
| Test Date | June 18, 2021 | June 18, 2021 |
Please follow this link for more detailed configuration descriptions: [Configuration Details](https://docs.openvinotoolkit.org/resources/benchmark_files/system_configurations_2021.3.html)
Please follow this link for more detailed configuration descriptions: [Configuration Details](https://docs.openvinotoolkit.org/resources/benchmark_files/system_configurations_2021.4.html)
\htmlonly
<style>

@@ -18,20 +18,98 @@ OpenVINO™ Model Server is measured in multiple-client-single-server configurat
* **Execution Controller** is launched on the client platform. It is responsible for synchronization of the whole measurement process, downloading metrics from the load balancer, and presenting the final report of the execution.
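The charts below report aggregate throughput across all concurrent clients. As a minimal sketch (the counters and helper below are illustrative, not part of the actual benchmark harness), the controller's summary metric can be reduced from per-client counters like this:

```python
# Hypothetical per-client results: (frames processed, wall-clock seconds).
# In the real setup these counters come from the load balancer's metrics.
client_results = [(1200, 60.0), (1180, 60.0), (1210, 60.0), (1190, 60.0)]

def aggregate_throughput(results):
    """Total frames served per second across all concurrent clients."""
    total_frames = sum(frames for frames, _ in results)
    duration = max(secs for _, secs in results)  # clients run in parallel
    return total_frames / duration

print(f"aggregate: {aggregate_throughput(client_results):.1f} FPS")
```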
## 3D U-Net (FP32)
![](../img/throughput_ovms_3dunet.png)
## resnet-50-TF (INT8)
![](../img/throughput_ovms_resnet50_int8.png)
## resnet-50-TF (FP32)
![](../img/throughput_ovms_resnet50_fp32.png)
## bert-large-uncased-whole-word-masking-squad-int8-0001 (INT8)
![](../img/throughput_ovms_bertlarge_int8.png)
![](../img/throughput_ovms_resnet50_fp32_bs_1.png)
## 3D U-Net (FP32)
![](../img/throughput_ovms_3dunet.png)
## yolo-v3-tf (FP32)
![](../img/throughput_ovms_yolo3_fp32.png)
## yolo-v3-tiny-tf (FP32)
![](../img/throughput_ovms_yolo3tiny_fp32.png)
## yolo-v4-tf (FP32)
![](../img/throughput_ovms_yolo4_fp32.png)
## bert-small-uncased-whole-word-masking-squad-0002 (FP32)
![](../img/throughput_ovms_bertsmall_fp32.png)
## bert-small-uncased-whole-word-masking-squad-int8-0002 (INT8)
![](../img/throughput_ovms_bertsmall_int8.png)
## bert-large-uncased-whole-word-masking-squad-0001 (FP32)
![](../img/throughput_ovms_bertlarge_fp32.png)
## bert-large-uncased-whole-word-masking-squad-int8-0001 (INT8)
![](../img/throughput_ovms_bertlarge_int8.png)
## mobilenet-v3-large-1.0-224-tf (FP32)
![](../img/throughput_ovms_mobilenet3large_fp32.png)
## ssd_mobilenet_v1_coco (FP32)
![](../img/throughput_ovms_ssdmobilenet1_fp32.png)
## Platform Configurations
OpenVINO™ Model Server performance benchmark numbers are based on release 2021.3. Performance results are based on testing as of March 15, 2021 and may not reflect all publicly available updates.
OpenVINO™ Model Server performance benchmark numbers are based on release 2021.4. Performance results are based on testing as of June 17, 2021 and may not reflect all publicly available updates.
**Platform with Intel® Xeon® Platinum 8260M**
<table>
<tr>
<th></th>
<th><strong>Server Platform</strong></th>
<th><strong>Client Platform</strong></th>
</tr>
<tr>
<td><strong>Motherboard</strong></td>
<td>Inspur YZMB-00882-104 NF5280M5</td>
<td>Intel® Server Board S2600WF H48104-872</td>
</tr>
<tr>
<td><strong>Memory</strong></td>
<td>Samsung 16 x 16GB @ 2666 MT/s DDR4</td>
<td>Hynix 16 x 16GB @ 2666 MT/s DDR4</td>
</tr>
<tr>
<td><strong>CPU</strong></td>
<td>Intel® Xeon® Platinum 8260M CPU @ 2.40GHz</td>
<td>Intel® Xeon® Gold 6252 CPU @ 2.10GHz</td>
</tr>
<tr>
<td><strong>Selected CPU Flags</strong></td>
<td>Hyper Threading, Turbo Boost, DL Boost</td>
<td>Hyper Threading, Turbo Boost, DL Boost</td>
</tr>
<tr>
<td><strong>CPU Thermal Design Power</strong></td>
<td>162 W</td>
<td>150 W</td>
</tr>
<tr>
<td><strong>Operating System</strong></td>
<td>Ubuntu 20.04.2 LTS</td>
<td>Ubuntu 20.04.2 LTS</td>
</tr>
<tr>
<td><strong>Kernel Version</strong></td>
<td>5.4.0-54-generic</td>
<td>5.4.0-65-generic</td>
</tr>
<tr>
<td><strong>BIOS Vendor</strong></td>
<td>American Megatrends Inc.</td>
<td>Intel® Corporation</td>
</tr>
<tr>
<td><strong>BIOS Version & Release</strong></td>
<td>4.1.16, date: 06/23/2020</td>
<td>SE5C620.86B.02.01, date: 03/26/2020</td>
</tr>
<tr>
<td><strong>Docker Version</strong></td>
<td>20.10.3</td>
<td>20.10.3</td>
</tr>
<tr>
<td><strong>Network Speed</strong></td>
<td colspan="2">40 Gb/s</td>
</tr>
</table>
**Platform with Intel® Xeon® Gold 6252**
@@ -65,7 +143,7 @@ OpenVINO™ Model Server performance benchmark numbers are based on release 2021
<td><strong>CPU Thermal Design Power</strong></td>
<td>150 W</td>
<td>162 W</td>
</tr>
</tr>
<tr>
<td><strong>Operating System</strong></td>
<td>Ubuntu 20.04.2 LTS</td>

@@ -20,25 +20,25 @@ The table below illustrates the speed-up factor for the performance gain by swit
<td>bert-large-<br>uncased-whole-word-<br>masking-squad-0001</td>
<td>SQuAD</td>
<td>1.6</td>
<td>3.0</td>
<td>1.6</td>
<td>2.3</td>
<td>3.1</td>
<td>1.5</td>
<td>2.5</td>
</tr>
<tr>
<td>brain-tumor-<br>segmentation-<br>0001-MXNET</td>
<td>BraTS</td>
<td>1.6</td>
<td>1.9</td>
<td>1.7</td>
<td>1.7</td>
<td>2.0</td>
<td>1.8</td>
<td>1.8</td>
</tr>
<tr>
<td>deeplabv3-TF</td>
<td>VOC 2012<br>Segmentation</td>
<td>2.1</td>
<td>3.1</td>
<td>3.1</td>
<td>1.9</td>
<td>3.0</td>
<td>2.8</td>
<td>3.1</td>
</tr>
<tr>
<td>densenet-121-TF</td>
@@ -51,7 +51,7 @@ The table below illustrates the speed-up factor for the performance gain by swit
<tr>
<td>facenet-<br>20180408-<br>102900-TF</td>
<td>LFW</td>
<td>2.0</td>
<td>2.1</td>
<td>3.6</td>
<td>2.2</td>
<td>3.7</td>
@@ -60,17 +60,9 @@ The table below illustrates the speed-up factor for the performance gain by swit
<td>faster_rcnn_<br>resnet50_coco-TF</td>
<td>MS COCO</td>
<td>1.9</td>
<td>3.8</td>
<td>3.7</td>
<td>2.0</td>
<td>3.5</td>
</tr>
<tr>
<td>googlenet-v1-TF</td>
<td>ImageNet</td>
<td>1.8</td>
<td>3.6</td>
<td>2.0</td>
<td>3.9</td>
<td>3.4</td>
</tr>
<tr>
<td>inception-v3-TF</td>
@@ -78,24 +70,16 @@ The table below illustrates the speed-up factor for the performance gain by swit
<td>1.9</td>
<td>3.8</td>
<td>2.0</td>
<td>4.0</td>
<td>4.1</td>
</tr>
<tr>
<td>mobilenet-<br>ssd-CF</td>
<td>VOC2012</td>
<td>1.7</td>
<td>1.6</td>
<td>3.1</td>
<td>1.8</td>
<td>1.9</td>
<td>3.6</td>
</tr>
<tr>
<td>mobilenet-v1-1.0-<br>224-TF</td>
<td>ImageNet</td>
<td>1.7</td>
<td>3.1</td>
<td>1.8</td>
<td>4.1</td>
</tr>
<tr>
<td>mobilenet-v2-1.0-<br>224-TF</td>
<td>ImageNet</td>
@@ -107,10 +91,10 @@ The table below illustrates the speed-up factor for the performance gain by swit
<tr>
<td>mobilenet-v2-<br>pytorch</td>
<td>ImageNet</td>
<td>1.6</td>
<td>1.7</td>
<td>2.4</td>
<td>1.9</td>
<td>3.9</td>
<td>4.0</td>
</tr>
<tr>
<td>resnet-18-<br>pytorch</td>
@@ -124,7 +108,7 @@ The table below illustrates the speed-up factor for the performance gain by swit
<td>resnet-50-<br>pytorch</td>
<td>ImageNet</td>
<td>1.9</td>
<td>3.7</td>
<td>3.6</td>
<td>2.0</td>
<td>3.9</td>
</tr>
@@ -147,16 +131,16 @@ The table below illustrates the speed-up factor for the performance gain by swit
<tr>
<td>ssd_mobilenet_<br>v1_coco-tf</td>
<td>VOC2012</td>
<td>1.7</td>
<td>3.0</td>
<td>1.9</td>
<td>1.8</td>
<td>3.1</td>
<td>2.0</td>
<td>3.6</td>
</tr>
<tr>
<td>ssd300-CF</td>
<td>MS COCO</td>
<td>1.8</td>
<td>4.4</td>
<td>4.2</td>
<td>1.9</td>
<td>3.9</td>
</tr>
@@ -165,33 +149,57 @@ The table below illustrates the speed-up factor for the performance gain by swit
<td>MS COCO</td>
<td>1.7</td>
<td>2.5</td>
<td>2.2</td>
<td>3.4</td>
</tr>
<tr>
<td>yolo_v3-TF</td>
<td>MS COCO</td>
<td>1.8</td>
<td>4.0</td>
<td>1.9</td>
<td>3.9</td>
<td>2.4</td>
<td>3.5</td>
</tr>
<tr>
<td>yolo_v4-TF</td>
<td>MS COCO</td>
<td>1.7</td>
<td>1.9</td>
<td>3.6</td>
<td>2.0</td>
<td>3.4</td>
<td>1.7</td>
<td>2.8</td>
</tr>
<tr>
<td>unet-camvid-onnx-0001</td>
<td>MS COCO</td>
<td>1.6</td>
<td>3.8</td>
<td>1.6</td>
<td>1.7</td>
<td>3.9</td>
<td>1.7</td>
<td>3.7</td>
</tr>
<tr>
<td>ssd-resnet34-<br>1200-onnx</td>
<td>MS COCO</td>
<td>1.7</td>
<td>4.0</td>
<td>1.7</td>
<td>3.4</td>
</tr>
<tr>
<td>googlenet-v4-tf</td>
<td>ImageNet</td>
<td>1.9</td>
<td>3.9</td>
<td>2.0</td>
<td>4.1</td>
</tr>
<tr>
<td>vgg19-caffe</td>
<td>ImageNet</td>
<td>1.9</td>
<td>4.7</td>
<td>2.0</td>
<td>4.5</td>
</tr>
<tr>
<td>yolo-v3-tiny-tf</td>
<td>MS COCO</td>
<td>1.7</td>
<td>3.4</td>
<td>1.9</td>
<td>3.5</td>
</tr>
</table>
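Each cell in the table above is the ratio of INT8 throughput to FP32 throughput for the same model on the same hardware. A minimal sketch of how such a factor is derived (the FPS inputs below are hypothetical, not measured values):

```python
def speedup_factor(int8_fps: float, fp32_fps: float) -> float:
    """Performance gain from switching a model from FP32 to INT8."""
    return int8_fps / fp32_fps

# Hypothetical throughput pair; a reported "3.8" means INT8 ran 3.8x faster.
print(round(speedup_factor(int8_fps=1140.0, fp32_fps=300.0), 1))  # prints 3.8
```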
The following table shows the absolute accuracy drop that is calculated as the difference in accuracy between the FP32 representation of a model and its INT8 representation.
@@ -217,18 +225,18 @@ The following table shows the absolute accuracy drop that is calculated as the d
<td>SQuAD</td>
<td>F1</td>
<td>0.62</td>
<td>0.88</td>
<td>0.52</td>
<td>0.71</td>
<td>0.62</td>
<td>0.62</td>
</tr>
<tr>
<td>brain-tumor-<br>segmentation-<br>0001-MXNET</td>
<td>BraTS</td>
<td>Dice-index@ <br>Mean@ <br>Overall Tumor</td>
<td>0.09</td>
<td>0.08</td>
<td>0.10</td>
<td>0.11</td>
<td>0.09</td>
<td>0.10</td>
<td>0.08</td>
</tr>
<tr>
<td>deeplabv3-TF</td>
@@ -243,10 +251,10 @@ The following table shows the absolute accuracy drop that is calculated as the d
<td>densenet-121-TF</td>
<td>ImageNet</td>
<td>acc@top-1</td>
<td>0.54</td>
<td>0.57</td>
<td>0.57</td>
<td>0.54</td>
<td>0.49</td>
<td>0.56</td>
<td>0.56</td>
<td>0.49</td>
</tr>
<tr>
<td>facenet-<br>20180408-<br>102900-TF</td>
@@ -261,46 +269,28 @@ The following table shows the absolute accuracy drop that is calculated as the d
<td>faster_rcnn_<br>resnet50_coco-TF</td>
<td>MS COCO</td>
<td>coco_<br>precision</td>
<td>0.04</td>
<td>0.04</td>
<td>0.04</td>
<td>0.04</td>
</tr>
<tr>
<td>googlenet-v1-TF</td>
<td>ImageNet</td>
<td>acc@top-1</td>
<td>0.01</td>
<td>0.00</td>
<td>0.00</td>
<td>0.01</td>
<td>0.09</td>
<td>0.09</td>
<td>0.09</td>
<td>0.09</td>
</tr>
<tr>
<td>inception-v3-TF</td>
<td>ImageNet</td>
<td>acc@top-1</td>
<td>0.04</td>
<td>0.00</td>
<td>0.00</td>
<td>0.04</td>
<td>0.02</td>
<td>0.01</td>
<td>0.01</td>
<td>0.02</td>
</tr>
<tr>
<td>mobilenet-<br>ssd-CF</td>
<td>VOC2012</td>
<td>mAP</td>
<td>0.77</td>
<td>0.77</td>
<td>0.77</td>
<td>0.77</td>
</tr>
<tr>
<td>mobilenet-v1-1.0-<br>224-TF</td>
<td>ImageNet</td>
<td>acc@top-1</td>
<td>0.26</td>
<td>0.28</td>
<td>0.28</td>
<td>0.26</td>
<td>0.06</td>
<td>0.04</td>
<td>0.04</td>
<td>0.06</td>
</tr>
<tr>
<td>mobilenet-v2-1.0-<br>224-TF</td>
@@ -342,37 +332,37 @@ The following table shows the absolute accuracy drop that is calculated as the d
<td>resnet-50-<br>TF</td>
<td>ImageNet</td>
<td>acc@top-1</td>
<td>0.10</td>
<td>0.08</td>
<td>0.08</td>
<td>0.10</td>
<td>0.11</td>
<td>0.11</td>
<td>0.11</td>
<td>0.11</td>
</tr>
<tr>
<td>squeezenet1.1-<br>CF</td>
<td>ImageNet</td>
<td>acc@top-1</td>
<td>0.63</td>
<td>0.64</td>
<td>0.66</td>
<td>0.66</td>
<td>0.63</td>
<td>0.64</td>
</tr>
<tr>
<td>ssd_mobilenet_<br>v1_coco-tf</td>
<td>VOC2012</td>
<td>COCO mAp</td>
<td>0.18</td>
<td>3.06</td>
<td>3.06</td>
<td>0.18</td>
<td>0.17</td>
<td>2.96</td>
<td>2.96</td>
<td>0.17</td>
</tr>
<tr>
<td>ssd300-CF</td>
<td>MS COCO</td>
<td>COCO mAp</td>
<td>0.05</td>
<td>0.05</td>
<td>0.05</td>
<td>0.05</td>
<td>0.18</td>
<td>3.06</td>
<td>3.06</td>
<td>0.18</td>
</tr>
<tr>
<td>ssdlite_<br>mobilenet_<br>v2-TF</td>
@@ -383,32 +373,59 @@ The following table shows the absolute accuracy drop that is calculated as the d
<td>0.43</td>
<td>0.11</td>
</tr>
<tr>
<td>yolo_v3-TF</td>
<td>MS COCO</td>
<td>COCO mAp</td>
<td>0.11</td>
<td>0.24</td>
<td>0.24</td>
<td>0.11</td>
</tr>
<tr>
<td>yolo_v4-TF</td>
<td>MS COCO</td>
<td>COCO mAp</td>
<td>0.01</td>
<td>0.09</td>
<td>0.09</td>
<td>0.01</td>
<td>0.06</td>
<td>0.03</td>
<td>0.03</td>
<td>0.06</td>
</tr>
<tr>
<td>unet-camvid-<br>onnx-0001</td>
<td>MS COCO</td>
<td>COCO mAp</td>
<td>0.29</td>
<td>0.29</td>
<td>0.31</td>
<td>0.31</td>
<td>0.31</td>
<td>0.31</td>
<td>0.29</td>
</tr>
<tr>
<td>ssd-resnet34-<br>1200-onnx</td>
<td>MS COCO</td>
<td>COCO mAp</td>
<td>0.02</td>
<td>0.03</td>
<td>0.03</td>
<td>0.02</td>
</tr>
<tr>
<td>googlenet-v4-tf</td>
<td>ImageNet</td>
<td>COCO mAp</td>
<td>0.08</td>
<td>0.06</td>
<td>0.06</td>
<td>0.06</td>
</tr>
<tr>
<td>vgg19-caffe</td>
<td>ImageNet</td>
<td>COCO mAp</td>
<td>0.02</td>
<td>0.04</td>
<td>0.04</td>
<td>0.02</td>
</tr>
<tr>
<td>yolo-v3-tiny-tf</td>
<td>MS COCO</td>
<td>COCO mAp</td>
<td>0.02</td>
<td>0.6</td>
<td>0.6</td>
<td>0.02</td>
</tr>
</table>
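The drop values above are absolute differences between the FP32 metric and the INT8 metric for the same model and dataset. A short sketch, with hypothetical accuracy inputs:

```python
def absolute_accuracy_drop(fp32_metric: float, int8_metric: float) -> float:
    """Absolute accuracy difference between the FP32 and INT8 representations."""
    return abs(fp32_metric - int8_metric)

# Hypothetical acc@top-1 pair, in percentage points.
print(round(absolute_accuracy_drop(76.19, 76.08), 2))  # prints 0.11
```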

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e14f77f61f12c96ccf302667d51348a1e03579679155199910e3ebdf7d6adf06
size 37915
oid sha256:8cbe1a1c1dc477edc6909a011c1467b375f4f2ba868007befa4b2eccbaa2f2b1
size 28229

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e5a472a62de53998194bc1471539139807e00cbb75fd9edc605e7ed99b5630af
size 18336
oid sha256:d4cbf542d393f920c5731ce973f09836e08aaa35987ef0a19355e3e895179936
size 17981

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2f7c58da93fc7966e154bdade48d408401b097f4b0306b7c85aa4256ad72b59d
size 18118
oid sha256:c57a6e967b6515a34e0c62c4dd850bebc2e009f75f17ddd0a5d74a1028e84668
size 19028

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:104d8cd5eac2d1714db85df9cba5c2cfcc113ec54d428cd6e979e75e10473be6
size 17924
oid sha256:690e57d94f5c0c0ea31fc04a214b56ab618eac988a72c89b3542f52b4f44d513
size 19507

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5663cfab7a1611e921fc0b775d946009d6f7a7019e5e9dc6ebe96ccb6c6f1d7f
size 20145

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aad18293f64089992862e6a17b5271cc982da89b6b7493516a59252368945c87
size 20998

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70daf9e0016e56d8c7bb2f0efe2ac592434962bb8bea95f9120acd7b14d8b5b0
size 21763

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3db1f5acdad5880e44965eb71a33ac47aee331ee2f4318e2214786ea5a1e5289
size 21923

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67a7444a934da6e70c77c937fc7a830d1ba2fbde99f3f3260479c39b9b7b1cee
size 20279

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:32116d6d1acc20d8cb2fa10e290e052e3146ba1290f1c5e4aaf16a85388b6ec6
size 19387
oid sha256:5d96e146a1b7d4e48b683de3ed7665c41244ec68cdad94eb79ac497948af9b08
size 21255

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d1ab823ea109f908b3e38bf88a7004cfdc374746b5ec4870547fade0f7684035
size 20084

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b16674fabd80d73e455c276ef262f3d0a1cf6b00152340dd4e2645330f358432
size 19341

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48bc60c34f141a3cb232ae8370468f2861ac36cb926be981ff3153f05d4d5187
size 19992

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f472d1fa6058d7ce988e9a2da8b5c6c106d8aa7e90bf2d383d2eaf685a725ab4
size 19107

@@ -5,14 +5,12 @@
> - If you are using Intel® Distribution of OpenVINO™ toolkit on Windows\* OS, see the [Installation Guide for Windows*](installing-openvino-windows.md).
> - CentOS and Yocto installations will require some modifications that are not covered in this guide.
> - An internet connection is required to follow the steps in this guide.
> - [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
> **TIP**: If you want to [quick start with OpenVINO™ toolkit](@ref
> openvino_docs_get_started_get_started_dl_workbench), you can use
> the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench). DL Workbench is the OpenVINO™ toolkit UI
> that enables you to import a
> model, analyze its performance and accuracy, visualize the outputs, optimize and prepare the model for deployment
> on various Intel® platforms. Begin your OpenVINO™ journey with [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Install).
> **TIP**: You can quick start with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref
> openvino_docs_get_started_get_started_dl_workbench) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is an OpenVINO™ UI that enables you to
> import a model, analyze its performance and accuracy, visualize the outputs, optimize and prepare the model for
> deployment on various Intel® platforms.
## Introduction
@@ -20,7 +18,7 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio
The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
- Enables CNN-based deep learning inference on the edge
- Supports heterogeneous execution across Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
- Supports heterogeneous execution across Intel® CPU, Intel® GPU, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
- Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
- Includes optimized calls for computer vision standards including OpenCV\* and OpenCL™
@@ -47,6 +45,7 @@ The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
* Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
* Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
* Intel Pentium® processor N4200/5, N3350/5, or N3450/5 with Intel® HD Graphics
* Intel® Iris® Xe MAX Graphics
* Intel® Neural Compute Stick 2
* Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
@@ -63,6 +62,10 @@ The Intel® Distribution of OpenVINO™ toolkit for Linux\*:
- Ubuntu 20.04.0 long-term support (LTS), 64-bit
- CentOS 7.6, 64-bit (for target only)
- Yocto Project v3.0, 64-bit (for target only and requires modifications)
- For deployment scenarios on Red Hat* Enterprise Linux* 8.2 (64 bit), you can use the Intel® Distribution of OpenVINO™ toolkit run-time package that includes the Inference Engine core libraries, nGraph, OpenCV, Python bindings, and the CPU and GPU plugins. The package is available as:
- [Downloadable archive](https://storage.openvinotoolkit.org/repositories/openvino/packages/2021.3/l_openvino_toolkit_runtime_rhel8_p_2021.3.394.tgz)
- [PyPi package](https://pypi.org/project/openvino/)
- [Docker image](https://catalog.redhat.com/software/containers/intel/openvino-runtime/606ff4d7ecb5241699188fb3)
## Overview
@@ -279,20 +282,22 @@ The steps in this section are required only if you want to enable the toolkit co
cd /opt/intel/openvino_2021/install_dependencies/
```
2. Install the **Intel® Graphics Compute Runtime for OpenCL™** driver components required to use the GPU plugin and write custom layers for Intel® Integrated Graphics. The drivers are not included in the package. To install them, make sure you have an internet connection and run the installation script:
```sh
sudo -E ./install_NEO_OCL_driver.sh
```
The script compares the driver version on the system to the current version. If the driver version on the system is higher than or equal to the current version, the script does not install a new driver. If the driver version is lower than the current version, the script uninstalls the lower version and installs the current one with your permission:
2. Install the **Intel® Graphics Compute Runtime for OpenCL™** driver components required to use the GPU plugin and write custom layers for Intel® Integrated Graphics. The drivers are not included in the package and must be installed separately.
> **NOTE**: To use the **Intel® Iris® Xe MAX Graphics**, see the [Intel® Iris® Xe MAX Graphics with Linux*](https://dgpu-docs.intel.com/devices/iris-xe-max-graphics/index.html) page for driver installation instructions.
To install the drivers, make sure you have an internet connection and run the installation script:
```sh
sudo -E ./install_NEO_OCL_driver.sh
```
The script compares the driver version on the system to the current version. If the driver version on the system is higher than or equal to the current version, the script does not install a new driver. If the driver version is lower than the current version, the script uninstalls the lower version and installs the current one with your permission:
![](../img/NEO_check_agreement.png)
Higher hardware versions require a higher driver version, namely 20.35 instead of 19.41. If the script fails to uninstall the driver, uninstall it manually. During the script execution, you may see the following command line output:
```sh
Add OpenCL user to video group
```
Ignore this suggestion and continue.<br>You can also find the most recent version of the driver, installation procedure and other information in the [https://github.com/intel/compute-runtime/](https://github.com/intel/compute-runtime/) repository.
Ignore this suggestion and continue.<br>You can also find the most recent version of the driver, installation procedure and other information on the [Intel® software for general purpose GPU capabilities](https://dgpu-docs.intel.com/index.html) site.
4. **Optional**: Install header files to allow compiling new code. You can find the header files at [Khronos OpenCL™ API Headers](https://github.com/KhronosGroup/OpenCL-Headers.git).
3. **Optional**: Install header files to allow compiling new code. You can find the header files at [Khronos OpenCL™ API Headers](https://github.com/KhronosGroup/OpenCL-Headers.git).
You've completed all required configuration steps to perform inference on processor graphics.
Proceed to the <a href="#get-started">Get Started</a> section to run code samples and demo applications.

@@ -4,12 +4,11 @@
> - The Intel® Distribution of OpenVINO™ is supported on macOS\* 10.15.x versions.
> - An internet connection is required to follow the steps in this guide. If you have access to the Internet through the proxy server only, please make sure that it is configured in your OS environment.
> **TIP**: If you want to [quick start with OpenVINO™ toolkit](@ref
> openvino_docs_get_started_get_started_dl_workbench), you can use
> the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench). DL Workbench is the OpenVINO™ toolkit UI
> that enables you to import a
> model, analyze its performance and accuracy, visualize the outputs, optimize and prepare the model for deployment
> on various Intel® platforms. Begin your OpenVINO™ journey with [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Install).
> **TIP**: You can quick start with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref
> openvino_docs_get_started_get_started_dl_workbench) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is an OpenVINO™ UI that enables you to
> import a model, analyze its performance and accuracy, visualize the outputs, optimize and prepare the model for
> deployment on various Intel® platforms.
## Introduction

@@ -2,14 +2,12 @@
> **NOTES**:
> - This guide applies to Microsoft Windows\* 10 64-bit. For Linux* OS information and instructions, see the [Installation Guide for Linux](installing-openvino-linux.md).
> - [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
> **TIP**: If you want to [quick start with OpenVINO™ toolkit](@ref
> openvino_docs_get_started_get_started_dl_workbench), you can use
> the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction). DL Workbench is the OpenVINO™ toolkit UI
> that enables you to import a
> model, analyze its performance and accuracy, visualize the outputs, optimize and prepare the model for deployment
> on various Intel® platforms. Begin your OpenVINO™ journey with [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Install).
> **TIP**: You can quick start with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref
> openvino_docs_get_started_get_started_dl_workbench) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is an OpenVINO™ UI that enables you to
> import a model, analyze its performance and accuracy, visualize the outputs, optimize and prepare the model for
> deployment on various Intel® platforms.
## Introduction
@@ -53,7 +51,7 @@ For more information, see the online [Intel® Distribution of OpenVINO™ toolk
The Intel® Distribution of OpenVINO™ toolkit for Windows\* 10 OS:
- Enables CNN-based deep learning inference on the edge
- Supports heterogeneous execution across Intel® CPU, Intel® Processor Graphics (GPU), Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
- Supports heterogeneous execution across Intel® CPU, Intel® GPU, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
- Speeds time-to-market through an easy-to-use library of computer vision functions and pre-optimized kernels
- Includes optimized calls for computer vision standards including OpenCV\* and OpenCL™
@@ -81,6 +79,7 @@ The following components are installed by default:
* Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
* Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
* Intel Pentium® processor N4200/5, N3350/5, or N3450/5 with Intel® HD Graphics
* Intel® Iris® Xe MAX Graphics
* Intel® Neural Compute Stick 2
* Intel® Vision Accelerator Design with Intel® Movidius™ VPUs

@@ -51,7 +51,7 @@ After the license is successfully validated, the OpenVINO™ Model Server loads
![Security Add-on Diagram](ovsa_diagram.png)
The binding between SWTPM (vTPM used in guest VM) and HW TPM (TPM on the host) is explained in [this document](https://github.com/openvinotoolkit/security_addon/blob/release_2021_3/docs/fingerprint-changes.md)
The binding between SWTPM (vTPM used in guest VM) and HW TPM (TPM on the host) is explained in [this document](https://github.com/openvinotoolkit/security_addon/blob/release_2021_4/docs/fingerprint-changes.md)
## About the Installation
The Model Developer, Independent Software Vendor, and User each must prepare one physical hardware machine and one Kernel-based Virtual Machine (KVM). In addition, each person must prepare a Guest Virtual Machine (Guest VM) for each role that person plays.
10. Install the [`tpm2-tools`](https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz).<br>
Installation information is at https://github.com/tpm2-software/tpm2-tools/blob/master/INSTALL.md
11. Install the [Docker packages](https://docs.docker.com/engine/install/ubuntu/).
**NOTE**: Regardless of whether you used the `install_host_deps.sh` script, complete step 12 to finish setting up the packages on the Host Machine.
12. If you are running behind a proxy, [set up a proxy for Docker](https://docs.docker.com/config/daemon/systemd/).
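For reference, the systemd drop-in that guide describes looks like the sketch below. The proxy host and port are placeholders for your site's values; on a real Host Machine the file belongs at `/etc/systemd/system/docker.service.d/http-proxy.conf`, followed by `sudo systemctl daemon-reload && sudo systemctl restart docker`. Here it is written to a scratch path purely for illustration.

```sh
# Sketch of a Docker systemd proxy drop-in. proxy.example.com:8080 is a
# placeholder; written under /tmp here instead of /etc for illustration.
mkdir -p /tmp/docker.service.d
cat > /tmp/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
```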
The following are installed and ready to use:
Download the [OpenVINO™ Security Add-on](https://github.com/openvinotoolkit/security_addon).
### Step 4: Set Up one Guest VM for the combined roles of Model Developer and Independent Software Vendor<a name="dev-isv-vm"></a>
For each separate role you play, you must prepare a virtual machine, called a Guest VM. Because in this release, the Model Developer and Independent Software Vendor roles are combined, these instructions guide you to set up one Guest VM, named `ovsa_isv`.
2. Build the OpenVINO™ Security Add-on:
```sh
make clean all
sudo -s make package
```
The following packages are created under the `release_files` directory:
- `ovsa-kvm-host.tar.gz`: Host Machine file
If you are using more than one Host Machine, repeat Step 3 on each.
### Step 4: Install the OpenVINO™ Security Add-on Model Developer / ISV Components
This step is for the combined role of Model Developer and Independent Software Vendor. References to the Guest VM are to `ovsa_isv_dev`.
1. Log on to the Guest VM as `<user>`.
2. Create the OpenVINO™ Security Add-on directory in the home directory:
```sh
mkdir -p ~/OVSA
```
3. Go to the Host Machine, outside of the Guest VM.
4. Copy `ovsa-developer.tar.gz` from `release_files` to the Guest VM:
scp ovsa-developer.tar.gz username@<isv-developer-vm-ip-address>:/<username-home-directory>/OVSA
```
5. Go to the Guest VM.
6. Create the `ovsa` user:
```sh
sudo useradd -m ovsa
sudo passwd ovsa
```
7. Install the software to the Guest VM:
```sh
cd ~/OVSA
tar xvfz ovsa-developer.tar.gz
cd ovsa-developer
sudo ./install.sh
```
8. Start the license server on a separate terminal as the `ovsa` user.
```sh
source /opt/ovsa/scripts/setupvars.sh
cd /opt/ovsa/bin
./license_server
```
**NOTE**: If you are behind a firewall, check and set your proxy settings to ensure the license server is able to validate the certificates.
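If such a proxy is needed, one way to set it for the license server's shell session is shown below; the proxy address is a placeholder for your site's values.

```sh
# Placeholder proxy endpoint; replace with your site's proxy before use.
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
export no_proxy="localhost,127.0.0.1"
```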
### Step 5: Install the OpenVINO™ Security Add-on Model Hosting Component
1. Log on to the Guest VM as `<user>`.
2. Create the OpenVINO™ Security Add-on directory in the home directory:
```sh
mkdir -p ~/OVSA
```
3. While on the Host Machine, copy `ovsa-model-hosting.tar.gz` from `release_files` to the Guest VM:
```sh
cd $OVSA_RELEASE_PATH
scp ovsa-model-hosting.tar.gz username@<runtime-vm-ip-address>:/<username-home-directory>/OVSA
```
4. Go to the Guest VM.
5. Create the `ovsa` user and add it to the `docker` group:
```sh
sudo useradd -m ovsa
sudo passwd ovsa
sudo usermod -aG docker ovsa
```
6. Install the software to the Guest VM:
```sh
cd ~/OVSA
tar xvfz ovsa-model-hosting.tar.gz
cd ovsa-model-hosting
sudo ./install.sh
```
## How to Use the OpenVINO™ Security Add-on
### Model Developer Instructions
The Model Developer creates a model, defines access control, and creates the user license. After the model is created, access control is enabled, and the license is ready, the Model Developer provides the license details to the Independent Software Vendor before sharing them with the Model User.
References to the Guest VM are to `ovsa_isv_dev`. Log on to the Guest VM as the `ovsa` user.
#### Step 1: Set up the artefacts directory
Create a directory named `artefacts`. This directory will hold the artefacts required to create licenses:
```sh
mkdir -p ~/OVSA/artefacts
cd ~/OVSA/artefacts
export OVSA_DEV_ARTEFACTS=$PWD
source /opt/ovsa/scripts/setupvars.sh
```
#### Step 2: Create a key store and add a certificate to it
1. Create files to request a certificate:
This example uses a self-signed certificate for demonstration purposes. In a production environment, use CSR files to request a CA-signed certificate.
```sh
cd $OVSA_DEV_ARTEFACTS
/opt/ovsa/bin/ovsatool keygen -storekey -t ECDSA -n Intel -k isv_keystore -r isv_keystore.csr -e "/C=IN/CN=localhost"
```
The following two files are created along with the keystore file:
- `isv_keystore.csr`- A Certificate Signing Request (CSR)
- `isv_keystore.csr.crt` - A self-signed certificate
/opt/ovsa/bin/ovsatool keygen -storecert -c isv_keystore.csr.crt -k isv_keystore
```
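To sanity-check certificate material like the files `ovsatool keygen` produces, standard `openssl` commands can print a certificate's subject and validity window. The sketch below generates its own throwaway ECDSA key and self-signed certificate with the same subject, purely to illustrate the inspection command; `demo.key` and `demo.crt` are illustrative names, not OVSA artefacts — in practice run the last command against `isv_keystore.csr.crt`.

```sh
# Throwaway ECDSA key + self-signed cert mirroring the keygen subject above.
openssl ecparam -name prime256v1 -genkey -noout -out demo.key
openssl req -new -x509 -key demo.key -subj "/C=IN/CN=localhost" -days 30 -out demo.crt
# Inspect the subject and validity window of the certificate.
openssl x509 -in demo.crt -noout -subject -dates
```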
#### Step 3: Create the model
This example uses `curl` to download the `face-detection-retail-0004` model from the OpenVINO Model Zoo. If you are behind a firewall, check and set your proxy settings.
Download a model from the Model Zoo:
```sh
curl --create-dirs https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.xml https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/face-detection-retail-0004.xml -o model/face-detection-retail-0004.bin
```
The model is downloaded to the `OVSA_DEV_ARTEFACTS/model` directory.
#### Step 4: Define access control for the model and create a master license for it
Define and enable the model access control and master license:
```sh
uuid=$(uuidgen)
/opt/ovsa/bin/ovsatool controlAccess -i model/face-detection-retail-0004.xml model/face-detection-retail-0004.bin -n "face detection" -d "face detection retail" -v 0004 -p face_detection_model.dat -m face_detection_model.masterlic -k isv_keystore -g $uuid
```
The Intermediate Representation files for the `face-detection-retail-0004` model are encrypted as `face_detection_model.dat` and a master license is generated as `face_detection_model.masterlic`.
#### Step 5: Create a Runtime Reference TCB
Use the runtime reference TCB to create a customer license for the access controlled model and the specific runtime.
Generate the reference TCB for the runtime:
```sh
cd $OVSA_DEV_ARTEFACTS
source /opt/ovsa/scripts/setupvars.sh
/opt/ovsa/bin/ovsaruntime gen-tcb-signature -n "Face Detect @ Runtime VM" -v "1.0" -f face_detect_runtime_vm.tcb -k isv_keystore
```
#### Step 6: Publish the access controlled Model and Runtime Reference TCB
The access controlled model is ready to be shared with the User and the reference TCB is ready to perform license checks.
#### Step 7: Receive a User Request
1. Obtain artefacts from the User who needs access to an access controlled model:
* Customer certificate from the customer's key store.
* Other information that applies to your licensing practices, such as the length of time the user needs access to the model
2. Create a customer license configuration
```sh
cd $OVSA_DEV_ARTEFACTS
/opt/ovsa/bin/ovsatool licgen -t TimeLimit -l30 -n "Time Limit License Config" -v 1.0 -u "<isv-developer-vm-ip-address>:<license_server-port>" /opt/ovsa/certs/server.crt -k isv_keystore -o 30daylicense.config
```
**NOTE**: The parameter `/opt/ovsa/certs/server.crt` contains the certificate used by the License Server. The server certificate will be added to the customer license and validated during use. Refer to [OpenVINO™ Security Add-on License Server Certificate Pinning](https://github.com/openvinotoolkit/security_addon/blob/release_2021_4/docs/ovsa_license_server_cert_pinning.md).
3. Create the customer license
```sh
cd $OVSA_DEV_ARTEFACTS
```
5. Provide these files to the User:
* `face_detection_model.dat`
* `face_detection_model.lic`
### Model User Instructions
References to the Guest VM are to `ovsa_runtime`. Log on to the Guest VM as the `ovsa` user.
#### Step 1: Set up the artefacts directory
1. Create a directory named `artefacts`. This directory will hold the artefacts required to create licenses:
```sh
mkdir -p ~/OVSA/artefacts
cd ~/OVSA/artefacts
export OVSA_RUNTIME_ARTEFACTS=$PWD
source /opt/ovsa/scripts/setupvars.sh
```
#### Step 2: Add a CA-Signed Certificate to a Key Store
1. Generate a Customer key store file:
```sh
cd $OVSA_RUNTIME_ARTEFACTS
/opt/ovsa/bin/ovsatool keygen -storekey -t ECDSA -n Intel -k custkeystore -r custkeystore.csr -e "/C=IN/CN=localhost"
```
The following two files are created along with the keystore file:
* `custkeystore.csr` - A Certificate Signing Request (CSR)
* `custkeystore.csr.crt` - A self-signed certificate
/opt/ovsa/bin/ovsatool keygen -storecert -c custkeystore.csr.crt -k custkeystore
```
#### Step 3: Request an access controlled Model from the Model Developer
This example uses `scp` to share data between the `ovsa_runtime` and `ovsa_isv_dev` Guest VMs on the same Host Machine.
1. Communicate your need for a model to the Model Developer. The Developer will ask you to provide the certificate from your key store and other information. This example uses the length of time the model needs to be available.
2. The model user's certificate needs to be provided to the Developer:
```sh
cd $OVSA_RUNTIME_ARTEFACTS
scp custkeystore.csr.crt username@<developer-vm-ip-address>:/<username-home-directory>/OVSA/artefacts
```
#### Step 4: Receive and load the access controlled model into the OpenVINO™ Model Server
1. Receive the model as files named:
* `face_detection_model.dat`
* `face_detection_model.lic`
```sh
cd $OVSA_RUNTIME_ARTEFACTS
scp username@<developer-vm-ip-address>:/<username-home-directory>/OVSA/artefacts/face_detection_model.dat .
scp username@<developer-vm-ip-address>:/<username-home-directory>/OVSA/artefacts/face_detection_model.lic .
```
2. Prepare the environment:
```sh
cd $OVSA_RUNTIME_ARTEFACTS/..
}
```
#### Step 5: Start the NGINX Model Server
The NGINX Model Server publishes the access controlled model.
```sh
./start_secure_ovsa_model_server.sh
```
For information about the NGINX interface, see https://github.com/openvinotoolkit/model_server/blob/main/extras/nginx-mtls-auth/README.md
#### Step 6: Prepare to run Inference
1. Log on to the Guest VM from another terminal.
```
3. Copy `face_detection.py` from the example client directory `/opt/ovsa/example_client`:
```sh
cd ~/OVSA/ovms
cp /opt/ovsa/example_client/* .
```
4. Download a sample image for inferencing. An `images` directory is created that includes the sample image.
curl --create-dirs https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/images/people/people1.jpeg -o images/people1.jpeg
```
#### Step 7: Run Inference
Run the `face_detection.py` script:
```sh
python3 face_detection.py --grpc_port 3335 --batch_size 1 --width 300 --height 300 --input_images_dir images --output_dir results --tls --server_cert /var/OVSA/Modelserver/server.pem --client_cert /var/OVSA/Modelserver/client.pem --client_key /var/OVSA/Modelserver/client.key --model_name controlled-access-model
```
## Summary