Docs minor release adjustments (#14745)
Update benchmark-data.csv, update external files, update articles (FAQ included), hide the OVMS benchmark page.
This commit is contained in:
parent
d7c3e0acaf
commit
55879bdbee
BIN docs/_static/benchmarks_files/OV-2022.3-Performance-Data.xlsx (vendored, new file; binary file not shown)
BIN docs/_static/benchmarks_files/OV-2022.3-system-info-detailed.xlsx (vendored, new file; binary file not shown)
docs/_static/benchmarks_files/benchmark-data.csv (vendored, 1209 lines changed; file diff suppressed because it is too large)
BIN docs/_static/benchmarks_files/platform_list_22.2.pdf (vendored; binary file not shown)
BIN docs/_static/benchmarks_files/platform_list_22.3.pdf (vendored, new file; binary file not shown)
@@ -7,7 +7,6 @@

:hidden:

openvino_docs_performance_benchmarks_openvino
openvino_docs_performance_benchmarks_ovms
openvino_docs_MO_DG_Getting_Performance_Numbers
@@ -15,12 +14,12 @@

The [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html) helps accelerate deep learning inference across a variety of Intel® processors and accelerators.

The benchmark results below demonstrate high performance gains on several public neural networks on multiple Intel® CPUs, GPUs and VPUs covering a broad performance range. The results may be helpful when deciding which hardware is best for your applications and solutions or to plan AI workload on the Intel computing already included in your solutions.
The benchmark results below demonstrate high performance gains on several public neural networks on multiple Intel® CPUs, GPUs and VPUs covering a broad performance range. The results may be helpful when deciding which hardware is best for your applications or to plan AI workload on the Intel computing already included in your solutions.

Benchmarks are available for:

* [Intel® Distribution of OpenVINO™ toolkit](performance_benchmarks_openvino.md).
* [OpenVINO™ Model Server](performance_benchmarks_ovms.md).

You can also test performance for your system yourself, following the guide on [getting performance numbers](../MO_DG/prepare_model/Getting_performance_numbers.md).
@@ -10,7 +10,7 @@

.. dropdown:: Where can I find the models used in the performance benchmarks?

All models used are included in the GitHub repository of `Open Model Zoo <https://github.com/openvinotoolkit/open_model_zoo>`_.
All models used are included in the GitHub repository of :doc:`Open Model Zoo <model_zoo>`.

.. dropdown:: Will there be any new models added to the list used for benchmarking?
@@ -23,11 +23,11 @@

All of the performance benchmarks are generated using the
open-source tool within the Intel® Distribution of OpenVINO™ toolkit
called `benchmark_app`. This tool is available
`for C++ apps <http://openvino-doc.iotg.sclab.intel.com/2022.3/openvino_inference_engine_samples_benchmark_app_README.html>`_
:doc:`for C++ apps <openvino_inference_engine_samples_benchmark_app_README>`
as well as
`for Python apps <http://openvino-doc.iotg.sclab.intel.com/2022.3/openvino_inference_engine_tools_benchmark_tool_README.html>`_.
:doc:`for Python apps <openvino_inference_engine_tools_benchmark_tool_README>`.

For a simple instruction on testing performance, see the `Getting Performance Numbers Guide <http://openvino-doc.iotg.sclab.intel.com/2022.3/openvino_docs_MO_DG_Getting_Performance_Numbers.html>`_.
For a simple instruction on testing performance, see the :doc:`Getting Performance Numbers Guide <openvino_docs_MO_DG_Getting_Performance_Numbers>`.
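
For illustration only, a minimal sketch of driving the Python ``benchmark_app`` from a script; the model path is a placeholder, and ``-m``, ``-d`` and ``-t`` select the model, the target device and the run duration in seconds:

.. code-block:: python

   # Minimal sketch (model path is a placeholder): run benchmark_app for 10 seconds on CPU.
   # Assumes the OpenVINO developer tools, which provide benchmark_app, are installed.
   import subprocess

   subprocess.run(
       ["benchmark_app", "-m", "model.xml", "-d", "CPU", "-t", "10"],
       check=True,
   )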
.. dropdown:: What image sizes are used for the classification network models?
@@ -42,63 +42,63 @@

- Public Network
- Task
- Input Size
* - :ref:`bert-base-cased<https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1>`
* - `bert-base-cased<https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1>`_
- BERT
- question / answer
- 124
* - :ref:`bert-large-uncased-whole-word-masking-squad-int8-0001<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/bert-large-uncased-whole-word-masking-squad-int8-0001>`
* - `bert-large-uncased-whole-word-masking-squad-int8-0001<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/bert-large-uncased-whole-word-masking-squad-int8-0001>`_
- BERT-large
- question / answer
- 384
* - :ref:`deeplabv3-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/deeplabv3>`
* - `deeplabv3-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/deeplabv3>`_
- DeepLab v3 Tf
- semantic segmentation
- 513x513
* - :ref:`densenet-121-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/densenet-121-tf>`
* - `densenet-121-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/densenet-121-tf>`_
- Densenet-121 Tf
- classification
- 224x224
* - :ref:`efficientdet-d0<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/efficientdet-d0-tf>`
* - `efficientdet-d0<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/efficientdet-d0-tf>`_
- Efficientdet
- classification
- 512x512
* - :ref:`faster_rcnn_resnet50_coco-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/faster_rcnn_resnet50_coco>`
* - `faster_rcnn_resnet50_coco-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/faster_rcnn_resnet50_coco>`_
- Faster RCNN Tf
- object detection
- 600x1024
* - :ref:`inception-v4-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/googlenet-v4-tf>`
* - `inception-v4-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/googlenet-v4-tf>`_
- Inception v4 Tf (aka GoogleNet-V4)
- classification
- 299x299
* - :ref:`mobilenet-ssd-CF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-ssd>`
* - `mobilenet-ssd-CF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-ssd>`_
- SSD (MobileNet)_COCO-2017_Caffe
- object detection
- 300x300
* - :ref:`mobilenet-v2-pytorch<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-pytorch>`
* - `mobilenet-v2-pytorch<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-pytorch>`_
- Mobilenet V2 PyTorch
- classification
- 224x224
* - :ref:`resnet-18-pytorch<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-18-pytorch>`
* - `resnet-18-pytorch<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-18-pytorch>`_
- ResNet-18 PyTorch
- classification
- 224x224
* - :ref:`resnet-50-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`
* - `resnet-50-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`_
- ResNet-50_v1_ILSVRC-2012
- classification
- 224x224
* - :ref:`ssd-resnet34-1200-onnx <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssd-resnet34-1200-onnx>`
* - `ssd-resnet34-1200-onnx <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssd-resnet34-1200-onnx>`_
- ssd-resnet34 onnx model
- object detection
- 1200x1200
* - :ref:`unet-camvid-onnx-0001<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/unet-camvid-onnx-0001>`
* - `unet-camvid-onnx-0001<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/unet-camvid-onnx-0001>`_
- U-Net
- semantic segmentation
- 368x480
* - :ref:`yolo-v3-tiny-tf<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tiny-tf>`
* - `yolo-v3-tiny-tf<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tiny-tf>`_
- YOLO v3 Tiny
- object detection
- 416x416
* - :ref:`yolo_v4-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v4-tf>`
* - `yolo_v4-TF<https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v4-tf>`_
- Yolo-V4 TF
- object detection
- 608x608
@@ -108,16 +108,15 @@

Intel partners with vendors all over the world. For a list of Hardware Manufacturers, see the
`Intel® AI: In Production Partners & Solutions Catalog <https://www.intel.com/content/www/us/en/internet-of-things/ai-in-production/partners-solutions-catalog.html>`_.
For more details, see the [Supported Devices](../OV_Runtime_UG/supported_plugins/Supported_Devices.md)
For more details, see the :doc:`Supported Devices <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`
documentation. Before purchasing any hardware, you can test and run
models remotely, using `Intel® DevCloud for the Edge <http://devcloud.intel.com/edge/>`_.

.. dropdown:: How can I optimize my models for better performance or accuracy?

A set of guidelines and recommendations for optimizing models is available in the
[optimization guide](../optimization_guide/dldt_deployment_optimization_guide.md).
Join the conversation in the `Community Forum <https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit>`_
for further support.
:doc:`optimization guide <openvino_docs_deployment_optimization_guide_dldt_optimization_guide>`.
Join the conversation in the `Community Forum <https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit>`_ for further support.
@@ -130,7 +129,7 @@

hardware. For comparison on boost factors for different network models
and a selection of Intel® CPU architectures, including AVX-2 with Intel®
Core™ i7-8700T, and AVX-512 (VNNI) with Intel® Xeon® 5218T and Intel®
Xeon® 8270, refer to the [Model Accuracy for INT8 and FP32 Precision](performance_int8_vs_fp32.md) article.
Xeon® 8270, refer to the :doc:`Model Accuracy for INT8 and FP32 Precision <openvino_docs_performance_int8_vs_fp32>` article.

.. dropdown:: Where can I search for OpenVINO™ performance results based on HW-platforms?
@@ -7,12 +7,12 @@

openvino_docs_performance_benchmarks_faq
openvino_docs_performance_int8_vs_fp32
Performance Data Spreadsheet (download xlsx) <https://docs.openvino.ai/2022.2/_static/benchmarks_files/OV-2022.2-Performance-Data.xlsx>
Performance Data Spreadsheet (download xlsx) <https://docs.openvino.ai/2022.2/_static/benchmarks_files/OV-2022.3-Performance-Data.xlsx>

@endsphinxdirective

Click the "Benchmark Graphs" button to see the OpenVINO(R) benchmark graphs. Select the models, the hardware platforms (CPU SKUs),
Click the "Benchmark Graphs" button to see the OpenVINO™ benchmark graphs. Select the models, the hardware platforms (CPU SKUs),
precision and performance index from the lists and click the “Build Graphs” button.

@sphinxdirective
@@ -68,26 +68,25 @@ Below are four parameters for measurements, which are key elements to consider f

<p>For a listing of all platforms and configurations used for testing, refer to the following:</p>
<container class="platform-configurations">
<div>
<a href="https://docs.openvino.ai/latest/_downloads/33ee2a13abf3ae3058381800409edc4a/platform_list_22.2.pdf" target="_blank" class="pdf"><img src="_static/css/media/pdf-icon.svg"/>Hardware Platforms (PDF)</a>
<a href="https://docs.openvino.ai/nightly/_static/benchmarks_files/platform_list_22.3.pdf" target="_blank" class="pdf"><img src="_static/css/media/pdf-icon.svg"/>Hardware Platforms (PDF)</a>
</div>
<div>
<a href="https://docs.openvino.ai/latest/_downloads/fdd5a86ab44d348b13bf5be23d8c0dde/OV-2022.2-system-info-detailed.xlsx" class="xls"><img src="_static/css/media/xls-icon.svg"/>Configuration Details (XLSX)</a>
<a href="https://docs.openvino.ai/nightly/_static/benchmarks_files/OV-2022.3-system-info-detailed.xlsx" class="xls"><img src="_static/css/media/xls-icon.svg"/>Configuration Details (XLSX)</a>
</div>
</container>

@endsphinxdirective

This benchmark setup includes a single machine on which both the benchmark application and the OpenVINO™ installation reside. The presented performance benchmark numbers are based on the release 2022.2 of the Intel® Distribution of OpenVINO™ toolkit.
This benchmark setup includes a single machine on which both the benchmark application and the OpenVINO™ installation reside. The presented performance benchmark numbers are based on the release 2022.3 of the Intel® Distribution of OpenVINO™ toolkit.
The benchmark application loads the OpenVINO™ Runtime and executes inferences on the specified hardware (CPU, GPU or VPU).
It measures the time spent on actual inferencing (excluding any pre- or post-processing) and then reports on the inferences per second (or Frames Per Second).
For additional information on the benchmark application, refer to entry 5 of the ``FAQ section`` ADD LINK.
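
For a rough illustration of what is being measured, a minimal sketch using the OpenVINO™ Runtime Python API is shown below; the model path and input shape are placeholders, and this is not the benchmark_app implementation itself:

```python
# Minimal sketch of measuring raw inference throughput (FPS), excluding pre/post processing.
# "model.xml" and the input shape are placeholders; benchmark_app remains the supported tool.
import time
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")
request = compiled.create_infer_request()

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
num_iterations = 100

start = time.perf_counter()
for _ in range(num_iterations):
    request.infer([dummy_input])  # synchronous inference only, no pre/post processing
elapsed = time.perf_counter() - start

print(f"Throughput: {num_iterations / elapsed:.2f} FPS")
```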
## Disclaimers

Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2022.2.
Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2022.3.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of March 17, 2022 and may not reflect all publicly available updates. See configuration disclosure for details. No product can be absolutely secure.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of December 13, 2022 and may not reflect all publicly available updates. See configuration disclosure for details. No product can be absolutely secure.

Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).
@@ -1,3 +1,6 @@

@sphinxdirective
:orphan:
@endsphinxdirective

# OpenVINO™ Model Server Benchmark Results {#openvino_docs_performance_benchmarks_ovms}

OpenVINO™ Model Server is an open-source, production-grade inference platform that exposes a set of models via a convenient inference API over gRPC or HTTP/REST. It employs the OpenVINO™ Runtime libraries from the Intel® Distribution of OpenVINO™ toolkit to extend workloads across Intel® hardware including CPU, GPU and others.
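
As a hedged illustration of that API (the server address, port and model name below are placeholders, and a server is assumed to be already running with a compatible model), a request against the TensorFlow-Serving-style REST endpoint might look like this:

```python
# Minimal sketch (placeholders throughout): query a running OpenVINO Model Server
# instance through its TensorFlow-Serving-compatible REST API.
import numpy as np
import requests

payload = {"instances": np.random.rand(1, 3, 224, 224).tolist()}  # assumed input shape
response = requests.post(
    "http://localhost:9001/v1/models/my_model:predict",  # placeholder address and model name
    json=payload,
)
print(response.json())
```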
@@ -1,129 +1,98 @@

# Model Accuracy and Performance for INT8 and FP32 {#openvino_docs_performance_int8_vs_fp32}

The following table presents the absolute accuracy drop calculated as the accuracy difference between FP32 and INT8 representations of a model:
The following table presents the absolute accuracy drop calculated as the accuracy difference between FP32 and INT8 representations of a model on two platforms:

* A - Intel® Core™ i9-9000K (AVX2)
* B - Intel® Xeon® 6338 (VNNI)
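
Each entry in the table can therefore be read as follows (a sketch of the calculation, where "metric" stands for the measure named in the third column):

    accuracy_drop = | metric(FP32 model) − metric(INT8 model) |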

@sphinxdirective

.. raw:: html
.. list-table:: Model Accuracy
:header-rows: 1

<table class="table" id="model-accuracy-and-perf-int8-fp32-table">
<tr align="left">
<th></th>
<th></th>
<th></th>
<th class="light-header">Intel® Core™ i9-12900K @ 3.2 GHz (AVX2)</th>
<th class="light-header">Intel® Xeon® 6338 @ 2.0 GHz (VNNI)</th>
<th class="light-header">iGPU Gen12LP (Intel® Core™ i9-12900K @ 3.2 GHz)</th>
</tr>
<tr align="left" class="header">
<th>OpenVINO Benchmark <br>Model Name</th>
<th>Dataset</th>
<th>Metric Name</th>
<th colspan="3" align="center">Absolute Accuracy Drop, %</th>
</tr>
<tr>
<td>bert-base-cased</td>
<td>SST-2</td>
<td>accuracy</td>
<td class="data">0.11</td>
<td class="data">0.34</td>
<td class="data">0.46</td>
</tr>
<tr>
<td>bert-large-uncased-whole-word-masking-squad-0001</td>
<td>SQUAD</td>
<td>F1</td>
<td class="data">0.87</td>
<td class="data">1.11</td>
<td class="data">0.70</td>
</tr>
<tr>
<td>deeplabv3</td>
<td>VOC2012</td>
<td>mean_iou</td>
<td class="data">0.04</td>
<td class="data">0.04</td>
<td class="data">0.11</td>
</tr>
<tr>
<td>densenet-121</td>
<td>ImageNet</td>
<td>accuracy@top1</td>
<td class="data">0.56</td>
<td class="data">0.56</td>
<td class="data">0.63</td>
</tr>
<tr>
<td>efficientdet-d0</td>
<td>COCO2017</td>
<td>coco_precision</td>
<td class="data">0.63</td>
<td class="data">0.62</td>
<td class="data">0.45</td>
</tr>
<tr>
<td>faster_rcnn_<br>resnet50_coco</td>
<td>COCO2017</td>
<td>coco_<br>precision</td>
<td class="data">0.52</td>
<td class="data">0.55</td>
<td class="data">0.31</td>
</tr>
<tr>
<td>resnet-18</td>
<td>ImageNet</td>
<td>acc@top-1</td>
<td class="data">0.16</td>
<td class="data">0.16</td>
<td class="data">0.16</td>
</tr>
<tr>
<td>resnet-50</td>
<td>ImageNet</td>
<td>acc@top-1</td>
<td class="data">0.09</td>
<td class="data">0.09</td>
<td class="data">0.09</td>
</tr>
<tr>
<td>resnet-50-pytorch</td>
<td>ImageNet</td>
<td>acc@top-1</td>
<td class="data">0.13</td>
<td class="data">0.13</td>
<td class="data">0.11</td>
</tr>
<tr>
<td>ssd-resnet34-1200</td>
<td>COCO2017</td>
<td>COCO mAp</td>
<td class="data">0.09</td>
<td class="data">0.09</td>
<td class="data">0.13</td>
</tr>
<tr>
<td>unet-camvid-onnx-0001</td>
<td>CamVid</td>
<td>mean_iou@mean</td>
<td class="data">0.56</td>
<td class="data">0.56</td>
<td class="data">0.60</td>
</tr>
<tr>
<td>yolo-v3-tiny</td>
<td>COCO2017</td>
<td>COCO mAp</td>
<td class="data">0.12</td>
<td class="data">0.12</td>
<td class="data">0.17</td>
</tr>
<tr>
<td>yolo_v4</td>
<td>COCO2017</td>
<td>COCO mAp</td>
<td class="data">0.52</td>
<td class="data">0.52</td>
<td class="data">0.54</td>
</tr>
</table>
* - OpenVINO™ Model name
- dataset
- Metric Name
- A
- B
* - bert-base-cased
- SST-2_bert_cased_padded
- accuracy
- 0.11%
- 1.15%
* - bert-large-uncased-whole-word-masking-squad-0001
- SQUAD_v1_1_bert_msl384_mql64_ds128_lowercase
- F1
- 0.51%
-
* - deeplabv3
- VOC2012_segm
- mean_iou
- 0.44%
- 0.06%
* - densenet-121
- ImageNet2012
- accuracy@top1
- 0.31%
- 0.32%
* - efficientdet-d0
- COCO2017_detection_91cl
- coco_precision
- 0.88%
- 0.62%
* - faster_rcnn_resnet50_coco
- COCO2017_detection_91cl_bkgr
- coco_precision
- 0.19%
- 0.19%
* - googlenet-v4
- ImageNet2012_bkgr
- accuracy@top1
- 0.07%
- 0.09%
* - mobilenet-ssd
- VOC2007_detection
- map
- 0.47%
- 0.14%
* - mobilenet-v2
- ImageNet2012
- accuracy@top1
- 0.50%
- 0.18%
* - resnet-18
- ImageNet2012
- accuracy@top1
- 0.27%
- 0.24%
* - resnet-50
- ImageNet2012
- accuracy@top1
- 0.13%
- 0.12%
* - ssd-resnet34-1200
- COCO2017_detection_80cl_bkgr
- map
- 0.08%
- 0.09%
* - unet-camvid-onnx-0001
- CamVid_12cl
- mean_iou@mean
- 0.33%
- 0.33%
* - yolo_v3_tiny
- COCO2017_detection_80cl
- map
- 0.01%
- 0.07%
* - yolo_v4
- COCO2017_detection_80cl
- map
- 0.05%
- 0.06%

@endsphinxdirective