DOCS shift to rst -Sync Benchmark Samples (#16561)

Sebastian Golebiewski
2023-03-27 18:28:16 +02:00
committed by GitHub
parent 6e99b48ecc
commit 5c5a29d095
2 changed files with 194 additions and 122 deletions

# Sync Benchmark C++ Sample {#openvino_inference_engine_samples_sync_benchmark_README}
@sphinxdirective
This sample demonstrates how to estimate performance of a model using the Synchronous Inference Request API. It makes sense to use synchronous inference only in latency-oriented scenarios. Models with static input shapes are supported. Unlike :doc:`demos <omz_demos>`, this sample does not have other configurable command-line arguments. Feel free to modify the sample's source code to try out different options.
The following C++ API is used in the application:
+--------------------------+----------------------------------------------+----------------------------------------------+
| Feature | API | Description |
+==========================+==============================================+==============================================+
| OpenVINO Runtime Version | ``ov::get_openvino_version``                 | Get OpenVINO API version.                    |
+--------------------------+----------------------------------------------+----------------------------------------------+
| Basic Infer Flow | ``ov::Core``, ``ov::Core::compile_model``, | Common API to do inference: compile a model, |
| | ``ov::CompiledModel::create_infer_request``, | create an infer request, |
| | ``ov::InferRequest::get_tensor`` | configure input tensors. |
+--------------------------+----------------------------------------------+----------------------------------------------+
| Synchronous Infer        | ``ov::InferRequest::infer``                  | Do synchronous inference.                    |
+--------------------------+----------------------------------------------+----------------------------------------------+
| Model Operations | ``ov::CompiledModel::inputs`` | Get inputs of a model. |
+--------------------------+----------------------------------------------+----------------------------------------------+
| Tensor Operations | ``ov::Tensor::get_shape``, | Get a tensor shape and its data. |
| | ``ov::Tensor::data`` | |
+--------------------------+----------------------------------------------+----------------------------------------------+
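For orientation, the API calls listed above can be combined into a minimal synchronous-inference sketch. This is not the sample's actual source: the ``model.xml`` path and ``CPU`` device are placeholder choices, and building it requires linking against the OpenVINO Runtime.

```cpp
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    // Report the OpenVINO Runtime version.
    std::cout << ov::get_openvino_version() << std::endl;

    // Compile a model for a device and create an infer request.
    ov::Core core;
    ov::CompiledModel compiled_model = core.compile_model("model.xml", "CPU");
    ov::InferRequest infer_request = compiled_model.create_infer_request();

    // Inspect the model inputs; each tensor exposes its shape and raw data.
    for (const auto& input : compiled_model.inputs()) {
        ov::Tensor tensor = infer_request.get_tensor(input);
        std::cout << "Input shape: " << tensor.get_shape() << std::endl;
        // tensor.data<float>() would be used to fill the tensor with values.
    }

    // Run one synchronous inference.
    infer_request.infer();
    return 0;
}
```
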
+--------------------------------+------------------------------------------------------------------------------------------------+
| Options | Values |
+================================+================================================================================================+
| Validated Models | :doc:`alexnet <omz_models_model_alexnet>`, |
| | :doc:`googlenet-v1 <omz_models_model_googlenet_v1>`, |
| | :doc:`yolo-v3-tf <omz_models_model_yolo_v3_tf>`, |
| | :doc:`face-detection-0200 <omz_models_model_face_detection_0200>` |
+--------------------------------+------------------------------------------------------------------------------------------------+
| Model Format | OpenVINO™ toolkit Intermediate Representation |
| | (\*.xml + \*.bin), ONNX (\*.onnx) |
+--------------------------------+------------------------------------------------------------------------------------------------+
| Supported devices | :doc:`All <openvino_docs_OV_UG_supported_plugins_Supported_Devices>` |
+--------------------------------+------------------------------------------------------------------------------------------------+
| Other language realization | :doc:`Python <openvino_inference_engine_ie_bridges_python_sample_sync_benchmark_README>` |
+--------------------------------+------------------------------------------------------------------------------------------------+
How It Works
####################
The sample compiles a model for a given device, randomly generates input data, and performs synchronous inference multiple times for a given number of seconds. It then processes and reports performance results.
You can see the explicit description of each sample step at the :doc:`Integration Steps <openvino_docs_OV_UG_Integrate_OV_with_your_application>` section of the "Integrate OpenVINO™ Runtime with Your Application" guide.
Building
####################
To build the sample, use the instructions available at the :doc:`Build the Sample Applications <openvino_docs_OV_UG_Samples_Overview>` section in the OpenVINO™ Toolkit Samples guide.
Running
####################
.. code-block:: sh

   sync_benchmark <path_to_model>
To run the sample, you need to specify a model:
- You can use :doc:`public <omz_models_group_public>` or :doc:`Intel's <omz_models_group_intel>` pre-trained models from the Open Model Zoo. The models can be downloaded using the :doc:`Model Downloader <omz_tools_downloader>`.
.. note::

   Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using the :doc:`Model Optimizer tool <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.

   The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
Example
++++++++++++++++++++
1. Install the ``openvino-dev`` Python package to use Open Model Zoo Tools:

   .. code-block:: sh

      python -m pip install openvino-dev[caffe]
2. Download a pre-trained model using:

   .. code-block:: sh

      omz_downloader --name googlenet-v1
3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

   .. code-block:: sh

      omz_converter --name googlenet-v1

4. Perform benchmarking using the ``googlenet-v1`` model on a ``CPU``:

   .. code-block:: sh

      sync_benchmark googlenet-v1.xml
Sample Output
####################
The application outputs performance results.
.. code-block:: sh

   [ INFO ] OpenVINO:
   [ INFO ] Build ................................. <version>
   [ INFO ] Count: 992 iterations
   [ INFO ] Duration: 15009.8 ms
   [ INFO ] Latency:
   [ INFO ] Median: 14.00 ms
   [ INFO ] Average: 15.13 ms
   [ INFO ] Min: 9.33 ms
   [ INFO ] Max: 53.60 ms
   [ INFO ] Throughput: 66.09 FPS
See Also
####################
* :doc:`Integrate the OpenVINO™ Runtime with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`
* :doc:`Using OpenVINO Samples <openvino_docs_OV_UG_Samples_Overview>`
* :doc:`Model Downloader <omz_tools_downloader>`
* :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`
@endsphinxdirective