# Throughput Benchmark Python* Sample
This sample demonstrates how to estimate performance of a model using the Asynchronous Inference Request API in throughput mode. Unlike [demos](@ref omz_demos), this sample does not have other configurable command-line arguments. Feel free to modify the sample's source code to try out different options.
The reported results may deviate from what `benchmark_app` reports. One example is model input precision for computer vision tasks: `benchmark_app` sets `uint8`, while the sample uses the default model precision, which is usually `float32`.
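To check which precision a model's inputs actually use, you can inspect them through the Python API; a minimal sketch (the model path is a placeholder):

```python
from openvino.runtime import Core

core = Core()
model = core.read_model('model.xml')  # placeholder path
for model_input in model.inputs:
    # Prints the element type (e.g. <Type: 'float32'>) and the input shape
    print(model_input.get_element_type(), model_input.get_partial_shape())
```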
The following Python* API is used in the application:
| Feature | API | Description |
|---|---|---|
| OpenVINO Runtime Version | [openvino.runtime.get_version] | Get OpenVINO API version |
| Basic Infer Flow | [openvino.runtime.Core], [openvino.runtime.Core.compile_model], [openvino.runtime.InferRequest.get_tensor] | Common API to do inference: compile a model, configure input tensors |
| Asynchronous Infer | [openvino.runtime.AsyncInferQueue], [openvino.runtime.AsyncInferQueue.start_async], [openvino.runtime.AsyncInferQueue.wait_all], [openvino.runtime.InferRequest.results] | Do asynchronous inference |
| Model Operations | [openvino.runtime.CompiledModel.inputs] | Get inputs of a model |
| Tensor Operations | [openvino.runtime.Tensor.get_shape], [openvino.runtime.Tensor.data] | Get a tensor shape and its data. |
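For illustration, the tensor operations above can be used to fill every model input with random data before inference; a minimal sketch (the model path and device are placeholders, and static input shapes are assumed):

```python
import numpy as np
from openvino.runtime import Core

core = Core()
compiled_model = core.compile_model('model.xml', 'CPU')  # placeholder path and device
infer_request = compiled_model.create_infer_request()
for model_input in compiled_model.inputs:
    tensor = infer_request.get_tensor(model_input)
    # Fill the tensor in place, keeping its original shape and dtype
    tensor.data[:] = np.random.rand(*tensor.get_shape()).astype(tensor.data.dtype)
```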
| Options | Values |
|---|---|
| Validated Models | [alexnet](@ref omz_models_model_alexnet), [googlenet-v1](@ref omz_models_model_googlenet_v1), [yolo-v3-tf](@ref omz_models_model_yolo_v3_tf), [face-detection-0200](@ref omz_models_model_face_detection_0200) |
| Model Format | OpenVINO™ toolkit Intermediate Representation (*.xml + *.bin), ONNX (*.onnx) |
| Supported devices | All |
| Other language realization | C++ |
## How It Works
The sample compiles a model for a given device, randomly generates input data, and performs asynchronous inference repeatedly for a set number of seconds. It then processes and reports performance results.
You can find an explicit description of each sample step in the Integration Steps section of the "Integrate OpenVINO™ Runtime with Your Application" guide.
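The main loop can be sketched roughly as follows (the model path, device, and run duration are illustrative assumptions, not the sample's exact code):

```python
from time import perf_counter
from openvino.runtime import AsyncInferQueue, Core

core = Core()
# The THROUGHPUT hint lets the runtime pick the number of streams and requests
compiled_model = core.compile_model('model.xml', 'CPU', {'PERFORMANCE_HINT': 'THROUGHPUT'})
latencies = []

def callback(request, start_time):
    # Invoked when an asynchronous infer request completes
    latencies.append(perf_counter() - start_time)

infer_queue = AsyncInferQueue(compiled_model)  # queue size chosen by the runtime
infer_queue.set_callback(callback)

deadline = perf_counter() + 10  # assumed number of seconds to run
while perf_counter() < deadline:
    # Blocks until a free infer request is available, then starts it
    infer_queue.start_async(userdata=perf_counter())
infer_queue.wait_all()
print(f'Count: {len(latencies)} iterations')
```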
## Running
    python throughput_benchmark.py <path_to_model>
To run the sample, you need to specify a model:
- You can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
> **NOTES**:
>
> - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (*.xml + *.bin) using the Model Optimizer tool.
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
## Example
- Install the `openvino-dev` Python package to use Open Model Zoo Tools:

  `python -m pip install openvino-dev[caffe]`
- Download a pre-trained model using:

  `omz_downloader --name googlenet-v1`
- If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

  `omz_converter --name googlenet-v1`
- Perform benchmarking using the `googlenet-v1` model on a `CPU`:

  `python throughput_benchmark.py googlenet-v1.xml`
## Sample Output
The application outputs performance results.
    [ INFO ] OpenVINO:
    [ INFO ] Build ................................. <version>
    [ INFO ] Count:      2817 iterations
    [ INFO ] Duration:   10012.65 ms
    [ INFO ] Latency:
    [ INFO ]     Median:  13.80 ms
    [ INFO ]     Average: 14.10 ms
    [ INFO ]     Min:     8.35 ms
    [ INFO ]     Max:     28.38 ms
    [ INFO ] Throughput: 281.34 FPS
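The statistics above can be derived from the per-request latencies collected during the run; a minimal sketch of the reporting step (function and variable names are assumptions):

```python
import statistics

def report(latencies, duration_sec):
    """latencies: per-request durations in seconds; duration_sec: total wall-clock time."""
    print(f'Count:      {len(latencies)} iterations')
    print(f'Duration:   {duration_sec * 1e3:.2f} ms')
    print('Latency:')
    print(f'    Median:  {statistics.median(latencies) * 1e3:.2f} ms')
    print(f'    Average: {statistics.fmean(latencies) * 1e3:.2f} ms')
    print(f'    Min:     {min(latencies) * 1e3:.2f} ms')
    print(f'    Max:     {max(latencies) * 1e3:.2f} ms')
    print(f'Throughput: {len(latencies) / duration_sec:.2f} FPS')
```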
## See Also
- Integrate the OpenVINO™ Runtime with Your Application
- Using OpenVINO™ Toolkit Samples
- [Model Downloader](@ref omz_tools_downloader)
- Model Optimizer