# Sync Benchmark Python* Sample
This sample demonstrates how to estimate the performance of a model using the Synchronous Inference Request API. It makes sense to use synchronous inference only in latency-oriented scenarios. Models with static input shapes are supported. Unlike [demos](@ref omz_demos), this sample does not have other configurable command-line arguments. Feel free to modify the sample's source code to try out different options.
The following Python* API is used in the application:
| Feature | API | Description |
|---|---|---|
| OpenVINO Runtime Version | [openvino.runtime.get_version] | Get OpenVINO API version |
| Basic Infer Flow | [openvino.runtime.Core], [openvino.runtime.Core.compile_model], [openvino.runtime.InferRequest.get_tensor] | Common API to do inference: compile a model, configure input tensors |
| Synchronous Infer | [openvino.runtime.InferRequest.infer] | Do synchronous inference |
| Model Operations | [openvino.runtime.CompiledModel.inputs] | Get inputs of a model |
| Tensor Operations | [openvino.runtime.Tensor.get_shape], [openvino.runtime.Tensor.data] | Get a tensor shape and its data. |

| Options | Values |
|---|---|
| Validated Models | [alexnet](@ref omz_models_model_alexnet), [googlenet-v1](@ref omz_models_model_googlenet_v1), [yolo-v3-tf](@ref omz_models_model_yolo_v3_tf), [face-detection-0200](@ref omz_models_model_face_detection_0200) |
| Model Format | OpenVINO™ toolkit Intermediate Representation (*.xml + *.bin), ONNX (*.onnx) |
| Supported devices | All |
| Other language realization | C++ |
## How It Works
The sample compiles a model for a given device, randomly generates input data, and repeatedly performs synchronous inference for a given number of seconds. It then processes and reports performance results.
You can find a detailed description of each sample step in the Integration Steps section of the "Integrate OpenVINO™ Runtime with Your Application" guide.
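The benchmarking loop described above can be sketched in plain Python. This is an illustrative stand-alone sketch, not the sample's actual code: `run_sync_benchmark` is a hypothetical helper, and a dummy callable stands in for the real `InferRequest.infer()` call, so only the timing and statistics logic is shown.

```python
import statistics
import time


def run_sync_benchmark(infer, seconds=1.0):
    """Repeatedly call the synchronous infer() callable for the given
    number of seconds and collect per-call latencies in milliseconds."""
    latencies = []
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        t0 = time.perf_counter()
        infer()  # one synchronous inference
        latencies.append((time.perf_counter() - t0) * 1000)
    duration_ms = (time.perf_counter() - start) * 1000
    return {
        "count": len(latencies),
        "duration_ms": duration_ms,
        "median_ms": statistics.median(latencies),
        "average_ms": statistics.mean(latencies),
        "min_ms": min(latencies),
        "max_ms": max(latencies),
        "fps": len(latencies) / duration_ms * 1000,
    }


if __name__ == "__main__":
    # A dummy "model" sleeping ~1 ms per call stands in for InferRequest.infer()
    stats = run_sync_benchmark(lambda: time.sleep(0.001), seconds=0.5)
    print(f"Count: {stats['count']} iterations, "
          f"Median: {stats['median_ms']:.2f} ms, "
          f"Throughput: {stats['fps']:.2f} FPS")
```

In the real sample, the callable is a compiled model's synchronous `infer()` and the inputs are randomly generated tensors; the latency statistics and FPS computation are the same idea.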
## Running

    python sync_benchmark.py <path_to_model>
To run the sample, you need to specify a model:
- You can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
> **NOTES**:
>
> - Before running the sample with a trained model, make sure the model is converted to the Intermediate Representation (IR) format (*.xml + *.bin) using the Model Optimizer tool.
> - The sample accepts models in ONNX format (*.onnx) that do not require preprocessing.
## Example
- Install the `openvino-dev` Python package to use Open Model Zoo Tools:

      python -m pip install openvino-dev[caffe]
- Download a pre-trained model using:

      omz_downloader --name googlenet-v1
- If a model is not in the IR or ONNX format, it must be converted. You can do this using the Model Converter:

      omz_converter --name googlenet-v1
- Perform benchmarking using the `googlenet-v1` model on a `CPU`:

      python sync_benchmark.py googlenet-v1.xml
## Sample Output
The application outputs performance results.
    [ INFO ] OpenVINO:
    [ INFO ] Build ................................. <version>
    [ INFO ] Count:      2333 iterations
    [ INFO ] Duration:   10003.59 ms
    [ INFO ] Latency:
    [ INFO ]     Median:  3.90 ms
    [ INFO ]     Average: 4.29 ms
    [ INFO ]     Min:     3.30 ms
    [ INFO ]     Max:     10.11 ms
    [ INFO ] Throughput: 233.22 FPS
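As a sanity check on the figures above, throughput is simply the iteration count divided by the total duration in seconds. A minimal arithmetic sketch (the variable names are mine, not from the sample):

```python
# Reported values from the sample output above
count = 2333            # iterations
duration_ms = 10003.59  # total duration in milliseconds

# Throughput (FPS) = iterations / duration in seconds
fps = count / (duration_ms / 1000)
print(f"{fps:.2f} FPS")  # matches the reported 233.22 FPS
```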
## See Also
- Integrate the OpenVINO™ Runtime with Your Application
- Using OpenVINO™ Toolkit Samples
- [Model Downloader](@ref omz_tools_downloader)
- Model Optimizer