# Sync Benchmark Python Sample
@sphinxdirective
.. meta::
   :description: Learn how to estimate performance of a model using Synchronous Inference Request (Python) API.
This sample demonstrates how to estimate the performance of a model using the Synchronous Inference Request API. Synchronous inference is only worthwhile in latency-oriented scenarios. Models with static input shapes are supported. Unlike :doc:`demos <omz_demos>`, this sample has no configurable command-line arguments other than the model path. Feel free to modify the sample's source code to try out different options.
.. tab-set::
.. tab-item:: Requirements
+--------------------------------+------------------------------------------------------------------------------+
| Options | Values |
+================================+==============================================================================+
| Validated Models | :doc:`alexnet <omz_models_model_alexnet>`, |
| | :doc:`googlenet-v1 <omz_models_model_googlenet_v1>`, |
| | :doc:`yolo-v3-tf <omz_models_model_yolo_v3_tf>`, |
| | :doc:`face-detection-0200 <omz_models_model_face_detection_0200>` |
+--------------------------------+------------------------------------------------------------------------------+
| Model Format | OpenVINO™ toolkit Intermediate Representation |
| | (\*.xml + \*.bin), ONNX (\*.onnx) |
+--------------------------------+------------------------------------------------------------------------------+
| Supported devices | :doc:`All <openvino_docs_OV_UG_supported_plugins_Supported_Devices>` |
+--------------------------------+------------------------------------------------------------------------------+
| Other language realization | :doc:`C++ <openvino_inference_engine_samples_sync_benchmark_README>` |
+--------------------------------+------------------------------------------------------------------------------+
.. tab-item:: Python API
The following Python API is used in the application:
+--------------------------------+-------------------------------------------------+----------------------------------------------+
| Feature | API | Description |
+================================+=================================================+==============================================+
| OpenVINO Runtime Version       | [openvino.runtime.get_version]                  | Get OpenVINO API version.                    |
+--------------------------------+-------------------------------------------------+----------------------------------------------+
| Basic Infer Flow | [openvino.runtime.Core], | Common API to do inference: compile a model, |
| | [openvino.runtime.Core.compile_model], | configure input tensors. |
| | [openvino.runtime.InferRequest.get_tensor] | |
+--------------------------------+-------------------------------------------------+----------------------------------------------+
| Synchronous Infer              | [openvino.runtime.InferRequest.infer]           | Do synchronous inference.                    |
+--------------------------------+-------------------------------------------------+----------------------------------------------+
| Model Operations | [openvino.runtime.CompiledModel.inputs] | Get inputs of a model. |
+--------------------------------+-------------------------------------------------+----------------------------------------------+
| Tensor Operations | [openvino.runtime.Tensor.get_shape], | Get a tensor shape and its data. |
| | [openvino.runtime.Tensor.data] | |
+--------------------------------+-------------------------------------------------+----------------------------------------------+
.. tab-item:: Sample Code
.. doxygensnippet:: samples/python/benchmark/sync_benchmark/sync_benchmark.py
:language: python
How It Works
####################
The sample compiles a model for a given device, randomly generates input data, and performs synchronous inference repeatedly for a given number of seconds. It then processes and reports performance results.
You can see the explicit description of each sample step at the :doc:`Integration Steps <openvino_docs_OV_UG_Integrate_OV_with_your_application>` section of the "Integrate OpenVINO™ Runtime with Your Application" guide.
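The measurement loop described above can be sketched as follows. This is a simplified, framework-agnostic illustration rather than the sample's actual code: ``run_inference`` is a hypothetical stand-in for ``openvino.runtime.InferRequest.infer``, and the dummy callable at the bottom merely simulates a model with ~1 ms latency.

```python
import time

def benchmark_sync(run_inference, seconds=10.0):
    """Call `run_inference` in a loop for roughly `seconds`,
    recording each call's latency in milliseconds."""
    latencies_ms = []
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        start = time.perf_counter()
        run_inference()  # stand-in for infer_request.infer()
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    return latencies_ms

# Dummy "model" that sleeps for about 1 ms, benchmarked for 0.1 s:
latencies = benchmark_sync(lambda: time.sleep(0.001), seconds=0.1)
print(f"Count: {len(latencies)} iterations")
```

The real sample differs mainly in that it fills the input tensors with random data before the loop and computes the statistics shown in the Sample Output section from the collected latencies.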
Running
####################
.. code-block:: sh

   python sync_benchmark.py <path_to_model>
To run the sample, you need to specify a model:
- You can use :doc:`public <omz_models_group_public>` or :doc:`Intel's <omz_models_group_intel>` pre-trained models from the Open Model Zoo. The models can be downloaded using the :doc:`Model Downloader <omz_tools_downloader>`.
.. note::

   Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using the :doc:`model conversion API <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.

   The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
Example
++++++++++++++++++++
- Install the ``openvino-dev`` Python package to use Open Model Zoo Tools:

  .. code-block:: sh

     python -m pip install openvino-dev[caffe]
- Download a pre-trained model using:

  .. code-block:: sh

     omz_downloader --name googlenet-v1
- If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

  .. code-block:: sh

     omz_converter --name googlenet-v1
- Perform benchmarking using the ``googlenet-v1`` model on a ``CPU``:

  .. code-block:: sh

     python sync_benchmark.py googlenet-v1.xml
Sample Output
####################
The application outputs performance results.
.. code-block:: sh

   [ INFO ] OpenVINO:
   [ INFO ] Build .................................
   [ INFO ] Count:          2333 iterations
   [ INFO ] Duration:       10003.59 ms
   [ INFO ] Latency:
   [ INFO ]     Median:     3.90 ms
   [ INFO ]     Average:    4.29 ms
   [ INFO ]     Min:        3.30 ms
   [ INFO ]     Max:        10.11 ms
   [ INFO ] Throughput: 233.22 FPS
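The latency figures are plain statistics over the per-iteration timings, and the throughput is the iteration count divided by the total duration. They can be reproduced with the standard library; the numbers below are made-up example latencies, not the sample's output:

```python
import statistics

# Hypothetical per-iteration latencies collected during the run, in ms:
latencies_ms = [3.3, 3.9, 4.1, 10.11, 3.9]
duration_ms = sum(latencies_ms)

print(f"Median:     {statistics.median(latencies_ms):.2f} ms")
print(f"Average:    {statistics.mean(latencies_ms):.2f} ms")
print(f"Min:        {min(latencies_ms):.2f} ms")
print(f"Max:        {max(latencies_ms):.2f} ms")
# FPS = iterations per second of wall-clock benchmarking time:
print(f"Throughput: {len(latencies_ms) / duration_ms * 1000.0:.2f} FPS")
```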
See Also
####################
- :doc:`Integrate the OpenVINO™ Runtime with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`
- :doc:`Using OpenVINO Samples <openvino_docs_OV_UG_Samples_Overview>`
- :doc:`Model Downloader <omz_tools_downloader>`
- :doc:`Convert a Model <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`
@endsphinxdirective