DOCS shift to rst - Image Classification Async C++ Sample & Image Classification Async Python* Sample (#16580)
This commit is contained in:
parent 5e9ea6a146
commit 7ccf1c89cf

# Image Classification Async C++ Sample {#openvino_inference_engine_samples_classification_sample_async_README}

@sphinxdirective

This sample demonstrates how to do inference of image classification models using the Asynchronous Inference Request API.

Models with only one input and output are supported.

In addition to regular images, the sample also supports single-channel ``ubyte`` images as an input for the LeNet model.

The following C++ API is used in the application:

+--------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------------------------+
| Feature                  | API                                                                   | Description                                                                             |
+==========================+=======================================================================+=========================================================================================+
| Asynchronous Infer       | ``ov::InferRequest::start_async``, ``ov::InferRequest::set_callback`` | Do asynchronous inference with callback.                                                |
+--------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------------------------+
| Model Operations         | ``ov::Output::get_shape``, ``ov::set_batch``                          | Manage the model, operate with its batch size. Set batch size using input image count. |
+--------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------------------------+
| Infer Request Operations | ``ov::InferRequest::get_input_tensor``                                | Get an input tensor.                                                                    |
+--------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------------------------+
| Tensor Operations        | ``ov::shape_size``, ``ov::Tensor::data``                              | Get a tensor shape size and its data.                                                   |
+--------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------------------------+

Basic OpenVINO™ Runtime API is covered by :doc:`Hello Classification C++ sample <openvino_inference_engine_samples_hello_classification_README>`.

+----------------------------+---------------------------------------------------------------------------------------------------------+
| Options                    | Values                                                                                                  |
+============================+=========================================================================================================+
| Validated Models           | :doc:`alexnet <omz_models_model_alexnet>`, :doc:`googlenet-v1 <omz_models_model_googlenet_v1>`          |
+----------------------------+---------------------------------------------------------------------------------------------------------+
| Model Format               | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)                         |
+----------------------------+---------------------------------------------------------------------------------------------------------+
| Supported devices          | :doc:`All <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`                                    |
+----------------------------+---------------------------------------------------------------------------------------------------------+
| Other language realization | :doc:`Python <openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README>`   |
+----------------------------+---------------------------------------------------------------------------------------------------------+

How It Works
############

At startup, the sample application reads command-line parameters and loads the specified model and input images (or a folder with images) to the OpenVINO™ Runtime plugin. The batch size of the model is set according to the number of images read. Batching is independent of the asynchronous mode: asynchronous execution works efficiently with any batch size.

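For illustration only, this batch-resizing step might look roughly like the following sketch. It is not the sample's actual source code; the helper name ``compile_with_batch``, the model path, and the device are placeholders.

.. code-block:: cpp

   // Hypothetical sketch: resize the model's batch dimension to match the
   // number of input images before compiling the model.
   #include <cstddef>
   #include <cstdint>
   #include <memory>
   #include <string>

   #include <openvino/openvino.hpp>

   ov::CompiledModel compile_with_batch(ov::Core& core,
                                        const std::string& model_path,
                                        std::size_t num_images,
                                        const std::string& device) {
       std::shared_ptr<ov::Model> model = core.read_model(model_path);
       ov::set_batch(model, static_cast<std::int64_t>(num_images));  // batch size = number of read images
       return core.compile_model(model, device);
   }
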
Then, the sample creates an inference request object and assigns a completion callback to it. Inside the completion callback, the same inference request is started again.

After that, the application starts inference for the first infer request and waits until the 10th inference request execution has completed. Asynchronous mode can increase the throughput of processed images.

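The callback pattern described above might be sketched roughly as follows. This is a simplified, hypothetical outline rather than the sample's actual source; the model path, device, and ``num_iterations`` value are placeholders.

.. code-block:: cpp

   // Hypothetical sketch of the asynchronous callback loop described above.
   #include <condition_variable>
   #include <cstddef>
   #include <cstring>
   #include <exception>
   #include <mutex>

   #include <openvino/openvino.hpp>

   int main() {
       ov::Core core;
       ov::CompiledModel compiled_model = core.compile_model("googlenet-v1.xml", "CPU");
       ov::InferRequest infer_request = compiled_model.create_infer_request();

       // Real image data would be copied into the input tensor here.
       ov::Tensor input = infer_request.get_input_tensor();
       std::memset(input.data(), 0, input.get_byte_size());

       const std::size_t num_iterations = 10;  // the sample completes 10 asynchronous executions
       std::size_t completed = 0;
       std::mutex m;
       std::condition_variable cv;

       infer_request.set_callback([&](std::exception_ptr ex) {
           if (ex) return;  // error handling omitted in this sketch
           std::lock_guard<std::mutex> lock(m);
           if (++completed < num_iterations) {
               infer_request.start_async();  // restart inference from inside the callback
           } else {
               cv.notify_one();
           }
       });

       infer_request.start_async();  // first asynchronous execution
       std::unique_lock<std::mutex> lock(m);
       cv.wait(lock, [&] { return completed == num_iterations; });
       return 0;
   }
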
When inference is done, the application outputs data to the standard output stream. You can put class labels in a ``.labels`` file next to the model to get more readable output.

You can see the explicit description of each sample step at the :doc:`Integration Steps <openvino_docs_OV_UG_Integrate_OV_with_your_application>` section of the "Integrate OpenVINO™ Runtime with Your Application" guide.

Building
########

To build the sample, please use the instructions available at the :doc:`Build the Sample Applications <openvino_docs_OV_UG_Samples_Overview>` section in the OpenVINO™ Toolkit Samples guide.

Running
#######

Run the application with the ``-h`` option to see the usage instructions:

.. code-block:: sh

   classification_sample_async -h

Usage instructions:

.. code-block:: sh

   [ INFO ] OpenVINO Runtime version ......... <version>
   [ INFO ] Build ........... <build>

   classification_sample_async [OPTION]
   Options:

       -h                      Print usage instructions.
       -m "<path>"             Required. Path to an .xml file with a trained model.
       -i "<path>"             Required. Path to a folder with images or path to image files: a .ubyte file for LeNet and a .bmp file for other models.
       -d "<device>"           Optional. Specify the target device to infer on (the list of available devices is shown below). Default value is CPU. Use "-d HETERO:<comma_separated_devices_list>" format to specify the HETERO plugin. Sample will look for a suitable plugin for the device specified.

   Available target devices: <devices>

To run the sample, you need to specify a model and an image:

- You can use :doc:`public <omz_models_group_public>` or :doc:`Intel's <omz_models_group_intel>` pre-trained models from the Open Model Zoo. The models can be downloaded using the :doc:`Model Downloader <omz_tools_downloader>`.
- You can use images from the media files collection available `here <https://storage.openvinotoolkit.org/data/test_data>`__.

.. note::

   - By default, OpenVINO™ Toolkit Samples and Demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with ``--reverse_input_channels`` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of :doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`.
   - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using the :doc:`Model Optimizer tool <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
   - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.

Example
+++++++

1. Install the ``openvino-dev`` Python package to use Open Model Zoo Tools:

   .. code-block:: sh

      python -m pip install openvino-dev[caffe]

2. Download a pre-trained model using:

   .. code-block:: sh

      omz_downloader --name googlenet-v1

3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

   .. code-block:: sh

      omz_converter --name googlenet-v1

4. Perform inference of ``dog.bmp`` using the ``googlenet-v1`` model on a ``GPU``, for example:

   .. code-block:: sh

      classification_sample_async -m googlenet-v1.xml -i dog.bmp -d GPU

Sample Output
#############

.. code-block:: sh

   [ INFO ] OpenVINO Runtime version ......... <version>
   [ INFO ] Build ........... <build>
   [ INFO ]
   [ INFO ] Parsing input parameters
   [ INFO ] Files were added: 1
   [ INFO ] /images/dog.bmp
   [ INFO ] Loading model files:
   [ INFO ] /models/googlenet-v1.xml
   [ INFO ] model name: GoogleNet
   [ INFO ] inputs
   [ INFO ] input name: data
   [ INFO ] input type: f32
   [ INFO ] input shape: {1, 3, 224, 224}
   [ INFO ] outputs
   [ INFO ] output name: prob
   [ INFO ] output type: f32
   [ INFO ] output shape: {1, 1000}
   [ INFO ] Read input images
   [ INFO ] Set batch size 1
   [ INFO ] model name: GoogleNet
   [ INFO ] inputs
   [ INFO ] input name: data
   [ INFO ] input type: u8
   [ INFO ] input shape: {1, 224, 224, 3}
   [ INFO ] outputs
   [ INFO ] output name: prob
   [ INFO ] output type: f32
   [ INFO ] output shape: {1, 1000}
   [ INFO ] Loading model to the device GPU
   [ INFO ] Create infer request
   [ INFO ] Start inference (asynchronous executions)
   [ INFO ] Completed 1 async request execution
   [ INFO ] Completed 2 async request execution
   [ INFO ] Completed 3 async request execution
   [ INFO ] Completed 4 async request execution
   [ INFO ] Completed 5 async request execution
   [ INFO ] Completed 6 async request execution
   [ INFO ] Completed 7 async request execution
   [ INFO ] Completed 8 async request execution
   [ INFO ] Completed 9 async request execution
   [ INFO ] Completed 10 async request execution
   [ INFO ] Completed async requests execution

   Top 10 results:

   Image /images/dog.bmp

   classid probability
   ------- -----------
   156     0.8935547
   218     0.0608215
   215     0.0217133
   219     0.0105667
   212     0.0018835
   217     0.0018730
   152     0.0018730
   157     0.0015745
   154     0.0012817
   220     0.0010099

See Also
########

- :doc:`Integrate the OpenVINO™ Runtime with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`
- :doc:`Using OpenVINO™ Toolkit Samples <openvino_docs_OV_UG_Samples_Overview>`
- :doc:`Model Downloader <omz_tools_downloader>`
- :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`

@endsphinxdirective

# Image Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README}

@sphinxdirective

This sample demonstrates how to do inference of image classification models using the Asynchronous Inference Request API.

Models with only one input and output are supported.

The following Python API is used in the application:

+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------+
| Feature            | API                                                                                                                                                                                                              | Description               |
+====================+==================================================================================================================================================================================================================+===========================+
| Asynchronous Infer | `openvino.runtime.AsyncInferQueue <https://docs.openvino.ai/2022.3/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html>`__ ,                                                                   | Do asynchronous inference |
|                    | `openvino.runtime.AsyncInferQueue.set_callback <https://docs.openvino.ai/2022.3/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html#openvino.runtime.AsyncInferQueue.set_callback>`__ ,        |                           |
|                    | `openvino.runtime.AsyncInferQueue.start_async <https://docs.openvino.ai/2022.3/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html#openvino.runtime.AsyncInferQueue.start_async>`__ ,          |                           |
|                    | `openvino.runtime.AsyncInferQueue.wait_all <https://docs.openvino.ai/2022.3/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html#openvino.runtime.AsyncInferQueue.wait_all>`__ ,                |                           |
|                    | `openvino.runtime.InferRequest.results <https://docs.openvino.ai/2022.3/api/ie_python_api/_autosummary/openvino.runtime.InferRequest.html#openvino.runtime.InferRequest.results>`__                             |                           |
+--------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------+

Basic OpenVINO™ Runtime API is covered by :doc:`Hello Classification Python* Sample <openvino_inference_engine_ie_bridges_python_sample_hello_classification_README>`.

+----------------------------+------------------------------------------------------------------------------------+
| Options                    | Values                                                                              |
+============================+=====================================================================================+
| Validated Models           | :doc:`alexnet <omz_models_model_alexnet>`                                           |
+----------------------------+------------------------------------------------------------------------------------+
| Model Format               | OpenVINO™ toolkit Intermediate Representation (.xml + .bin), ONNX (.onnx)           |
+----------------------------+------------------------------------------------------------------------------------+
| Supported devices          | :doc:`All <openvino_docs_OV_UG_supported_plugins_Supported_Devices>`                |
+----------------------------+------------------------------------------------------------------------------------+
| Other language realization | :doc:`C++ <openvino_inference_engine_samples_classification_sample_async_README>`   |
+----------------------------+------------------------------------------------------------------------------------+

How It Works
############

At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image(s) to the OpenVINO™ Runtime plugin, performs asynchronous inference, and processes output data, logging each step in a standard output stream.

You can see the explicit description of each sample step at the :doc:`Integration Steps <openvino_docs_OV_UG_Integrate_OV_with_your_application>` section of the "Integrate OpenVINO™ Runtime with Your Application" guide.

Running
#######

Run the application with the ``-h`` option to see the usage message:

.. code-block:: sh

   python classification_sample_async.py -h

Usage message:

.. code-block:: sh

   usage: classification_sample_async.py [-h] -m MODEL -i INPUT [INPUT ...]
                                         [-d DEVICE]

   Options:
     -h, --help            Show this help message and exit.
     -m MODEL, --model MODEL
                           Required. Path to an .xml or .onnx file with a trained
   ...
                           GPU or HETERO: is acceptable. The sample
                           will look for a suitable plugin for device specified.
                           Default value is CPU.

To run the sample, you need to specify a model and an image:

- You can use :doc:`public <omz_models_group_public>` or :doc:`Intel's <omz_models_group_intel>` pre-trained models from the Open Model Zoo. The models can be downloaded using the :doc:`Model Downloader <omz_tools_downloader>`.
- You can use images from the media files collection available `here <https://storage.openvinotoolkit.org/data/test_data>`__.

.. note::

   - By default, OpenVINO™ Toolkit Samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with ``--reverse_input_channels`` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of :doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`.
   - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using the :doc:`Model Optimizer tool <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
   - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.

Example
+++++++

1. Install the ``openvino-dev`` Python package to use Open Model Zoo Tools:

   .. code-block:: sh

      python -m pip install openvino-dev[caffe]

2. Download a pre-trained model:

   .. code-block:: sh

      omz_downloader --name alexnet

3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

   .. code-block:: sh

      omz_converter --name alexnet

4. Perform inference of ``banana.jpg`` and ``car.bmp`` using the ``alexnet`` model on a ``GPU``, for example:

   .. code-block:: sh

      python classification_sample_async.py -m alexnet.xml -i banana.jpg car.bmp -d GPU

Sample Output
#############

The sample application logs each step in a standard output stream and outputs top-10 inference results.

.. code-block:: sh

   [ INFO ] Creating OpenVINO Runtime Core
   [ INFO ] Reading the model: C:/test_data/models/alexnet.xml
   [ INFO ] Loading the model to the plugin
   [ INFO ] Starting inference in asynchronous mode
   [ INFO ] Image path: /test_data/images/banana.jpg
   [ INFO ] Top 10 results:
   [ INFO ] class_id probability
   [ INFO ] --------------------
   [ INFO ] 954      0.9707602
   [ INFO ] 666      0.0216788
   [ INFO ] 659      0.0032558
   [ INFO ] 435      0.0008082
   [ INFO ] 809      0.0004359
   [ INFO ] 502      0.0003860
   [ INFO ] 618      0.0002867
   [ INFO ] 910      0.0002866
   [ INFO ] 951      0.0002410
   [ INFO ] 961      0.0002193
   [ INFO ]
   [ INFO ] Image path: /test_data/images/car.bmp
   [ INFO ] Top 10 results:
   [ INFO ] class_id probability
   [ INFO ] --------------------
   [ INFO ] 656      0.5120340
   [ INFO ] 874      0.1142275
   [ INFO ] 654      0.0697167
   [ INFO ] 436      0.0615163
   [ INFO ] 581      0.0552262
   [ INFO ] 705      0.0304179
   [ INFO ] 675      0.0151660
   [ INFO ] 734      0.0151582
   [ INFO ] 627      0.0148493
   [ INFO ] 757      0.0120964
   [ INFO ]
   [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool

See Also
########

- :doc:`Integrate the OpenVINO™ Runtime with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`
- :doc:`Using OpenVINO™ Toolkit Samples <openvino_docs_OV_UG_Samples_Overview>`
- :doc:`Model Downloader <omz_tools_downloader>`
- :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`

@endsphinxdirective