Update readme of IE Python samples (#4908)

* Update readme of python samples

* Add info about ubyte image support to ngraph function creation sample readme

* Move lang after sample name

* Update sample output sections

* Remove API used in the hello classification sample from other sample readme's

* Update sample readme files to resolve conversations

* Add a note about color inversion

* Fix the wrong link to shape inference feature

* Update sample output for the hello query device sample
Dmitry Pigasin 2021-04-14 13:25:39 +03:00 committed by GitHub
parent 19ace232cf
commit 52a37f853b
7 changed files with 708 additions and 242 deletions


@@ -1,79 +1,154 @@
# Image Classification Python* Sample Async {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README}
# Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README}
This sample demonstrates how to run the Image Classification sample application with inference executed in the asynchronous mode.
This sample demonstrates how to do inference of image classification networks using Asynchronous Inference Request API.
Models with only 1 input and output are supported.
The sample demonstrates how to use the new Infer Request API of Inference Engine in applications.
Refer to [Integrate the Inference Engine New Request API with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) for details.
The sample demonstrates how to build and execute an inference request 10 times in asynchronous mode, using classification networks as an example.
The asynchronous mode might increase the throughput of image processing.
The following Inference Engine Python API is used in the application:
Batch mode is independent of asynchronous mode; asynchronous execution works efficiently with any batch size.
| Feature | API | Description |
| :----------------------- | :-------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Asynchronous Infer | [InferRequest.async_infer], [InferRequest.wait], [Blob.buffer] | Do asynchronous inference |
| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :-------------------------------------------------------------------------------------------------------- |
| Validated Models | [alexnet](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/alexnet/alexnet.md) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/classification_sample_async) |
## How It Works
Upon start-up, the sample application reads command-line parameters and loads the specified network and input images (or a folder with images) to the Inference Engine plugin. The batch size of the network is set according to the number of images read.
At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image(s) to the Inference Engine plugin, performs asynchronous inference, and processes output data, logging each step in a standard output stream.
Then, the sample creates an inference request object and assigns a completion callback to it. Within the completion callback, the inference request is executed again.
After that, the application starts inference for the first infer request and waits for the 10th inference request to complete.
When inference is done, the application outputs data to the standard output stream.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
You can see the detailed description of each sample step in the [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
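The following is a minimal sketch of that asynchronous flow built from the API listed above. The model name, image paths, device, and two-request setup are illustrative assumptions, not the exact sample code:

```python
# Rough sketch: run several infer requests concurrently (illustrative paths and names).
import cv2
import numpy as np
from openvino.inference_engine import IECore, StatusCode

ie = IECore()
net = ie.read_network(model='alexnet.xml')  # example model; the .bin is expected next to the .xml
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))

images = ['cat.bmp', 'car.bmp']  # example inputs
exec_net = ie.load_network(network=net, device_name='CPU', num_requests=len(images))

n, c, h, w = net.input_info[input_blob].input_data.shape
for request, path in zip(exec_net.requests, images):
    image = cv2.resize(cv2.imread(path), (w, h)).transpose((2, 0, 1))  # HWC -> CHW
    blob = np.expand_dims(image, 0).astype(np.float32)
    request.async_infer({input_blob: blob})  # returns immediately

for i, request in enumerate(exec_net.requests):
    if request.wait() == StatusCode.OK:  # block until this request completes
        probs = request.output_blobs[out_blob].buffer.squeeze()
        print(f'request {i}: top class id = {int(np.argmax(probs))}')
```

Creating one infer request per image lets the device overlap their execution, which is where the throughput benefit of the asynchronous mode comes from.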
## Running
Running the application with the <code>-h</code> option yields the following usage message:
Run the application with the <code>-h</code> option to see the usage message:
```
python3 classification_sample_async.py -h
python classification_sample_async.py -h
```
The command yields the following usage message:
Usage message:
```
usage: classification_sample_async.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION]
[-d DEVICE] [--labels LABELS]
[-nt NUMBER_TOP]
[-l EXTENSION] [-c CONFIG] [-d DEVICE]
[--labels LABELS] [-nt NUMBER_TOP]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml file with a trained model.
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Required. Path to a folder with images or path to an
image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. Required for CPU custom layers. Absolute
path to a shared library with the kernels
implementations.
Required. Path to an image file(s).
-l EXTENSION, --extension EXTENSION
Optional. Required by the CPU Plugin for executing the
custom operation on a CPU. Absolute path to a shared
library with the kernels implementations.
-c CONFIG, --config CONFIG
Optional. Required by GPU or VPU Plugins for the
custom operation kernel. Absolute path to operation
description file (.xml).
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, FPGA, HDDL or MYRIAD is acceptable. The sample
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU
--labels LABELS Optional. Labels mapping file
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results
Optional. Number of top results.
```
Running the application with the empty list of options yields the usage message given above and an error message.
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTES**:
>
> * By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> * Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> * The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
You can do inference of an image using a pre-trained model on a GPU using the following command:
You can do inference of an image using a trained AlexNet network on FPGA with fallback to CPU using the following command:
```
python3 classification_sample_async.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d HETERO:FPGA,CPU
python classification_sample_async.py -m <path_to_model>/alexnet.xml -i <path_to_image>/cat.bmp <path_to_image>/car.bmp -d GPU
```
## Sample Output
By default, the application outputs top-10 inference results for each infer request.
It also provides the throughput value measured in frames per second.
The sample application logs each step in a standard output stream and outputs top-10 inference results.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\alexnet.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image images\cat.bmp is resized from (300, 300) to (227, 227)
[ WARNING ] Image images\car.bmp is resized from (259, 787) to (227, 227)
[ INFO ] Starting inference in asynchronous mode
[ INFO ] Infer request 0 returned 0
[ INFO ] Image path: images\cat.bmp
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 435 0.0996898
[ INFO ] 876 0.0900239
[ INFO ] 999 0.0691452
[ INFO ] 587 0.0390186
[ INFO ] 666 0.0360390
[ INFO ] 419 0.0308306
[ INFO ] 285 0.0306287
[ INFO ] 700 0.0293007
[ INFO ] 696 0.0202707
[ INFO ] 631 0.0199126
[ INFO ]
[ INFO ] Infer request 1 returned 0
[ INFO ] Image path: images\car.bmp
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 479 0.7561803
[ INFO ] 511 0.0755696
[ INFO ] 436 0.0730265
[ INFO ] 817 0.0460268
[ INFO ] 656 0.0303792
[ INFO ] 661 0.0055282
[ INFO ] 581 0.0031296
[ INFO ] 468 0.0029875
[ INFO ] 717 0.0022792
[ INFO ] 627 0.0016297
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
* [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
* [Model Downloader](@ref omz_tools_downloader_README)
* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
[IECore.set_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[InferRequest.async_infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InferRequest.html#a95ebe0368cdf4d5d64f9fddc8ee1cd0e
[InferRequest.wait]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InferRequest.html#a936fa50a7531e2f9a9e9c3d45afc9b43
<!-- TODO replace by python API link -->
[Blob.buffer]:https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1Blob.html#a0cad47b43204b115b4017b6b2564fa7e


@@ -1,73 +1,121 @@
# Image Classification Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_README}
# Hello Classification Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_classification_README}
This topic demonstrates how to run the Image Classification sample application, which performs
inference using image classification networks such as AlexNet and GoogLeNet.
This sample demonstrates how to do inference of image classification networks using Synchronous Inference Request API.
Models with only 1 input and output are supported.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------- | :-------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Basic Infer Flow | [IECore], [IECore.read_network], [IECore.load_network] | Common API to do inference |
| Synchronous Infer | [ExecutableNetwork.infer] | Do synchronous inference |
| Network Operations | [IENetwork.input_info], [IENetwork.outputs], [InputInfoPtr.precision], [DataPtr.precision], [InputInfoPtr.input_data.shape] | Network management: configure input and output blobs |
| Options | Values |
| :------------------------- | :-------------------------------------------------------------------------------------------------------- |
| Validated Models | [alexnet](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/alexnet/alexnet.md) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/hello_classification), [C](../../../c/samples/hello_classification) |
## How It Works
Upon start-up, the sample application reads command-line parameters and loads a network and an image to the Inference Engine plugin. When inference is done, the application creates an output image and outputs data to the standard output stream.
At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
You can see the detailed description of each sample step in the [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
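The synchronous flow from the table above can be sketched roughly as follows; the model and image paths, device, and precision settings are illustrative assumptions, not the exact sample code:

```python
# Rough sketch of the synchronous flow (illustrative paths, no error handling).
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='alexnet.xml')  # example model; the .bin is expected next to the .xml
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))

# Accept 8-bit images as input and get FP32 scores back
net.input_info[input_blob].precision = 'U8'
net.outputs[out_blob].precision = 'FP32'

exec_net = ie.load_network(network=net, device_name='CPU')

n, c, h, w = net.input_info[input_blob].input_data.shape
image = cv2.resize(cv2.imread('cat.bmp'), (w, h)).transpose((2, 0, 1))  # HWC -> CHW

result = exec_net.infer(inputs={input_blob: np.expand_dims(image, 0)})
top10 = np.argsort(result[out_blob].squeeze())[::-1][:10]
print('Top 10 class ids:', top10)
```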
## Running
Running the application with the `-h` option yields the usage message:
Run the application with the <code>-h</code> option to see the usage message:
```
python3 classification_sample.py -h
python hello_classification.py -h
```
The command yields the following usage message:
Usage message:
```
usage: classification_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION]
[-d DEVICE] [--labels LABELS] [-nt NUMBER_TOP]
usage: hello_classification.py [-h] -m MODEL -i INPUT [-d DEVICE]
[--labels LABELS] [-nt NUMBER_TOP]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml file with a trained model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Required. Path to a folder with images or path to an
image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. Required for CPU custom layers. MKLDNN (CPU)-targeted custom layers.
Absolute path to a shared library with the kernels
implementations.
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT, --input INPUT
Required. Path to an image file.
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, FPGA, HDDL or MYRIAD is acceptable. The sample
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU
--labels LABELS Optional. Path to a labels mapping file
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results
Optional. Number of top results.
```
Running the application with the empty list of options yields the usage message given above.
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTES**:
>
> * By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> * Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> * The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
For example, to perform inference of an AlexNet model (previously converted to the Inference Engine format) on CPU, use the following command:
You can do inference of an image using a pre-trained model on a GPU using the following command:
```
python3 classification_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml
python hello_classification.py -m <path_to_model>/alexnet.xml -i <path_to_image>/cat.bmp -d GPU
```
## Sample Output
By default the application outputs top-10 inference results.
Add the `-nt` option to the previous command to modify the number of top output results.
For example, to get the top-5 results on GPU, run the following command:
The sample application logs each step in a standard output stream and outputs top-10 inference results.
```
python3 classification_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d GPU
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\alexnet.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image images\cat.bmp is resized from (300, 300) to (227, 227)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: images\cat.bmp
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 435 0.0996890
[ INFO ] 876 0.0900242
[ INFO ] 999 0.0691449
[ INFO ] 587 0.0390189
[ INFO ] 666 0.0360393
[ INFO ] 419 0.0308307
[ INFO ] 285 0.0306287
[ INFO ] 700 0.0293009
[ INFO ] 696 0.0202707
[ INFO ] 631 0.0199126
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
* [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
* [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Model Downloader](@ref omz_tools_downloader_README)
* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519


@@ -1,50 +1,111 @@
# Hello Query Device Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README}
This topic demonstrates how to run the Hello Query Device sample application, which queries Inference Engine
devices and prints their metrics and default configuration values. The sample shows
how to use Query Device API feature.
This sample demonstrates how to list Inference Engine devices and print their metrics and default configuration values, using the [Query Device API feature](../../../../../docs/IE_DG/InferenceEngine_QueryAPI.md).
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------- | :--------------------------------------- | :-------------------- |
| Basic | [IECore] | Common API |
| Query Device | [IECore.get_metric], [IECore.get_config] | Get device properties |
| Options | Values |
| :------------------------- | :---------------------------------------------------------------------- |
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/hello_query_device) |
## How It Works
The sample queries all available Inference Engine devices and prints their supported metrics and plugin configuration parameters.
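A rough sketch of this query loop, using the [IECore] metrics API listed above (illustrative only, error handling simplified):

```python
# Rough sketch: list devices and query their metrics and default config values.
from openvino.inference_engine import IECore

ie = IECore()
for device in ie.available_devices:
    print(f'{device}:')
    for metric in ie.get_metric(device, 'SUPPORTED_METRICS'):
        try:
            print(f'  METRIC {metric}: {ie.get_metric(device, metric)}')
        except Exception:
            pass  # some metrics need extra arguments and cannot be printed this way
    for key in ie.get_metric(device, 'SUPPORTED_CONFIG_KEYS'):
        print(f'  CONFIG {key}: {ie.get_config(device, key)}')
```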
## Running
The sample has no command-line parameters. To see the report, run the following command:
```
python3 hello_query_device.py
python hello_query_device.py
```
## Sample Output
The application prints all available devices with their supported metrics and default values for configuration parameters. For example:
For example:
```
Available devices:
Device: CPU
Metrics:
AVAILABLE_DEVICES: 0
SUPPORTED_METRICS: AVAILABLE_DEVICES, SUPPORTED_METRICS, FULL_DEVICE_NAME, OPTIMIZATION_CAPABILITIES, SUPPORTED_CONFIG_KEYS, RANGE_FOR_ASYNC_INFER_REQUESTS, RANGE_FOR_STREAMS
FULL_DEVICE_NAME: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
OPTIMIZATION_CAPABILITIES: WINOGRAD, FP32, INT8, BIN
SUPPORTED_CONFIG_KEYS: CPU_BIND_THREAD, CPU_THREADS_NUM, CPU_THROUGHPUT_STREAMS, DUMP_EXEC_GRAPH_AS_DOT, DYN_BATCH_ENABLED, DYN_BATCH_LIMIT, EXCLUSIVE_ASYNC_REQUESTS, PERF_COUNT, RANGE_FOR_ASYNC_INFER_REQUESTS, RANGE_FOR_STREAMS
RANGE_FOR_ASYNC_INFER_REQUESTS: 0, 6, 1
RANGE_FOR_STREAMS: 1, 12
Default values for device configuration keys:
CPU_BIND_THREAD: YES
CPU_THREADS_NUM: 0
CPU_THROUGHPUT_STREAMS: 1
DUMP_EXEC_GRAPH_AS_DOT:
DYN_BATCH_ENABLED: NO
DYN_BATCH_LIMIT: 0
EXCLUSIVE_ASYNC_REQUESTS: NO
PERF_COUNT: NO
RANGE_FOR_ASYNC_INFER_REQUESTS: 1
RANGE_FOR_STREAMS: 6
[ INFO ] Creating Inference Engine
[ INFO ] Available devices:
[ INFO ] CPU :
[ INFO ] SUPPORTED_METRICS:
[ INFO ] AVAILABLE_DEVICES:
[ INFO ] FULL_DEVICE_NAME: Intel(R) Core(TM) i5-8350U CPU @ 1.70GHz
[ INFO ] OPTIMIZATION_CAPABILITIES: FP32, FP16, INT8, BIN
[ INFO ] RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 1, 1
[ INFO ] RANGE_FOR_STREAMS: 1, 8
[ INFO ]
[ INFO ] SUPPORTED_CONFIG_KEYS (default values):
[ INFO ] CPU_BIND_THREAD: NUMA
[ INFO ] CPU_THREADS_NUM: 0
[ INFO ] CPU_THROUGHPUT_STREAMS: 1
[ INFO ] DUMP_EXEC_GRAPH_AS_DOT:
[ INFO ] DYN_BATCH_ENABLED: NO
[ INFO ] DYN_BATCH_LIMIT: 0
[ INFO ] ENFORCE_BF16: NO
[ INFO ] EXCLUSIVE_ASYNC_REQUESTS: NO
[ INFO ] PERF_COUNT: NO
[ INFO ]
[ INFO ] GNA :
[ INFO ] SUPPORTED_METRICS:
[ INFO ] AVAILABLE_DEVICES: GNA_SW
[ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 1
[ INFO ] FULL_DEVICE_NAME: GNA_SW
[ INFO ] GNA_LIBRARY_FULL_VERSION: 2.0.0.1047
[ INFO ]
[ INFO ] SUPPORTED_CONFIG_KEYS (default values):
[ INFO ] EXCLUSIVE_ASYNC_REQUESTS: NO
[ INFO ] GNA_COMPACT_MODE: NO
[ INFO ] GNA_DEVICE_MODE: GNA_SW_EXACT
[ INFO ] GNA_FIRMWARE_MODEL_IMAGE:
[ INFO ] GNA_FIRMWARE_MODEL_IMAGE_GENERATION:
[ INFO ] GNA_LIB_N_THREADS: 1
[ INFO ] GNA_PRECISION: I16
[ INFO ] GNA_PWL_UNIFORM_DESIGN: NO
[ INFO ] GNA_SCALE_FACTOR: 1.000000
[ INFO ] GNA_SCALE_FACTOR_0: 1.000000
[ INFO ] PERF_COUNT: NO
[ INFO ] SINGLE_THREAD: YES
[ INFO ]
[ INFO ] GPU :
[ INFO ] SUPPORTED_METRICS:
[ INFO ] AVAILABLE_DEVICES: 0
[ INFO ] FULL_DEVICE_NAME: Intel(R) UHD Graphics 620 (iGPU)
[ INFO ] OPTIMIZATION_CAPABILITIES: FP32, BIN, FP16
[ INFO ] RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 2, 1
[ INFO ] RANGE_FOR_STREAMS: 1, 2
[ INFO ]
[ INFO ] SUPPORTED_CONFIG_KEYS (default values):
[ INFO ] CACHE_DIR:
[ INFO ] CLDNN_ENABLE_FP16_FOR_QUANTIZED_MODELS: YES
[ INFO ] CLDNN_GRAPH_DUMPS_DIR:
[ INFO ] CLDNN_MEM_POOL: YES
[ INFO ] CLDNN_NV12_TWO_INPUTS: NO
[ INFO ] CLDNN_PLUGIN_PRIORITY: 0
[ INFO ] CLDNN_PLUGIN_THROTTLE: 0
[ INFO ] CLDNN_SOURCES_DUMPS_DIR:
[ INFO ] CONFIG_FILE:
[ INFO ] DEVICE_ID:
[ INFO ] DUMP_KERNELS: NO
[ INFO ] DYN_BATCH_ENABLED: NO
[ INFO ] EXCLUSIVE_ASYNC_REQUESTS: NO
[ INFO ] GPU_THROUGHPUT_STREAMS: 1
[ INFO ] PERF_COUNT: NO
[ INFO ] TUNING_FILE:
[ INFO ] TUNING_MODE: TUNING_DISABLED
[ INFO ]
```
## See Also
* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.get_metric]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#af1cdf2ecbea6399c556957c2c2fdf8eb
[IECore.get_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a48764dec7c235d2374af8b8ef53c6363


@@ -1,30 +1,120 @@
# Hello Reshape SSD C++ Sample {#openvino_inference_engine_samples_hello_reshape_ssd_README}
# Hello Reshape SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README}
This topic demonstrates how to run the Hello Reshape SSD application, which does inference using object detection
networks like SSD-VGG. The sample shows how to use [Shape Inference feature](../../../docs/IE_DG/ShapeInference.md).
This sample demonstrates how to do synchronous inference of object detection networks using [Shape Inference feature](../../../../../docs/IE_DG/ShapeInference.md).
Models with only 1 input and output are supported.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------------- | :-------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Network Operations | [IENetwork.reshape] | Network management: configure input and output blobs |
| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :-------------------------------------------------------------------------------------------------------------------------- |
| Validated Models | [mobilenet-ssd](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-ssd/mobilenet-ssd.md) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/hello_reshape_ssd) |
## How It Works
At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data.
As a result, the program creates an output image, logging each step in a standard output stream.
You can see the detailed description of each sample step in the [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
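A minimal sketch of the reshape step is shown below; the model and image paths and the device are illustrative assumptions, not the exact sample code:

```python
# Rough sketch: reshape the network to the input image size before loading it.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='mobilenet-ssd.xml')  # example model
input_blob = next(iter(net.input_info))

image = cv2.imread('cat.bmp')  # example image
h, w = image.shape[:2]

net.reshape({input_blob: [1, 3, h, w]})  # batch 1, 3 channels, image height and width
net.input_info[input_blob].precision = 'U8'
exec_net = ie.load_network(network=net, device_name='CPU')

blob = np.expand_dims(image.transpose((2, 0, 1)), 0)  # HWC -> NCHW
result = exec_net.infer(inputs={input_blob: blob})
```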
## Running
To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README).
Run the application with the <code>-h</code> option to see the usage message:
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
```
python hello_reshape_ssd.py -h
```
Usage message:
```
usage: hello_reshape_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION] [-c CONFIG]
[-d DEVICE] [--labels LABELS]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT, --input INPUT
Required. Path to an image file.
-l EXTENSION, --extension EXTENSION
Optional. Required by the CPU Plugin for executing the
custom operation on a CPU. Absolute path to a shared
library with the kernels implementations.
-c CONFIG, --config CONFIG
Optional. Required by GPU or VPU Plugins for the
custom operation kernel. Absolute path to operation
description file (.xml).
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
```
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
> * By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> * Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> * The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
You can use the following command to do inference on CPU of an image using a trained SSD network:
```sh
python3 ./hello_reshape_ssd.py -m <path_to_model>/ssd_300.xml -i <path_to_image>/500x500.bmp -d CPU
```
You can do inference of an image using a pre-trained model on a GPU using the following command:
```
python hello_reshape_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/cat.bmp -d GPU
```
## Sample Output
The application renders an image with detected objects enclosed in rectangles. It outputs the list of classes
of the detected objects along with the respective confidence values and the coordinates of the
rectangles to the standard output stream.
The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with confidence above 50%.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\mobilenet-ssd.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Reshaping the network to the height and width of the input image
[ INFO ] Input shape before reshape: [1, 3, 300, 300]
[ INFO ] Input shape after reshape: [1, 3, 300, 300]
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Found: label = 8, confidence = 1.00, coords = (115, 64), (189, 182)
[ INFO ] Image out.bmp was created!
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
* [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
* [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
* [Model Downloader](@ref omz_tools_downloader_README)
* [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
[IECore.set_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[IENetwork.reshape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a6683f0291db25f908f8d6720ab2f221a
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519


@@ -1,54 +1,109 @@
# nGraph Function Python* Sample {#openvino_inference_engine_ie_bridges_python_samples_ngraph_function_creation_sample_README}
# nGraph Function Creation Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_ngraph_function_creation_sample_README}
This sample demonstrates how to execute an inference using ngraph::Function to create a network. The sample uses the LeNet classification network as an example.
This sample demonstrates how to execute an inference using the [nGraph function feature](../../../../../docs/nGraph_DG/build_function.md) to create a network that uses weights from the LeNet classification network. You do not need an XML file: the model is created from the source code on the fly.
In addition to regular images, the sample also supports single-channel ubyte images as input.
You do not need an XML file to create a network. The API of ngraph::Function allows you to create a network on the fly from the source code. The sample uses one-channel pictures as input.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Network Operations | [IENetwork], [IENetwork.batch_size] | Network management |
| nGraph Functions | [ngraph.impl.Function], [ngraph.parameter], [ngraph.constant], [ngraph.convolution], [ngraph.add], [ngraph.max_pool], [ngraph.reshape], [ngraph.matmul], [ngraph.relu], [ngraph.softmax], [ngraph.result], ngraph.impl.Function.to_capsule | Description of a network using nGraph Python API |
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :---------------------------------------------------------------------- |
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/ngraph_function_creation_sample) |
## How It Works
Upon start-up, the sample reads command-line parameters and creates a network using the ngraph::Function API and the passed weights file.
Then, the application loads the created network and an image to the Inference Engine core.
At startup, the sample application reads command-line parameters, prepares input data, creates a network using [nGraph function feature](../../../../../docs/nGraph_DG/build_function.md) and passed weights file, loads the network and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream.
When the inference is done, the application outputs inference results to the standard output stream.
> **NOTE**: This sample supports models with FP32 weights only.
The `lenet.bin` weights file was generated by the [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
You can see the detailed description of each sample step in the [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
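Below is a toy sketch of building a network in code with nGraph and wrapping it into an `IENetwork`. The tiny topology, tensor shapes, and the `Function.to_capsule` conversion path are illustrative assumptions and differ from the sample's LeNet graph:

```python
# Toy network built in code with nGraph (not the sample's LeNet topology).
import numpy as np
import ngraph as ng
from ngraph.impl import Function
from openvino.inference_engine import IECore, IENetwork

data = ng.parameter([1, 10], np.float32, name='data')              # network input
weights = ng.constant(np.random.rand(10, 10).astype(np.float32))   # weights baked into the graph
fc = ng.matmul(data, weights, transpose_a=False, transpose_b=False)
prob = ng.softmax(ng.relu(fc), axis=1)
func = Function([ng.result(prob)], [data], 'toy_net')

# Wrap the nGraph function into an IENetwork and run it as a regular network
net = IENetwork(Function.to_capsule(func))
exec_net = IECore().load_network(network=net, device_name='CPU')
out = exec_net.infer({'data': np.ones((1, 10), dtype=np.float32)})
print(next(iter(out.values())))
```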
## Running
Running the application with the `-h` option yields the following usage message:
```sh
./ngraph_function_creation_sample -h
[ INFO ] InferenceEngine:
        API version ............ <version>
        Build .................. <number>
        Description ....... API
[ INFO ] Parsing input parameters

ngraph_function_creation_sample [OPTION]
Options:
    -h                      Print a usage message.
    -m "<path>"             Path to a .bin file with weights for the trained model
    -i "<path>"             Required. Path to an image or folder with images
    -d "<device>"           Specify the target device to infer on it. See the list of available devices below. The sample looks for a suitable plugin for the specified device. The default value is CPU.
    -nt "<integer>"         Number of top results. The default value is 10.

Available target devices: <devices>
```
Run the application with the <code>-h</code> option to see the usage message:
```
python ngraph_function_creation_sample.py -h
```
For example, to do inference of a UByte image on a GPU, run the following command:
```sh
./ngraph_function_creation_sample -i <path_to_image> -m <path_to_weights_file> -d GPU
```
Usage message:
```
usage: ngraph_function_creation_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-d DEVICE] [--labels LABELS]
[-nt NUMBER_TOP]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to a file with network weights.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Required. Path to an image file.
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results.
```
To run the sample, you need to specify model weights and an image:
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTE**:
>
> * This sample supports models with FP32 weights only.
>
> * The `lenet.bin` weights file was generated by the [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
>
> * The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
>
> * White-over-black images are automatically inverted in color for better predictions.
You can do inference of an image using a pre-trained model on a GPU using the following command:
```
python ngraph_function_creation_sample.py -m <path_to_model>/lenet.bin -i <path_to_image>/3.bmp -d GPU
```
## Sample Output
By default, the application outputs top-1 inference result for each inference request.
The sample application logs each step in a standard output stream and outputs top-10 inference results.
```
[ INFO ] Creating Inference Engine
[ INFO ] Loading the network using ngraph function with weights from <path_to_model>/lenet.bin
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] <path_to_image>/3.bmp is inverted to white over black
[ WARNING ] <path_to_image>/3.bmp is resized from (100, 100) to (28, 28)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: <path_to_image>/3.bmp
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 3 1.0000000
[ INFO ] 9 0.0000000
[ INFO ] 8 0.0000000
[ INFO ] 7 0.0000000
[ INFO ] 6 0.0000000
[ INFO ] 5 0.0000000
[ INFO ] 4 0.0000000
[ INFO ] 2 0.0000000
[ INFO ] 1 0.0000000
[ INFO ] 0 0.0000000
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## Deprecation Notice
@@ -69,4 +124,33 @@ By default, the application outputs top-1 inference result for each inference re
## See Also
* [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
* [Model Downloader](@ref omz_tools_downloader_README)
* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IENetwork]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html
[IENetwork.batch_size]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a79a647cb1b49645616eaeb2ca255ef2e
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
<!-- TODO: Replace the link by another one pointing to the Python API, if available -->
[ngraph.impl.Function]:https://docs.openvinotoolkit.org/latest/ngraph_cpp_api/classngraph_1_1Function.html
<!-- [ngraph.impl.Function.to_capsule]: -->
[ngraph.parameter]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a709acd09288f5a76ed8d07492efc3d13
[ngraph.constant]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a5b6c4e416026e007a4107b3f510d0c27
[ngraph.convolution]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a3143ff55f68428afc1b6c802ee9381e8
[ngraph.add]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#abfa0373c10ced1b1f129594d9bd8a159
[ngraph.max_pool]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#ac60b4459ad23b296086925abce6acd2d
[ngraph.reshape]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a38e1ead9435c4b75c1d891ba2dd6a62e
[ngraph.matmul]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a403b5e10e1f75aeb7569024237e85071
[ngraph.relu]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a70b9b3faf58d85e43d27fef5028117e3
[ngraph.softmax]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a632cc9a31ecaefa2a982d039ecad8d26
[ngraph.result]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a94f8bf6ab8910dfd461d09cb6c6edd11


@@ -1,74 +1,118 @@
# Object Detection Python* Sample SSD {#openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README}
# Object Detection SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README}
This sample demonstrates how to run the Object Detection sample application.
This sample demonstrates how to do inference of object detection networks using Synchronous Inference Request API.
Models with 1 input and 1 or 2 outputs are supported.
In the latter case, the names of the output blobs must be "boxes" and "labels".
The sample demonstrates how to use the new Infer Request API of Inference Engine in applications.
Refer to [Integrate the Inference Engine New Request API with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) for details.
The sample demonstrates how to build and execute an inference request, using object detection networks as an example.
The following Inference Engine Python API is used in the application:
Due to properties of SSD networks, this sample works correctly only with a batch size of 1. To process a greater number of images in a batch, a network reshape is required.
| Feature | API | Description |
| :----------------------- | :-------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Validated Models | [mobilenet-ssd](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-ssd/mobilenet-ssd.md), [face-detection-0206](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/face-detection-0206/description/face-detection-0206.md) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/object_detection_sample_ssd), [C](../../../c/samples/object_detection_sample_ssd) |
## How It Works
Upon start-up, the sample application reads command-line parameters and loads the specified network and input images (or a folder with images) to the Inference Engine plugin.
At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data.
As a result, the program creates an output image, logging each step in a standard output stream.
Then, the sample creates an inference request object and executes inference on it.
When inference is done, the application outputs data to the standard output stream and creates an output image with bounding boxes drawn atop the initial image.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
You can see the detailed description of each sample step in the [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
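A minimal sketch of running the detector and parsing the common SSD output layout is shown below; the model and image paths and the single-output case are illustrative assumptions, not the exact sample code:

```python
# Rough sketch: run the detector and keep detections with confidence above 50%.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='mobilenet-ssd.xml')  # example single-output model
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
net.input_info[input_blob].precision = 'U8'
exec_net = ie.load_network(network=net, device_name='CPU')

image = cv2.imread('cat.bmp')  # example image
ih, iw = image.shape[:2]
n, c, h, w = net.input_info[input_blob].input_data.shape
blob = np.expand_dims(cv2.resize(image, (w, h)).transpose((2, 0, 1)), 0)

result = exec_net.infer(inputs={input_blob: blob})
# SSD output layout: [1, 1, N, 7] -> [image_id, label, confidence, xmin, ymin, xmax, ymax]
for _, label, conf, xmin, ymin, xmax, ymax in result[out_blob].reshape(-1, 7):
    if conf > 0.5:
        p1 = (int(xmin * iw), int(ymin * ih))
        p2 = (int(xmax * iw), int(ymax * ih))
        cv2.rectangle(image, p1, p2, (0, 255, 0), 2)
cv2.imwrite('out.bmp', image)
```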
## Running
Running the application with the <code>-h</code> option yields the following usage message:
Run the application with the <code>-h</code> option to see the usage message:
```
python3 object_detection_sample_ssd.py -h
python object_detection_sample_ssd.py -h
```
The command yields the following usage message:
Usage message:
```
usage: object_detection_sample_ssd.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION]
[-d DEVICE] [--labels LABELS]
[-nt NUMBER_TOP]
usage: object_detection_sample_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION]
[-c CONFIG] [-d DEVICE]
[--labels LABELS]
Options:
-h, --help Show this help message and exit
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml file with a trained model
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Required. Path to a folder with images or path to an
image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. Required for CPU custom layers. Absolute
path to a shared library with the kernels
implementations
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT, --input INPUT
Required. Path to an image file.
-l EXTENSION, --extension EXTENSION
Optional. Required by the CPU Plugin for executing the
custom operation on a CPU. Absolute path to a shared
library with the kernels implementations.
-c CONFIG, --config CONFIG
Optional. Required by GPU or VPU Plugins for the
custom operation kernel. Absolute path to operation
description file (.xml).
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, FPGA, HDDL or MYRIAD is acceptable. The sample
will look for a suitable plugin for device specified
Default value is CPU
--labels LABELS Optional. Labels mapping file
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
```
Running the application with the empty list of options yields the usage message given above and an error message.
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
To run the sample, you can use RMNet_SSD or other object-detection models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
> **NOTES**:
>
> * By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> * Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> * The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
You can do inference of an image using a pre-trained model on a GPU using the following command:
```
python object_detection_sample_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/cat.bmp -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and creates an output image (`out.bmp`), drawing bounding boxes for inference results with over 50% confidence.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\mobilenet-ssd.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Found: label = 8, confidence = 1.00, coords = (115, 64), (189, 182)
[ INFO ] Image out.bmp created!
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
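The bounding boxes come from filtering the detection output by confidence. A minimal sketch of that kind of postprocessing, assuming the common SSD output layout `[1, 1, N, 7]` (this is an illustration, not the sample's exact code):

```python
import cv2
import numpy as np

def draw_detections(image: np.ndarray, detections: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Draw boxes for detections of shape [1, 1, N, 7] with confidence above the threshold."""
    h, w = image.shape[:2]
    for _, label, confidence, xmin, ymin, xmax, ymax in detections[0][0]:
        if confidence > threshold:
            # Box coordinates are normalized to [0, 1]; scale them back to pixels.
            # A labels mapping file (--labels) could translate a numeric label such as 8 into a class name.
            pt1 = (int(xmin * w), int(ymin * h))
            pt2 = (int(xmax * w), int(ymax * h))
            cv2.rectangle(image, pt1, pt2, color=(0, 255, 0), thickness=2)
    return image
```

The sample itself writes the annotated image as `out.bmp`, as shown in the log above.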
## See Also
* [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
* [Model Downloader](@ref omz_tools_downloader_README)
* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
[IECore.set_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
View File
@ -1,70 +1,134 @@
# Style Transfer Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_style_transfer_sample_README}
This sample demonstrates how to do synchronous inference of style transfer networks using the Network Batch Size feature.
You can specify multiple images as input; the network batch size will be set to their number automatically.
Models with only 1 input and output are supported.
> **NOTE**: The OpenVINO™ toolkit does not include a pre-trained model to run the Neural Style Transfer sample. A public model from the [Zhaw's Neural Style Transfer repository](https://github.com/zhaw/neural_style) can be used. Read the [Converting a Style Transfer Model from MXNet*](../../../../../docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md) topic from the [Model Optimizer Developer Guide](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) to learn about how to get the trained model and how to convert it to the Inference Engine format (\*.xml + \*.bin).
>
> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Network Operations | [IENetwork.batch_size] | Manage the network: set the batch size |
| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Validated Models | [fast-neural-style-mosaic-onnx](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/fast-neural-style-mosaic-onnx/fast-neural-style-mosaic-onnx.md) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/style_transfer_sample) |
## How It Works
At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data.
As a result, the program creates output image(s), logging each step in a standard output stream.
You can find an explicit description of each sample step in the [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
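Under the stated assumptions (placeholder model and image paths, CPU device), that flow could be sketched with the Inference Engine Python API roughly as follows; this is an illustration, not the sample's exact code:

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='fast-neural-style-mosaic-onnx.onnx')   # placeholder model path
images = [cv2.imread(p) for p in ('car.png', 'cat.jpg')]             # placeholder input images

# The batch size follows the number of input images.
net.batch_size = len(images)

input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
_, _, h, w = net.input_info[input_blob].input_data.shape

# Resize to the network input size and convert HWC -> CHW.
batch = np.stack([cv2.resize(img, (w, h)).transpose(2, 0, 1) for img in images]).astype(np.float32)

exec_net = ie.load_network(network=net, device_name='CPU')
res = exec_net.infer(inputs={input_blob: batch})[out_blob]  # shape: [N, C, H, W]
```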
## Running
Run the application with the <code>-h</code> option to see the usage message:
```
python style_transfer_sample.py -h
```
Usage message:
```
usage: style_transfer_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
                                [-l EXTENSION] [-c CONFIG] [-d DEVICE]
                                [--original_size] [--mean_val_r MEAN_VAL_R]
                                [--mean_val_g MEAN_VAL_G]
                                [--mean_val_b MEAN_VAL_B]

Options:
  -h, --help            Show this help message and exit.
  -m MODEL, --model MODEL
                        Required. Path to an .xml or .onnx file with a trained
                        model.
  -i INPUT [INPUT ...], --input INPUT [INPUT ...]
                        Required. Path to an image file.
  -l EXTENSION, --extension EXTENSION
                        Optional. Required by the CPU Plugin for executing the
                        custom operation on a CPU. Absolute path to a shared
                        library with the kernels implementations.
  -c CONFIG, --config CONFIG
                        Optional. Required by GPU or VPU Plugins for the
                        custom operation kernel. Absolute path to operation
                        description file (.xml).
  -d DEVICE, --device DEVICE
                        Optional. Specify the target device to infer on; CPU,
                        GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
                        will look for a suitable plugin for device specified.
                        Default value is CPU.
  --original_size       Optional. Resize an output image to original image
                        size.
  --mean_val_r MEAN_VAL_R
                        Optional. Mean value of red channel for mean value
                        subtraction in postprocessing.
  --mean_val_g MEAN_VAL_G
                        Optional. Mean value of green channel for mean value
                        subtraction in postprocessing.
  --mean_val_b MEAN_VAL_B
                        Optional. Mean value of blue channel for mean value
                        subtraction in postprocessing.
```
Running the application with the empty list of options yields the usage message given above and an error message.
To run the sample, you need to specify a model and image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> * By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> * Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> * The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
You can do inference of an image using a pre-trained model on a GPU using the following command:
```
python style_transfer_sample.py -m <path_to_model>/fast-neural-style-mosaic-onnx.onnx -i <path_to_image>/car.png <path_to_image>/cat.jpg -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and creates an output image (`out_0.bmp`) or a sequence of images (`out_0.bmp`, ..., `out_<n>.bmp`) that are redrawn in the style of the style transfer model used.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\fast-neural-style-mosaic-onnx.onnx
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image images\car.bmp is resized from (259, 787) to (224, 224)
[ WARNING ] Image images\cat.bmp is resized from (300, 300) to (224, 224)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image out_0.bmp created!
[ INFO ] Image out_1.bmp created!
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
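The `out_<n>.bmp` files in this log are produced by converting each output blob back to an image. A rough sketch of such postprocessing, assuming a `res` array of shape `[N, C, H, W]` returned by `ExecutableNetwork.infer` (not the sample's exact code):

```python
import cv2
import numpy as np

def save_stylized_images(res: np.ndarray, prefix: str = 'out') -> None:
    """Save each [C, H, W] output of a style transfer network as <prefix>_<n>.bmp."""
    for i, chw in enumerate(res):
        # CHW -> HWC, clip to the valid pixel range and convert to 8-bit for OpenCV.
        hwc = np.clip(chw.transpose(1, 2, 0), 0, 255).astype(np.uint8)
        cv2.imwrite(f'{prefix}_{i}.bmp', hwc)
```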
## See Also
* [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
* [Model Downloader](@ref omz_tools_downloader_README)
* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
[IECore.set_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IENetwork.batch_size]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a79a647cb1b49645616eaeb2ca255ef2e
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519