[Python API] Move samples and docs to the new directory (#7851)

* [Python API] Move samples and docs to the new directory

* move samples to the new directory

* try to fix build and pychecks

* fix links

* fix pychecks

* fix cmake

* fix cpack installation

* Update inference-engine/ie_bridges/python/CMakeLists.txt

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
Anastasia Kuporosova
2021-10-14 14:49:35 +03:00
committed by GitHub
parent eb838d5699
commit 799be77e33
39 changed files with 126 additions and 91 deletions


@@ -0,0 +1,167 @@
# Image Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README}
This sample demonstrates how to do inference of image classification networks using the Asynchronous Inference Request API.
Models with only one input and one output are supported.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------------- | :-------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Asynchronous Infer | [InferRequest.async_infer], [InferRequest.wait], [Blob.buffer] | Do asynchronous inference |
| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :-------------------------------------------------------------------------------------------------------- |
| Validated Models | [alexnet](@ref omz_models_model_alexnet) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../inference-engine/samples/classification_sample_async/README.md) |
## How It Works
At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image(s) to the Inference Engine plugin, performs asynchronous inference, and processes output data, logging each step in a standard output stream.
You can find an explicit description of each sample step in the [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
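Condensed, the asynchronous flow used by this sample looks roughly like the sketch below. It is a minimal illustration only: the model path and the placeholder input arrays are assumptions, and real code would read and preprocess images as shown in the full sample.
```
import numpy as np
from openvino.inference_engine import IECore, StatusCode

ie = IECore()
net = ie.read_network(model='model.xml')  # hypothetical model path
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))

# Ask for as many infer requests as there are inputs so they can run in parallel
exec_net = ie.load_network(network=net, device_name='CPU', num_requests=2)

images = [np.zeros((1, 3, 227, 227), dtype=np.float32)] * 2  # placeholder NCHW inputs

# Start all requests without blocking ...
for i, image in enumerate(images):
    exec_net.requests[i].async_infer({input_blob: image})

# ... then wait for each request and read its result from the output blob buffer
for i in range(len(images)):
    if exec_net.requests[i].wait() == StatusCode.OK:
        result = exec_net.requests[i].output_blobs[out_blob].buffer
```
The full sample additionally polls requests with `wait(0)` so that results are reported as soon as each request completes.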
## Running
Run the application with the `-h` option to see the usage message:
```
python <path_to_sample>/classification_sample_async.py -h
```
Usage message:
```
usage: classification_sample_async.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l EXTENSION] [-c CONFIG] [-d DEVICE]
[--labels LABELS] [-nt NUMBER_TOP]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Required. Path to an image file(s).
-l EXTENSION, --extension EXTENSION
Optional. Required by the CPU Plugin for executing the
custom operation on a CPU. Absolute path to a shared
library with the kernels implementations.
-c CONFIG, --config CONFIG
Optional. Required by GPU or VPU Plugins for the
custom operation kernel. Absolute path to operation
description file (.xml).
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results.
```
To run the sample, you need to specify a model and an image:
- You can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application (see the sketch after these notes) or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
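If you prefer to adjust the channel order inside the application rather than reconvert the model, one possible tweak to the image preprocessing step is shown below. This is an illustration only: the image path is hypothetical, and the sample itself keeps the default BGR order.
```
import cv2

image = cv2.imread('car.bmp')                   # OpenCV reads images in BGR channel order
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # convert only if the model expects RGB input
```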
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```
python <path_to_omz_tools>/downloader.py --name alexnet
```
2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
```
python <path_to_omz_tools>/converter.py --name alexnet
```
3. Perform inference of `car.bmp` and `cat.jpg` using `alexnet` model on a `GPU`, for example:
```
python <path_to_sample>/classification_sample_async.py -m <path_to_model>/alexnet.xml -i <path_to_image>/car.bmp <path_to_image>/cat.jpg -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and outputs top-10 inference results.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\alexnet\FP32\alexnet.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image c:\images\car.bmp is resized from (637, 749) to (227, 227)
[ WARNING ] Image c:\images\cat.jpg is resized from (300, 300) to (227, 227)
[ INFO ] Starting inference in asynchronous mode
[ INFO ] Infer request 0 returned 0
[ INFO ] Image path: c:\images\car.bmp
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 656 0.6645315
[ INFO ] 654 0.1121185
[ INFO ] 581 0.0698451
[ INFO ] 874 0.0334973
[ INFO ] 436 0.0259718
[ INFO ] 817 0.0173190
[ INFO ] 675 0.0109321
[ INFO ] 511 0.0109075
[ INFO ] 569 0.0083093
[ INFO ] 717 0.0063173
[ INFO ]
[ INFO ] Infer request 1 returned 0
[ INFO ] Image path: c:\images\cat.jpg
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 876 0.1320105
[ INFO ] 435 0.1210389
[ INFO ] 285 0.0712640
[ INFO ] 282 0.0570528
[ INFO ] 281 0.0319335
[ INFO ] 999 0.0285931
[ INFO ] 94 0.0270323
[ INFO ] 36 0.0240510
[ INFO ] 335 0.0198461
[ INFO ] 186 0.0183939
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
[IECore.set_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[InferRequest.async_infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InferRequest.html#a95ebe0368cdf4d5d64f9fddc8ee1cd0e
[InferRequest.wait]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InferRequest.html#a936fa50a7531e2f9a9e9c3d45afc9b43
<!-- TODO replace by python API link -->
[Blob.buffer]:https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1Blob.html#a0cad47b43204b115b4017b6b2564fa7e


@@ -0,0 +1,168 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import logging as log
import sys
import cv2
import numpy as np
from openvino.inference_engine import IECore, StatusCode
def parse_args() -> argparse.Namespace:
"""Parse and return command line arguments"""
parser = argparse.ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
# fmt: off
args.add_argument('-h', '--help', action='help', help='Show this help message and exit.')
args.add_argument('-m', '--model', required=True, type=str,
help='Required. Path to an .xml or .onnx file with a trained model.')
args.add_argument('-i', '--input', required=True, type=str, nargs='+', help='Required. Path to an image file(s).')
args.add_argument('-l', '--extension', type=str, default=None,
help='Optional. Required by the CPU Plugin for executing the custom operation on a CPU. '
'Absolute path to a shared library with the kernels implementations.')
args.add_argument('-c', '--config', type=str, default=None,
help='Optional. Required by GPU or VPU Plugins for the custom operation kernel. '
'Absolute path to operation description file (.xml).')
args.add_argument('-d', '--device', default='CPU', type=str,
help='Optional. Specify the target device to infer on; CPU, GPU, MYRIAD, HDDL or HETERO: '
'is acceptable. The sample will look for a suitable plugin for device specified. '
'Default value is CPU.')
args.add_argument('--labels', default=None, type=str, help='Optional. Path to a labels mapping file.')
args.add_argument('-nt', '--number_top', default=10, type=int, help='Optional. Number of top results.')
# fmt: on
return parser.parse_args()
def main():
log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
args = parse_args()
# ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
log.info('Creating Inference Engine')
ie = IECore()
if args.extension and args.device == 'CPU':
log.info(f'Loading the {args.device} extension: {args.extension}')
ie.add_extension(args.extension, args.device)
if args.config and args.device in ('GPU', 'MYRIAD', 'HDDL'):
log.info(f'Loading the {args.device} configuration: {args.config}')
ie.set_config({'CONFIG_FILE': args.config}, args.device)
# ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation or ONNX format---------------
log.info(f'Reading the network: {args.model}')
# (.xml and .bin files) or (.onnx file)
net = ie.read_network(model=args.model)
if len(net.input_info) != 1:
log.error('Sample supports only single input topologies')
return -1
if len(net.outputs) != 1:
log.error('Sample supports only single output topologies')
return -1
# ---------------------------Step 3. Configure input & output----------------------------------------------------------
log.info('Configuring input and output blobs')
# Get names of input and output blobs
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
# Set input and output precision manually
net.input_info[input_blob].precision = 'U8'
net.outputs[out_blob].precision = 'FP32'
# Get the number of input images
num_of_input = len(args.input)
# Get the number of classes recognized by the model
num_of_classes = max(net.outputs[out_blob].shape)
# ---------------------------Step 4. Loading model to the device-------------------------------------------------------
log.info('Loading the model to the plugin')
exec_net = ie.load_network(network=net, device_name=args.device, num_requests=num_of_input)
# ---------------------------Step 5. Create infer request--------------------------------------------------------------
# The load_network() method of the IECore class, called with a specified number of requests (default 1), returns an ExecutableNetwork
# instance which stores infer requests. So the infer requests were already created in the previous step.
# ---------------------------Step 6. Prepare input---------------------------------------------------------------------
input_data = []
_, _, h, w = net.input_info[input_blob].input_data.shape
for i in range(num_of_input):
image = cv2.imread(args.input[i])
if image.shape[:-1] != (h, w):
log.warning(f'Image {args.input[i]} is resized from {image.shape[:-1]} to {(h, w)}')
image = cv2.resize(image, (w, h))
# Change data layout from HWC to CHW
image = image.transpose((2, 0, 1))
# Add N dimension to transform to NCHW
image = np.expand_dims(image, axis=0)
input_data.append(image)
# ---------------------------Step 7. Do inference----------------------------------------------------------------------
log.info('Starting inference in asynchronous mode')
for i in range(num_of_input):
exec_net.requests[i].async_infer({input_blob: input_data[i]})
# ---------------------------Step 8. Process output--------------------------------------------------------------------
# Generate a label list
if args.labels:
with open(args.labels, 'r') as f:
labels = [line.split(',')[0].strip() for line in f]
# Create a list to control the order of output
output_queue = list(range(num_of_input))
while True:
for i in output_queue:
# Immediately returns the inference status without blocking or interrupting
infer_status = exec_net.requests[i].wait(0)
if infer_status == StatusCode.RESULT_NOT_READY:
continue
log.info(f'Infer request {i} returned {infer_status}')
if infer_status != StatusCode.OK:
return -2
# Read infer request results from buffer
res = exec_net.requests[i].output_blobs[out_blob].buffer
# Reshape the numpy.ndarray with results to get a one-dimensional array
probs = res.reshape(num_of_classes)
# Get an array of args.number_top class IDs in descending order of probability
top_n_indexes = np.argsort(probs)[-args.number_top:][::-1]
header = 'classid probability'
header = header + ' label' if args.labels else header
log.info(f'Image path: {args.input[i]}')
log.info(f'Top {args.number_top} results: ')
log.info(header)
log.info('-' * len(header))
for class_id in top_n_indexes:
probability_indent = ' ' * (len('classid') - len(str(class_id)) + 1)
label_indent = ' ' * (len('probability') - 8) if args.labels else ''
label = labels[class_id] if args.labels else ''
log.info(f'{class_id}{probability_indent}{probs[class_id]:.7f}{label_indent}{label}')
log.info('')
output_queue.remove(i)
if len(output_queue) == 0:
break
# ----------------------------------------------------------------------------------------------------------------------
log.info('This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n')
return 0
if __name__ == '__main__':
sys.exit(main())


@@ -0,0 +1,133 @@
# Hello Classification Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_classification_README}
This sample demonstrates how to do inference of image classification networks using the Synchronous Inference Request API.
Models with only one input and one output are supported.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------- | :-------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Basic Infer Flow | [IECore], [IECore.read_network], [IECore.load_network] | Common API to do inference |
| Synchronous Infer | [ExecutableNetwork.infer] | Do synchronous inference |
| Network Operations | [IENetwork.input_info], [IENetwork.outputs], [InputInfoPtr.precision], [DataPtr.precision], [InputInfoPtr.input_data.shape] | Manage the network: configure input and output blobs |
| Options | Values |
| :------------------------- | :-------------------------------------------------------------------------------------------------------- |
| Validated Models | [alexnet](@ref omz_models_model_alexnet), [googlenet-v1](@ref omz_models_model_googlenet_v1) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../inference-engine/samples/hello_classification/README.md), [C](../../../inference-engine/ie_bridges/c/samples/hello_classification/README.md) |
## How It Works
At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream.
You can find an explicit description of each sample step in the [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
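Condensed, the synchronous flow used by this sample looks roughly like the sketch below. It is a minimal illustration only: the model path and the placeholder input array are assumptions, and real code would read and preprocess an image as shown in the full sample.
```
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='model.xml')  # hypothetical model path
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))

# Configure blob precision before loading the network to the device
net.input_info[input_blob].precision = 'U8'
net.outputs[out_blob].precision = 'FP32'

exec_net = ie.load_network(network=net, device_name='CPU')

image = np.zeros((1, 3, 227, 227), dtype=np.uint8)  # placeholder NCHW input
res = exec_net.infer(inputs={input_blob: image})[out_blob]
```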
## Running
Run the application with the `-h` option to see the usage message:
```
python <path_to_sample>/hello_classification.py -h
```
Usage message:
```
usage: hello_classification.py [-h] -m MODEL -i INPUT [-d DEVICE]
[--labels LABELS] [-nt NUMBER_TOP]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT, --input INPUT
Required. Path to an image file.
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results.
```
To run the sample, you need to specify a model and an image:
- You can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```
python <path_to_omz_tools>/downloader.py --name alexnet
```
2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
```
python <path_to_omz_tools>/converter.py --name alexnet
```
3. Perform inference of `car.bmp` using `alexnet` model on a `GPU`, for example:
```
python <path_to_sample>/hello_classification.py -m <path_to_model>/alexnet.xml -i <path_to_image>/car.bmp -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and outputs top-10 inference results.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\alexnet\FP32\alexnet.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image c:\images\car.bmp is resized from (637, 749) to (227, 227)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: c:\images\car.bmp
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 656 0.6645315
[ INFO ] 654 0.1121185
[ INFO ] 581 0.0698451
[ INFO ] 874 0.0334973
[ INFO ] 436 0.0259718
[ INFO ] 817 0.0173190
[ INFO ] 675 0.0109321
[ INFO ] 511 0.0109075
[ INFO ] 569 0.0083093
[ INFO ] 717 0.0063173
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519


@@ -0,0 +1,125 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import logging as log
import sys
import cv2
import numpy as np
from openvino.inference_engine import IECore
def parse_args() -> argparse.Namespace:
"""Parse and return command line arguments"""
parser = argparse.ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
# fmt: off
args.add_argument('-h', '--help', action='help', help='Show this help message and exit.')
args.add_argument('-m', '--model', required=True, type=str,
help='Required. Path to an .xml or .onnx file with a trained model.')
args.add_argument('-i', '--input', required=True, type=str, help='Required. Path to an image file.')
args.add_argument('-d', '--device', default='CPU', type=str,
help='Optional. Specify the target device to infer on; CPU, GPU, MYRIAD, HDDL or HETERO: '
'is acceptable. The sample will look for a suitable plugin for device specified. '
'Default value is CPU.')
args.add_argument('--labels', default=None, type=str, help='Optional. Path to a labels mapping file.')
args.add_argument('-nt', '--number_top', default=10, type=int, help='Optional. Number of top results.')
# fmt: on
return parser.parse_args()
def main():
log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
args = parse_args()
# ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
log.info('Creating Inference Engine')
ie = IECore()
# ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation or ONNX format---------------
log.info(f'Reading the network: {args.model}')
# (.xml and .bin files) or (.onnx file)
net = ie.read_network(model=args.model)
if len(net.input_info) != 1:
log.error('Sample supports only single input topologies')
return -1
if len(net.outputs) != 1:
log.error('Sample supports only single output topologies')
return -1
# ---------------------------Step 3. Configure input & output----------------------------------------------------------
log.info('Configuring input and output blobs')
# Get names of input and output blobs
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
# Set input and output precision manually
net.input_info[input_blob].precision = 'U8'
net.outputs[out_blob].precision = 'FP32'
# Get the number of classes recognized by the model
num_of_classes = max(net.outputs[out_blob].shape)
# ---------------------------Step 4. Loading model to the device-------------------------------------------------------
log.info('Loading the model to the plugin')
exec_net = ie.load_network(network=net, device_name=args.device)
# ---------------------------Step 5. Create infer request--------------------------------------------------------------
# The load_network() method of the IECore class, called with a specified number of requests (default 1), returns an ExecutableNetwork
# instance which stores infer requests. So the infer requests were already created in the previous step.
# ---------------------------Step 6. Prepare input---------------------------------------------------------------------
original_image = cv2.imread(args.input)
image = original_image.copy()
_, _, h, w = net.input_info[input_blob].input_data.shape
if image.shape[:-1] != (h, w):
log.warning(f'Image {args.input} is resized from {image.shape[:-1]} to {(h, w)}')
image = cv2.resize(image, (w, h))
# Change data layout from HWC to CHW
image = image.transpose((2, 0, 1))
# Add N dimension to transform to NCHW
image = np.expand_dims(image, axis=0)
# ---------------------------Step 7. Do inference----------------------------------------------------------------------
log.info('Starting inference in synchronous mode')
res = exec_net.infer(inputs={input_blob: image})
# ---------------------------Step 8. Process output--------------------------------------------------------------------
# Generate a label list
if args.labels:
with open(args.labels, 'r') as f:
labels = [line.split(',')[0].strip() for line in f]
res = res[out_blob]
# Reshape the numpy.ndarray with results to get a one-dimensional array
probs = res.reshape(num_of_classes)
# Get an array of args.number_top class IDs in descending order of probability
top_n_indexes = np.argsort(probs)[-args.number_top:][::-1]
header = 'classid probability'
header = header + ' label' if args.labels else header
log.info(f'Image path: {args.input}')
log.info(f'Top {args.number_top} results: ')
log.info(header)
log.info('-' * len(header))
for class_id in top_n_indexes:
probability_indent = ' ' * (len('classid') - len(str(class_id)) + 1)
label_indent = ' ' * (len('probability') - 8) if args.labels else ''
label = labels[class_id] if args.labels else ''
log.info(f'{class_id}{probability_indent}{probs[class_id]:.7f}{label_indent}{label}')
log.info('')
# ----------------------------------------------------------------------------------------------------------------------
log.info('This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n')
return 0
if __name__ == '__main__':
sys.exit(main())


@@ -0,0 +1,110 @@
# Hello Query Device Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README}
This sample demonstrates how to list Inference Engine devices and print their metrics and default configuration values using the [Query Device API feature](../../../docs/IE_DG/InferenceEngine_QueryAPI.md).
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------- | :--------------------------------------- | :-------------------- |
| Basic | [IECore] | Common API |
| Query Device | [IECore.get_metric], [IECore.get_config] | Get device properties |
| Options | Values |
| :------------------------- | :---------------------------------------------------------------------- |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../inference-engine/samples/hello_query_device/README.md) |
## How It Works
The sample queries all available Inference Engine devices and prints their supported metrics and plugin configuration parameters.
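The core of that logic fits in a few lines (a minimal sketch; unlike the sample, it does not guard against configuration values that cannot be converted to Python objects):
```
from openvino.inference_engine import IECore

ie = IECore()
for device in ie.available_devices:
    # FULL_DEVICE_NAME is one of the metrics reported through SUPPORTED_METRICS
    print(device, ie.get_metric(device, 'FULL_DEVICE_NAME'))
    for config_key in ie.get_metric(device, 'SUPPORTED_CONFIG_KEYS'):
        print(' ', config_key, ie.get_config(device, config_key))
```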
## Running
The sample has no command-line parameters. To see the report, run the following command:
```
python <path_to_sample>/hello_query_device.py
```
## Sample Output
The application prints all available devices with their supported metrics and default values for configuration parameters. (Some lines are not shown due to length.) For example:
```
[ INFO ] Creating Inference Engine
[ INFO ] Available devices:
[ INFO ] CPU :
[ INFO ] SUPPORTED_METRICS:
[ INFO ] AVAILABLE_DEVICES:
[ INFO ] FULL_DEVICE_NAME: Intel(R) Core(TM) i5-8350U CPU @ 1.70GHz
[ INFO ] OPTIMIZATION_CAPABILITIES: FP32, FP16, INT8, BIN
[ INFO ] RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 1, 1
[ INFO ] RANGE_FOR_STREAMS: 1, 8
[ INFO ]
[ INFO ] SUPPORTED_CONFIG_KEYS (default values):
[ INFO ] CPU_BIND_THREAD: NUMA
[ INFO ] CPU_THREADS_NUM: 0
[ INFO ] CPU_THROUGHPUT_STREAMS: 1
[ INFO ] DUMP_EXEC_GRAPH_AS_DOT:
[ INFO ] DYN_BATCH_ENABLED: NO
[ INFO ] DYN_BATCH_LIMIT: 0
[ INFO ] ENFORCE_BF16: NO
[ INFO ] EXCLUSIVE_ASYNC_REQUESTS: NO
[ INFO ] PERF_COUNT: NO
[ INFO ]
[ INFO ] GNA :
[ INFO ] SUPPORTED_METRICS:
[ INFO ] AVAILABLE_DEVICES: GNA_SW
[ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 1
[ INFO ] FULL_DEVICE_NAME: GNA_SW
[ INFO ] GNA_LIBRARY_FULL_VERSION: 2.0.0.1047
[ INFO ]
[ INFO ] SUPPORTED_CONFIG_KEYS (default values):
[ INFO ] EXCLUSIVE_ASYNC_REQUESTS: NO
[ INFO ] GNA_COMPACT_MODE: NO
[ INFO ] GNA_DEVICE_MODE: GNA_SW_EXACT
[ INFO ] GNA_FIRMWARE_MODEL_IMAGE:
[ INFO ] GNA_FIRMWARE_MODEL_IMAGE_GENERATION:
[ INFO ] GNA_LIB_N_THREADS: 1
[ INFO ] GNA_PRECISION: I16
[ INFO ] GNA_PWL_UNIFORM_DESIGN: NO
[ INFO ] GNA_SCALE_FACTOR: 1.000000
[ INFO ] GNA_SCALE_FACTOR_0: 1.000000
[ INFO ] PERF_COUNT: NO
[ INFO ] SINGLE_THREAD: YES
[ INFO ]
[ INFO ] GPU :
[ INFO ] SUPPORTED_METRICS:
[ INFO ] AVAILABLE_DEVICES: 0
[ INFO ] FULL_DEVICE_NAME: Intel(R) UHD Graphics 620 (iGPU)
[ INFO ] OPTIMIZATION_CAPABILITIES: FP32, BIN, FP16
[ INFO ] RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 2, 1
[ INFO ] RANGE_FOR_STREAMS: 1, 2
[ INFO ]
[ INFO ] SUPPORTED_CONFIG_KEYS (default values):
[ INFO ] CACHE_DIR:
[ INFO ] CLDNN_ENABLE_FP16_FOR_QUANTIZED_MODELS: YES
[ INFO ] CLDNN_GRAPH_DUMPS_DIR:
[ INFO ] CLDNN_MEM_POOL: YES
[ INFO ] CLDNN_NV12_TWO_INPUTS: NO
[ INFO ] CLDNN_PLUGIN_PRIORITY: 0
[ INFO ] CLDNN_PLUGIN_THROTTLE: 0
[ INFO ] CLDNN_SOURCES_DUMPS_DIR:
[ INFO ] CONFIG_FILE:
[ INFO ] DEVICE_ID:
[ INFO ] DUMP_KERNELS: NO
[ INFO ] DYN_BATCH_ENABLED: NO
[ INFO ] EXCLUSIVE_ASYNC_REQUESTS: NO
[ INFO ] GPU_THROUGHPUT_STREAMS: 1
[ INFO ] PERF_COUNT: NO
[ INFO ] TUNING_FILE:
[ INFO ] TUNING_MODE: TUNING_DISABLED
[ INFO ]
```
## See Also
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.get_metric]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#af1cdf2ecbea6399c556957c2c2fdf8eb
[IECore.get_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a48764dec7c235d2374af8b8ef53c6363


@@ -0,0 +1,54 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import logging as log
import sys
from openvino.inference_engine import IECore
def param_to_string(metric) -> str:
"""Convert a list / tuple of parameters returned from IE to a string"""
if isinstance(metric, (list, tuple)):
return ', '.join([str(x) for x in metric])
else:
return str(metric)
def main():
log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
# ---------------------------Initialize inference engine core----------------------------------------------------------
log.info('Creating Inference Engine')
ie = IECore()
# ---------------------------Get metrics of available devices----------------------------------------------------------
log.info('Available devices:')
for device in ie.available_devices:
log.info(f'{device} :')
log.info('\tSUPPORTED_METRICS:')
for metric in ie.get_metric(device, 'SUPPORTED_METRICS'):
if metric not in ('SUPPORTED_METRICS', 'SUPPORTED_CONFIG_KEYS'):
try:
metric_val = ie.get_metric(device, metric)
except TypeError:
metric_val = 'UNSUPPORTED TYPE'
log.info(f'\t\t{metric}: {param_to_string(metric_val)}')
log.info('')
log.info('\tSUPPORTED_CONFIG_KEYS (default values):')
for config_key in ie.get_metric(device, 'SUPPORTED_CONFIG_KEYS'):
try:
config_val = ie.get_config(device, config_key)
except TypeError:
config_val = 'UNSUPPORTED TYPE'
log.info(f'\t\t{config_key}: {param_to_string(config_val)}')
log.info('')
# ----------------------------------------------------------------------------------------------------------------------
return 0
if __name__ == '__main__':
sys.exit(main())


@@ -0,0 +1,132 @@
# Hello Reshape SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README}
This sample demonstrates how to do synchronous inference of object detection networks using the [Shape Inference feature](../../../docs/IE_DG/ShapeInference.md).
Models with only one input and one output are supported.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------------- | :-------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Network Operations | [IENetwork.reshape] | Manage the network: configure input and output blobs |
| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :-------------------------------------------------------------------------------------------------------------------------- |
| Validated Models | [mobilenet-ssd](@ref omz_models_model_mobilenet_ssd) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../inference-engine/samples/hello_reshape_ssd/README.md) |
## How It Works
At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data.
As a result, the program creates an output image, logging each step in a standard output stream.
You can find an explicit description of each sample step in the [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
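The step that distinguishes this sample from the classification ones is reshaping the network to the input image dimensions instead of resizing the image. A minimal sketch is shown below; the model and image paths are hypothetical.
```
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='mobilenet-ssd.xml')  # hypothetical model path
input_blob = next(iter(net.input_info))
net.input_info[input_blob].precision = 'U8'

image = cv2.imread('car.bmp')                     # hypothetical image path
# HWC -> CHW, then add the batch dimension to get NCHW
input_data = np.expand_dims(image.transpose((2, 0, 1)), axis=0)

# Reshape the network input to the image shape and only then load it to the device
net.reshape({input_blob: input_data.shape})
exec_net = ie.load_network(network=net, device_name='CPU')
res = exec_net.infer(inputs={input_blob: input_data})
```
Loading happens only after the reshape, so the plugin compiles the network for the actual input resolution.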
## Running
Run the application with the `-h` option to see the usage message:
```
python <path_to_sample>/hello_reshape_ssd.py -h
```
Usage message:
```
usage: hello_reshape_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION] [-c CONFIG]
[-d DEVICE] [--labels LABELS]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT, --input INPUT
Required. Path to an image file.
-l EXTENSION, --extension EXTENSION
Optional. Required by the CPU Plugin for executing the
custom operation on a CPU. Absolute path to a shared
library with the kernels implementations.
-c CONFIG, --config CONFIG
Optional. Required by GPU or VPU Plugins for the
custom operation kernel. Absolute path to operation
description file (.xml).
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
```
To run the sample, you need to specify a model and an image:
- You can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```
python <path_to_omz_tools>/downloader.py --name mobilenet-ssd
```
2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
```
python <path_to_omz_tools>/converter.py --name mobilenet-ssd
```
3. Perform inference of `car.bmp` using `mobilenet-ssd` model on a `GPU`, for example:
```
python <path_to_sample>/hello_reshape_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/car.bmp -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with over 50% confidence.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\mobilenet-ssd\FP32\mobilenet-ssd.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Reshaping the network to the height and width of the input image
[ INFO ] Input shape before reshape: [1, 3, 300, 300]
[ INFO ] Input shape after reshape: [1, 3, 637, 749]
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Found: label = 7, confidence = 0.99, coords = (283, 166), (541, 472)
[ INFO ] Image out.bmp was created!
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
[IECore.set_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[IENetwork.reshape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a6683f0291db25f908f8d6720ab2f221a
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519


@@ -0,0 +1,148 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import logging as log
import os
import sys
import cv2
import numpy as np
from openvino.inference_engine import IECore
def parse_args() -> argparse.Namespace:
"""Parse and return command line arguments"""
parser = argparse.ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
# fmt: off
args.add_argument('-h', '--help', action='help', help='Show this help message and exit.')
args.add_argument('-m', '--model', required=True, type=str,
help='Required. Path to an .xml or .onnx file with a trained model.')
args.add_argument('-i', '--input', required=True, type=str, help='Required. Path to an image file.')
args.add_argument('-l', '--extension', type=str, default=None,
help='Optional. Required by the CPU Plugin for executing the custom operation on a CPU. '
'Absolute path to a shared library with the kernels implementations.')
args.add_argument('-c', '--config', type=str, default=None,
help='Optional. Required by GPU or VPU Plugins for the custom operation kernel. '
'Absolute path to operation description file (.xml).')
args.add_argument('-d', '--device', default='CPU', type=str,
help='Optional. Specify the target device to infer on; CPU, GPU, MYRIAD, HDDL or HETERO: '
'is acceptable. The sample will look for a suitable plugin for device specified. '
'Default value is CPU.')
args.add_argument('--labels', default=None, type=str, help='Optional. Path to a labels mapping file.')
# fmt: on
return parser.parse_args()
def main():
log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
args = parse_args()
# ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
log.info('Creating Inference Engine')
ie = IECore()
if args.extension and args.device == 'CPU':
log.info(f'Loading the {args.device} extension: {args.extension}')
ie.add_extension(args.extension, args.device)
if args.config and args.device in ('GPU', 'MYRIAD', 'HDDL'):
log.info(f'Loading the {args.device} configuration: {args.config}')
ie.set_config({'CONFIG_FILE': args.config}, args.device)
# ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation or ONNX format---------------
log.info(f'Reading the network: {args.model}')
# (.xml and .bin files) or (.onnx file)
net = ie.read_network(model=args.model)
if len(net.input_info) != 1:
log.error('Sample supports only single input topologies')
return -1
if len(net.outputs) != 1:
log.error('Sample supports only single output topologies')
return -1
# ---------------------------Step 3. Configure input & output----------------------------------------------------------
log.info('Configuring input and output blobs')
# Get names of input and output blobs
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
# Set input and output precision manually
net.input_info[input_blob].precision = 'U8'
net.outputs[out_blob].precision = 'FP32'
original_image = cv2.imread(args.input)
image = original_image.copy()
# Change data layout from HWC to CHW
image = image.transpose((2, 0, 1))
# Add N dimension to transform to NCHW
image = np.expand_dims(image, axis=0)
log.info('Reshaping the network to the height and width of the input image')
log.info(f'Input shape before reshape: {net.input_info[input_blob].input_data.shape}')
net.reshape({input_blob: image.shape})
log.info(f'Input shape after reshape: {net.input_info[input_blob].input_data.shape}')
# ---------------------------Step 4. Loading model to the device-------------------------------------------------------
log.info('Loading the model to the plugin')
exec_net = ie.load_network(network=net, device_name=args.device)
# ---------------------------Step 5. Create infer request--------------------------------------------------------------
# The load_network() method of the IECore class, called with a specified number of requests (default 1), returns an ExecutableNetwork
# instance which stores infer requests. So the infer requests were already created in the previous step.
# ---------------------------Step 6. Prepare input---------------------------------------------------------------------
# This sample changes the network input layer shape instead of the image shape. See Step 4.
# ---------------------------Step 7. Do inference----------------------------------------------------------------------
log.info('Starting inference in synchronous mode')
res = exec_net.infer(inputs={input_blob: image})
# ---------------------------Step 8. Process output--------------------------------------------------------------------
# Generate a label list
if args.labels:
with open(args.labels, 'r') as f:
labels = [line.split(',')[0].strip() for line in f]
res = res[out_blob]
output_image = original_image.copy()
h, w, _ = output_image.shape
# Change a shape of a numpy.ndarray with results ([1, 1, N, 7]) to get another one ([N, 7]),
# where N is the number of detected bounding boxes
detections = res.reshape(-1, 7)
for detection in detections:
confidence = detection[2]
if confidence > 0.5:
class_id = int(detection[1])
label = labels[class_id] if args.labels else class_id
xmin = int(detection[3] * w)
ymin = int(detection[4] * h)
xmax = int(detection[5] * w)
ymax = int(detection[6] * h)
log.info(f'Found: label = {label}, confidence = {confidence:.2f}, ' f'coords = ({xmin}, {ymin}), ({xmax}, {ymax})')
# Draw a bounding box on the output image
cv2.rectangle(output_image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
cv2.imwrite('out.bmp', output_image)
if os.path.exists('out.bmp'):
log.info('Image out.bmp was created!')
else:
log.error('Image out.bmp was not created. Check your permissions.')
# ----------------------------------------------------------------------------------------------------------------------
log.info('This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n')
return 0
if __name__ == '__main__':
sys.exit(main())


@@ -0,0 +1,159 @@
# nGraph Function Creation Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_ngraph_function_creation_sample_README}
This sample demonstrates how to execute inference using the [nGraph function feature](../../../docs/nGraph_DG/build_function.md) to create a network that uses weights from the LeNet classification network, which is known to work well on digit classification tasks. You do not need an XML file; the model is created from the source code on the fly.
In addition to regular grayscale images with a digit, the sample also supports single-channel `ubyte` images as input.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Network Operations | [IENetwork], [IENetwork.batch_size] | Manage the network |
| nGraph Functions | [ngraph.impl.Function], [ngraph.parameter], [ngraph.constant], [ngraph.convolution], [ngraph.add], [ngraph.max_pool], [ngraph.reshape], [ngraph.matmul], [ngraph.relu], [ngraph.softmax], [ngraph.result], ngraph.impl.Function.to_capsule | Description of a network using nGraph Python API |
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :---------------------------------------------------------------------- |
| Validated Models | LeNet |
| Model Format | Network weights file (\*.bin) |
| Validated images | The sample uses OpenCV\* to [read input grayscale image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png) or single-channel `ubyte` image |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../inference-engine/samples/ngraph_function_creation_sample/README.md) |
## How It Works
At startup, the sample application reads command-line parameters, prepares input data, creates a network using the [nGraph function feature](../../../docs/nGraph_DG/build_function.md) and the passed weights file, loads the network and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream.
You can find an explicit description of each sample step in the [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
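The distinctive part of the sample is building an `ngraph.impl.Function` from Python operations and wrapping it into an `IENetwork`. The sketch below is a much smaller illustration than the LeNet topology the sample builds; the parameter shape, the network name, and the trivial parameter-relu topology here are arbitrary assumptions.
```
import numpy as np
import ngraph
from openvino.inference_engine import IECore, IENetwork

# Describe a trivial network: parameter -> relu -> result
param = ngraph.parameter([1, 1, 28, 28], np.float32, name='data')
relu = ngraph.relu(param)
result = ngraph.result(relu)
function = ngraph.impl.Function([result], [param], 'simple_net')

# Wrap the nGraph function into an IENetwork and load it like any other network
net = IENetwork(ngraph.impl.Function.to_capsule(function))
exec_net = IECore().load_network(network=net, device_name='CPU')
```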
## Running
Run the application with the `-h` option to see the usage message:
```
python <path_to_sample>/ngraph_function_creation_sample.py -h
```
Usage message:
```
usage: ngraph_function_creation_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-d DEVICE] [--labels LABELS]
[-nt NUMBER_TOP]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to a file with network weights.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Required. Path to an image file.
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results.
```
To run the sample, you need to specify model weights and an image:
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTE**:
>
> - This sample supports models with FP32 weights only.
>
> - The `lenet.bin` weights file was generated by the [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
>
> - The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
>
> - White-over-black images are automatically inverted in color for better predictions.
For example, you can do inference of `3.png` using the pre-trained model on a `GPU`:
```
python <path_to_sample>/ngraph_function_creation_sample.py -m <path_to_sample>/lenet.bin -i <path_to_image>/3.png -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and outputs top-10 inference results.
```
[ INFO ] Creating Inference Engine
[ INFO ] Loading the network using ngraph function with weights from c:\openvino\samples\python\ngraph_function_creation_sample\lenet.bin
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image c:\images\3.png is inverted to white over black
[ WARNING ] Image c:\images\3.png is resized from (351, 353) to (28, 28)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: c:\images\3.png
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 3 1.0000000
[ INFO ] 9 0.0000000
[ INFO ] 8 0.0000000
[ INFO ] 7 0.0000000
[ INFO ] 6 0.0000000
[ INFO ] 5 0.0000000
[ INFO ] 4 0.0000000
[ INFO ] 2 0.0000000
[ INFO ] 1 0.0000000
[ INFO ] 0 0.0000000
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## Deprecation Notice
<table>
<tr>
<td><strong>Deprecation Begins</strong></td>
<td>June 1, 2020</td>
</tr>
<tr>
<td><strong>Removal Date</strong></td>
<td>December 1, 2020</td>
</tr>
</table>
*Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.*
*Therefore, ONNX RT Execution Provider for nGraph will be deprecated starting June 1, 2020 and will be completely removed on December 1, 2020. Users are recommended to migrate to the ONNX RT Execution Provider for OpenVINO™ toolkit as the unified solution for all AI inferencing on Intel® hardware.*
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IENetwork]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html
[IENetwork.batch_size]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a79a647cb1b49645616eaeb2ca255ef2e
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
<!-- TODO: Replace the link by another one pointing to the Python API, if available -->
[ngraph.impl.Function]:https://docs.openvinotoolkit.org/latest/ngraph_cpp_api/classngraph_1_1Function.html
<!-- [ngraph.impl.Function.to_capsule]: -->
[ngraph.parameter]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a709acd09288f5a76ed8d07492efc3d13
[ngraph.constant]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a5b6c4e416026e007a4107b3f510d0c27
[ngraph.convolution]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a3143ff55f68428afc1b6c802ee9381e8
[ngraph.add]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#abfa0373c10ced1b1f129594d9bd8a159
[ngraph.max_pool]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#ac60b4459ad23b296086925abce6acd2d
[ngraph.reshape]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a38e1ead9435c4b75c1d891ba2dd6a62e
[ngraph.matmul]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a403b5e10e1f75aeb7569024237e85071
[ngraph.relu]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a70b9b3faf58d85e43d27fef5028117e3
[ngraph.softmax]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a632cc9a31ecaefa2a982d039ecad8d26
[ngraph.result]:https://docs.openvinotoolkit.org/latest/ngraph_python_api/namespacengraph_1_1opset1_1_1ops.html#a94f8bf6ab8910dfd461d09cb6c6edd11

@@ -0,0 +1,262 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import logging as log
import struct as st
import sys
import typing
from functools import reduce
import cv2
import ngraph
from ngraph.opset1 import max_pool
import numpy as np
from openvino.inference_engine import IECore, IENetwork
def parse_args() -> argparse.Namespace:
"""Parse and return command line arguments"""
parser = argparse.ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
# fmt: off
args.add_argument('-h', '--help', action='help', help='Show this help message and exit.')
args.add_argument('-m', '--model', required=True, type=str,
help='Required. Path to a file with network weights.')
args.add_argument('-i', '--input', required=True, type=str, nargs='+', help='Required. Path to an image file.')
args.add_argument('-d', '--device', default='CPU', type=str,
help='Optional. Specify the target device to infer on; CPU, GPU, MYRIAD, HDDL or HETERO: '
'is acceptable. The sample will look for a suitable plugin for device specified. '
'Default value is CPU.')
args.add_argument('--labels', default=None, type=str, help='Optional. Path to a labels mapping file.')
args.add_argument('-nt', '--number_top', default=10, type=int, help='Optional. Number of top results.')
# fmt: on
return parser.parse_args()
def read_image(image_path: str) -> np.ndarray:
"""Read and return an image as grayscale (one channel)"""
image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
# Try to open image as ubyte
if image is None:
with open(image_path, 'rb') as f:
st.unpack('>4B', f.read(4)) # need to skip 4 bytes
nimg = st.unpack('>I', f.read(4))[0] # number of images
nrow = st.unpack('>I', f.read(4))[0] # number of rows
ncolumn = st.unpack('>I', f.read(4))[0] # number of column
nbytes = nimg * nrow * ncolumn * 1 # each pixel data is 1 byte
if nimg != 1:
raise Exception('Sample supports ubyte files with 1 image inside')
image = np.asarray(st.unpack('>' + 'B' * nbytes, f.read(nbytes))).reshape((nrow, ncolumn))
return image
def create_ngraph_function(args: argparse.Namespace) -> ngraph.impl.Function:
"""Create a network on the fly from the source code using ngraph"""
def shape_and_length(shape: list) -> typing.Tuple[list, int]:
length = reduce(lambda x, y: x * y, shape)
return shape, length
weights = np.fromfile(args.model, dtype=np.float32)
weights_offset = 0
padding_begin = padding_end = [0, 0]
# input
input_shape = [64, 1, 28, 28]
param_node = ngraph.parameter(input_shape, np.float32, 'Parameter')
# convolution 1
conv_1_kernel_shape, conv_1_kernel_length = shape_and_length([20, 1, 5, 5])
conv_1_kernel = ngraph.constant(weights[0:conv_1_kernel_length].reshape(conv_1_kernel_shape))
weights_offset += conv_1_kernel_length
conv_1_node = ngraph.convolution(param_node, conv_1_kernel, [1, 1], padding_begin, padding_end, [1, 1])
# add 1
add_1_kernel_shape, add_1_kernel_length = shape_and_length([1, 20, 1, 1])
add_1_kernel = ngraph.constant(
weights[weights_offset : weights_offset + add_1_kernel_length].reshape(add_1_kernel_shape),
)
weights_offset += add_1_kernel_length
add_1_node = ngraph.add(conv_1_node, add_1_kernel)
# maxpool 1
maxpool_1_node = max_pool(add_1_node, [2, 2], padding_begin, padding_end, [2, 2], 'ceil')
# convolution 2
conv_2_kernel_shape, conv_2_kernel_length = shape_and_length([50, 20, 5, 5])
conv_2_kernel = ngraph.constant(
weights[weights_offset : weights_offset + conv_2_kernel_length].reshape(conv_2_kernel_shape),
)
weights_offset += conv_2_kernel_length
conv_2_node = ngraph.convolution(maxpool_1_node, conv_2_kernel, [1, 1], padding_begin, padding_end, [1, 1])
# add 2
add_2_kernel_shape, add_2_kernel_length = shape_and_length([1, 50, 1, 1])
add_2_kernel = ngraph.constant(
weights[weights_offset : weights_offset + add_2_kernel_length].reshape(add_2_kernel_shape),
)
weights_offset += add_2_kernel_length
add_2_node = ngraph.add(conv_2_node, add_2_kernel)
# maxpool 2
maxpool_2_node = max_pool(add_2_node, [2, 2], padding_begin, padding_end, [2, 2], 'ceil')
# reshape 1
reshape_1_dims, reshape_1_length = shape_and_length([2])
# workaround to get int64 weights from float32 ndarray w/o unnecessary copying
dtype_weights = np.frombuffer(
weights[weights_offset : weights_offset + 2 * reshape_1_length],
dtype=np.int64,
)
reshape_1_kernel = ngraph.constant(dtype_weights)
weights_offset += 2 * reshape_1_length
reshape_1_node = ngraph.reshape(maxpool_2_node, reshape_1_kernel, True)
# matmul 1
matmul_1_kernel_shape, matmul_1_kernel_length = shape_and_length([500, 800])
matmul_1_kernel = ngraph.constant(
weights[weights_offset : weights_offset + matmul_1_kernel_length].reshape(matmul_1_kernel_shape),
)
weights_offset += matmul_1_kernel_length
matmul_1_node = ngraph.matmul(reshape_1_node, matmul_1_kernel, False, True)
# add 3
add_3_kernel_shape, add_3_kernel_length = shape_and_length([1, 500])
add_3_kernel = ngraph.constant(
weights[weights_offset : weights_offset + add_3_kernel_length].reshape(add_3_kernel_shape),
)
weights_offset += add_3_kernel_length
add_3_node = ngraph.add(matmul_1_node, add_3_kernel)
# ReLU
relu_node = ngraph.relu(add_3_node)
# reshape 2
reshape_2_kernel = ngraph.constant(dtype_weights)
reshape_2_node = ngraph.reshape(relu_node, reshape_2_kernel, True)
# matmul 2
matmul_2_kernel_shape, matmul_2_kernel_length = shape_and_length([10, 500])
matmul_2_kernel = ngraph.constant(
weights[weights_offset : weights_offset + matmul_2_kernel_length].reshape(matmul_2_kernel_shape),
)
weights_offset += matmul_2_kernel_length
matmul_2_node = ngraph.matmul(reshape_2_node, matmul_2_kernel, False, True)
# add 4
add_4_kernel_shape, add_4_kernel_length = shape_and_length([1, 10])
add_4_kernel = ngraph.constant(
weights[weights_offset : weights_offset + add_4_kernel_length].reshape(add_4_kernel_shape),
)
weights_offset += add_4_kernel_length
add_4_node = ngraph.add(matmul_2_node, add_4_kernel)
# softmax
softmax_axis = 1
softmax_node = ngraph.softmax(add_4_node, softmax_axis)
# result
result_node = ngraph.result(softmax_node)
return ngraph.impl.Function(result_node, [param_node], 'lenet')
def main():
log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
args = parse_args()
# ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
log.info('Creating Inference Engine')
ie = IECore()
# ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation------------------------------
log.info(f'Loading the network using ngraph function with weights from {args.model}')
ngraph_function = create_ngraph_function(args)
net = IENetwork(ngraph.impl.Function.to_capsule(ngraph_function))
# ---------------------------Step 3. Configure input & output----------------------------------------------------------
log.info('Configuring input and output blobs')
# Get names of input and output blobs
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
# Set input and output precision manually
net.input_info[input_blob].precision = 'U8'
net.outputs[out_blob].precision = 'FP32'
    # Set the batch size equal to the number of input images
net.batch_size = len(args.input)
# ---------------------------Step 4. Loading model to the device-------------------------------------------------------
log.info('Loading the model to the plugin')
exec_net = ie.load_network(network=net, device_name=args.device)
# ---------------------------Step 5. Create infer request--------------------------------------------------------------
# load_network() method of the IECore class with a specified number of requests (default 1) returns an ExecutableNetwork
# instance which stores infer requests. So you already created Infer requests in the previous step.
# ---------------------------Step 6. Prepare input---------------------------------------------------------------------
n, c, h, w = net.input_info[input_blob].input_data.shape
input_data = np.ndarray(shape=(n, c, h, w))
for i in range(n):
image = read_image(args.input[i])
light_pixel_count = np.count_nonzero(image > 127)
dark_pixel_count = np.count_nonzero(image < 127)
is_light_image = (light_pixel_count - dark_pixel_count) > 0
if is_light_image:
log.warning(f'Image {args.input[i]} is inverted to white over black')
image = cv2.bitwise_not(image)
if image.shape != (h, w):
log.warning(f'Image {args.input[i]} is resized from {image.shape} to {(h, w)}')
image = cv2.resize(image, (w, h))
input_data[i] = image
# ---------------------------Step 7. Do inference----------------------------------------------------------------------
log.info('Starting inference in synchronous mode')
res = exec_net.infer(inputs={input_blob: input_data})
# ---------------------------Step 8. Process output--------------------------------------------------------------------
# Generate a label list
if args.labels:
with open(args.labels, 'r') as f:
labels = [line.split(',')[0].strip() for line in f]
res = res[out_blob]
for i in range(n):
probs = res[i]
# Get an array of args.number_top class IDs in descending order of probability
top_n_idexes = np.argsort(probs)[-args.number_top :][::-1]
header = 'classid probability'
header = header + ' label' if args.labels else header
log.info(f'Image path: {args.input[i]}')
log.info(f'Top {args.number_top} results: ')
log.info(header)
log.info('-' * len(header))
for class_id in top_n_idexes:
probability_indent = ' ' * (len('classid') - len(str(class_id)) + 1)
label_indent = ' ' * (len('probability') - 8) if args.labels else ''
label = labels[class_id] if args.labels else ''
log.info(f'{class_id}{probability_indent}{probs[class_id]:.7f}{label_indent}{label}')
log.info('')
# ----------------------------------------------------------------------------------------------------------------------
log.info('This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n')
return 0
if __name__ == '__main__':
sys.exit(main())

@@ -0,0 +1,133 @@
# Object Detection SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README}
This sample demonstrates how to do inference of object detection networks using the Synchronous Inference Request API.
Models with 1 input and 1 or 2 outputs are supported.
In the latter case, the output blobs must be named "boxes" and "labels".
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------------- | :-------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
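For illustration, loading a custom CPU extension library and a GPU kernel configuration could look like the following minimal sketch (the paths are placeholders, not files shipped with the sample):

```
from openvino.inference_engine import IECore

ie = IECore()
# CPU custom operation kernels, as passed with -l/--extension
ie.add_extension('/path/to/libcustom_cpu_kernels.so', 'CPU')
# GPU custom operation description file, as passed with -c/--config
ie.set_config({'CONFIG_FILE': '/path/to/custom_ops.xml'}, 'GPU')
```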
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Validated Models | [mobilenet-ssd](@ref omz_models_model_mobilenet_ssd), [face-detection-0206](@ref omz_models_model_face_detection_0206) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../inference-engine/samples/object_detection_sample_ssd/README.md), [C](../../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md) |
## How It Works
On startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data.
As a result, the program creates an output image, logging each step in a standard output stream.
You can see the explicit description of
each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
## Running
Run the application with the `-h` option to see the usage message:
```
python <path_to_sample>/object_detection_sample_ssd.py -h
```
Usage message:
```
usage: object_detection_sample_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION]
[-c CONFIG] [-d DEVICE]
[--labels LABELS]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT, --input INPUT
Required. Path to an image file.
-l EXTENSION, --extension EXTENSION
Optional. Required by the CPU Plugin for executing the
custom operation on a CPU. Absolute path to a shared
library with the kernels implementations.
-c CONFIG, --config CONFIG
Optional. Required by GPU or VPU Plugins for the
custom operation kernel. Absolute path to operation
description file (.xml).
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
```
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```
python <path_to_omz_tools>/downloader.py --name mobilenet-ssd
```
2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
```
python <path_to_omz_tools>/converter.py --name mobilenet-ssd
```
3. Perform inference of `car.bmp` using `mobilenet-ssd` model on a `GPU`, for example:
```
python <path_to_sample>/object_detection_sample_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/car.bmp -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with over 50% confidence.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\mobilenet-ssd\FP32\mobilenet-ssd.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image c:\images\car.bmp is resized from (637, 749) to (300, 300)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Found: label = 7, confidence = 1.00, coords = (228, 120), (502, 460)
[ INFO ] Found: label = 7, confidence = 0.95, coords = (637, 233), (743, 608)
[ INFO ] Image out.bmp created!
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
[IECore.set_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519

@@ -0,0 +1,161 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import logging as log
import os
import sys
import cv2
import numpy as np
from openvino.inference_engine import IECore
def parse_args() -> argparse.Namespace:
"""Parse and return command line arguments"""
parser = argparse.ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
# fmt: off
args.add_argument('-h', '--help', action='help', help='Show this help message and exit.')
args.add_argument('-m', '--model', required=True, type=str,
help='Required. Path to an .xml or .onnx file with a trained model.')
args.add_argument('-i', '--input', required=True, type=str, help='Required. Path to an image file.')
args.add_argument('-l', '--extension', type=str, default=None,
help='Optional. Required by the CPU Plugin for executing the custom operation on a CPU. '
'Absolute path to a shared library with the kernels implementations.')
args.add_argument('-c', '--config', type=str, default=None,
help='Optional. Required by GPU or VPU Plugins for the custom operation kernel. '
'Absolute path to operation description file (.xml).')
args.add_argument('-d', '--device', default='CPU', type=str,
help='Optional. Specify the target device to infer on; CPU, GPU, MYRIAD, HDDL or HETERO: '
'is acceptable. The sample will look for a suitable plugin for device specified. '
'Default value is CPU.')
args.add_argument('--labels', default=None, type=str, help='Optional. Path to a labels mapping file.')
# fmt: on
return parser.parse_args()
def main(): # noqa
log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
args = parse_args()
# ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
log.info('Creating Inference Engine')
ie = IECore()
if args.extension and args.device == 'CPU':
log.info(f'Loading the {args.device} extension: {args.extension}')
ie.add_extension(args.extension, args.device)
if args.config and args.device in ('GPU', 'MYRIAD', 'HDDL'):
log.info(f'Loading the {args.device} configuration: {args.config}')
ie.set_config({'CONFIG_FILE': args.config}, args.device)
# ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation or ONNX format---------------
log.info(f'Reading the network: {args.model}')
# (.xml and .bin files) or (.onnx file)
net = ie.read_network(model=args.model)
if len(net.input_info) != 1:
log.error('The sample supports only single input topologies')
return -1
if len(net.outputs) != 1 and not ('boxes' in net.outputs or 'labels' in net.outputs):
log.error('The sample supports models with 1 output or with 2 with the names "boxes" and "labels"')
return -1
# ---------------------------Step 3. Configure input & output----------------------------------------------------------
log.info('Configuring input and output blobs')
# Get name of input blob
input_blob = next(iter(net.input_info))
# Set input and output precision manually
net.input_info[input_blob].precision = 'U8'
if len(net.outputs) == 1:
output_blob = next(iter(net.outputs))
net.outputs[output_blob].precision = 'FP32'
else:
net.outputs['boxes'].precision = 'FP32'
net.outputs['labels'].precision = 'U16'
# ---------------------------Step 4. Loading model to the device-------------------------------------------------------
log.info('Loading the model to the plugin')
exec_net = ie.load_network(network=net, device_name=args.device)
# ---------------------------Step 5. Create infer request--------------------------------------------------------------
# load_network() method of the IECore class with a specified number of requests (default 1) returns an ExecutableNetwork
# instance which stores infer requests. So you already created Infer requests in the previous step.
# ---------------------------Step 6. Prepare input---------------------------------------------------------------------
original_image = cv2.imread(args.input)
image = original_image.copy()
_, _, net_h, net_w = net.input_info[input_blob].input_data.shape
if image.shape[:-1] != (net_h, net_w):
log.warning(f'Image {args.input} is resized from {image.shape[:-1]} to {(net_h, net_w)}')
image = cv2.resize(image, (net_w, net_h))
# Change data layout from HWC to CHW
image = image.transpose((2, 0, 1))
# Add N dimension to transform to NCHW
image = np.expand_dims(image, axis=0)
# ---------------------------Step 7. Do inference----------------------------------------------------------------------
log.info('Starting inference in synchronous mode')
res = exec_net.infer(inputs={input_blob: image})
# ---------------------------Step 8. Process output--------------------------------------------------------------------
# Generate a label list
if args.labels:
with open(args.labels, 'r') as f:
labels = [line.split(',')[0].strip() for line in f]
output_image = original_image.copy()
h, w, _ = output_image.shape
if len(net.outputs) == 1:
res = res[output_blob]
# Change a shape of a numpy.ndarray with results ([1, 1, N, 7]) to get another one ([N, 7]),
# where N is the number of detected bounding boxes
detections = res.reshape(-1, 7)
else:
detections = res['boxes']
labels = res['labels']
# Redefine scale coefficients
w, h = w / net_w, h / net_h
for i, detection in enumerate(detections):
if len(net.outputs) == 1:
_, class_id, confidence, xmin, ymin, xmax, ymax = detection
else:
class_id = labels[i]
xmin, ymin, xmax, ymax, confidence = detection
if confidence > 0.5:
label = int(labels[class_id]) if args.labels else int(class_id)
xmin = int(xmin * w)
ymin = int(ymin * h)
xmax = int(xmax * w)
ymax = int(ymax * h)
log.info(f'Found: label = {label}, confidence = {confidence:.2f}, ' f'coords = ({xmin}, {ymin}), ({xmax}, {ymax})')
            # Draw a bounding box on the output image
cv2.rectangle(output_image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
cv2.imwrite('out.bmp', output_image)
if os.path.exists('out.bmp'):
log.info('Image out.bmp created!')
else:
log.error('Image out.bmp was not created. Check your permissions.')
# ----------------------------------------------------------------------------------------------------------------------
log.info('This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n')
return 0
if __name__ == '__main__':
sys.exit(main())

@@ -0,0 +1,2 @@
opencv-python==4.5.*
numpy>=1.16.6,<1.20

samples/python/setup.cfg
@@ -0,0 +1,19 @@
[flake8]
filename = *.py
max-line-length = 160
ignore = E203
max-parameters-amount = 8
show_source = True
docstring-convention = google
enable-extensions = G
[pydocstyle]
convention = google
[mypy]
ignore_missing_imports = True
disable_error_code = attr-defined
show_column_numbers = True
show_error_context = True
show_absolute_path = True
pretty = True

@@ -0,0 +1,220 @@
# Automatic Speech Recognition Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_speech_sample_README}
This sample demonstrates how to do synchronous inference of an acoustic model based on Kaldi\* neural networks and speech feature vectors.
The sample works with Kaldi ARK or NumPy* uncompressed NPZ files, so it does not cover the end-to-end speech recognition scenario (speech to text): additional preprocessing (feature extraction) is required to get a feature vector from a speech signal, and postprocessing (decoding) is required to produce text from scores.
The Automatic Speech Recognition Python sample application demonstrates how to use the following Inference Engine Python API in applications:
| Feature | API | Description |
| :------------------ | :---------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------- |
| Import/Export Model | [IECore.import_network], [ExecutableNetwork.export] | The GNA plugin supports loading and saving of the GNA-optimized model |
| Network Operations | [IENetwork.batch_size], [CDataPtr.shape], [ExecutableNetwork.input_info], [ExecutableNetwork.outputs] | Network management: configure input and output blobs |
| Network Operations | [IENetwork.add_outputs] | Network management: change names of output layers in the network |
| InferRequest Operations | InferRequest.query_state, VariableState.reset | Gets and resets the state control interface for a given executable network |
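As a rough illustration of the state control API (a minimal sketch, with a placeholder model path; the sample itself does this between utterance inferences), resetting variable states can look like this:

```
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='wsj_dnn5b.xml')  # placeholder path
exec_net = ie.load_network(network=net, device_name='GNA_AUTO')

# Reset states between utterance inferences so one utterance does not affect the next
for request in exec_net.requests:
    for state in request.query_state():
        state.reset()
```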
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :---------------------------------------------------------------------------------------------------- |
| Validated Models | Acoustic model based on Kaldi* neural networks (see [Model Preparation](#model-preparation) section) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin) |
| Supported devices | See [Execution Modes](#execution-modes) section below and [List Supported Devices](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../inference-engine/samples/speech_sample/README.md) |
## How It Works
At startup, the sample application reads command-line parameters, loads a specified model and input data to the Inference Engine plugin, performs synchronous inference on all speech utterances stored in the input file, logging each step in a standard output stream.
You can see the explicit description of
each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
## GNA-specific details
### Quantization
If the GNA device is selected (for example, using the `-d GNA` flag), the GNA Inference Engine plugin quantizes the model and the input feature vector sequence to integer representation before performing inference.
The `-qb` flag provides a hint to the GNA plugin regarding the preferred target weight resolution for all layers.
For example, when `-qb 8` is specified, the plugin will use 8-bit weights wherever possible in the
network.
> **NOTE**:
>
> - It is not always possible to use 8-bit weights due to GNA hardware limitations. For example, convolutional layers always use 16-bit weights (GNA hardware version 1 and 2). This limitation will be removed in GNA hardware version 3 and higher.
>
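For instance, a possible invocation that hints the plugin to use 8-bit weights might look like this (model and input paths are placeholders):

```
python <path_to_sample>/speech_sample.py -m <path_to_model>/wsj_dnn5b.xml -i <path_to_ark>/dev93_10.ark -d GNA_HW -qb 8
```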
### Execution Modes
Several execution modes are supported via the `-d` flag:
- `CPU` - All calculations are performed on a CPU device using the CPU Plugin.
- `GPU` - All calculations are performed on a GPU device using the GPU Plugin.
- `MYRIAD` - All calculations are performed on an Intel® Neural Compute Stick 2 device using the VPU MYRIAD Plugin.
- `GNA_AUTO` - GNA hardware is used if available and the driver is installed. Otherwise, the GNA device is emulated in fast-but-not-bit-exact mode.
- `GNA_HW` - GNA hardware is used if available and the driver is installed. Otherwise, an error will occur.
- `GNA_SW` - Deprecated. The GNA device is emulated in fast-but-not-bit-exact mode.
- `GNA_SW_FP32` - Substitutes low-precision parameters and calculations with floating-point (FP32) ones.
- `GNA_SW_EXACT` - The GNA device is emulated in bit-exact mode.
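For example, running on GNA hardware with a CPU fallback for layers that GNA does not support could look like this (paths are placeholders):

```
python <path_to_sample>/speech_sample.py -m <path_to_model>/wsj_dnn5b.xml -i <path_to_ark>/dev93_10.ark -d HETERO:GNA,CPU
```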
### Loading and Saving Models
The GNA plugin supports loading and saving of the GNA-optimized model (non-IR) via the `-rg` and `-wg` flags.
This makes it possible to avoid the cost of full model quantization at run time.
In addition to performing inference directly from a GNA model file, this option makes it possible to:
- Convert from IR format to GNA format model file (`-m`, `-wg`)
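For example, a GNA-optimized model could first be exported and then reused for inference, as sketched below (paths and file names are placeholders):

```
# Quantize the IR model once and save it in the GNA format
python <path_to_sample>/speech_sample.py -m <path_to_model>/wsj_dnn5b.xml -i <path_to_ark>/dev93_10.ark -d GNA_HW -wg wsj_dnn5b.gna

# Later, run inference directly from the saved GNA model
python <path_to_sample>/speech_sample.py -rg wsj_dnn5b.gna -i <path_to_ark>/dev93_10.ark -d GNA_HW
```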
## Running
Run the application with the `-h` option to see the usage message:
```
python <path_to_sample>/speech_sample.py -h
```
Usage message:
```
usage: speech_sample.py [-h] (-m MODEL | -rg IMPORT_GNA_MODEL) -i INPUT
[-o OUTPUT] [-r REFERENCE] [-d DEVICE]
[-bs BATCH_SIZE] [-qb QUANTIZATION_BITS]
[-sf SCALE_FACTOR] [-wg EXPORT_GNA_MODEL] [-pc]
[-a {CORE,ATOM}] [-iname INPUT_LAYERS]
[-oname OUTPUT_LAYERS]
optional arguments:
-m MODEL, --model MODEL
Path to an .xml file with a trained model (required if
-rg is missing).
-rg IMPORT_GNA_MODEL, --import_gna_model IMPORT_GNA_MODEL
Read GNA model from file using path/filename provided
(required if -m is missing).
Options:
-h, --help Show this help message and exit.
-i INPUT, --input INPUT
Required. Path to an input file (.ark or .npz).
-o OUTPUT, --output OUTPUT
Optional. Output file name to save inference results
(.ark or .npz).
-r REFERENCE, --reference REFERENCE
Optional. Read reference score file and compare
scores.
-d DEVICE, --device DEVICE
Optional. Specify a target device to infer on. CPU,
GPU, MYRIAD, GNA_AUTO, GNA_HW, GNA_SW_FP32,
GNA_SW_EXACT and HETERO with combination of GNA as the
primary device and CPU as a secondary (e.g.
HETERO:GNA,CPU) are supported. The sample will look
for a suitable plugin for device specified. Default
value is CPU.
-bs BATCH_SIZE, --batch_size BATCH_SIZE
Optional. Batch size 1-8 (default 1).
-qb QUANTIZATION_BITS, --quantization_bits QUANTIZATION_BITS
Optional. Weight bits for quantization: 8 or 16
(default 16).
-sf SCALE_FACTOR, --scale_factor SCALE_FACTOR
Optional. The user-specified input scale factor for
quantization. If the network contains multiple inputs,
provide scale factors by separating them with commas.
-wg EXPORT_GNA_MODEL, --export_gna_model EXPORT_GNA_MODEL
Optional. Write GNA model to file using path/filename
provided.
-pc, --performance_counter
Optional. Enables performance report (specify -a to
ensure arch accurate results).
-a {CORE,ATOM}, --arch {CORE,ATOM}
Optional. Specify architecture. CORE, ATOM with the
combination of -pc.
-iname INPUT_LAYERS, --input_layers INPUT_LAYERS
Optional. Layer names for input blobs. The names are
separated with ",". Allows to change the order of
input layers for -i flag. Example: Input1,Input2
-oname OUTPUT_LAYERS, --output_layers OUTPUT_LAYERS
Optional. Layer names for output blobs. The names are
separated with ",". Allows to change the order of
output layers for -o flag. Example:
Output1:port,Output2:port.
```
## Model Preparation
You can use the following Model Optimizer command to convert a Kaldi nnet1 or nnet2 neural network to the Inference Engine Intermediate Representation format:
```
python <path_to_mo>/mo.py --framework kaldi --input_model wsj_dnn5b.nnet --counts wsj_dnn5b.counts --remove_output_softmax --output_dir <path_to_dir>
```
The following pre-trained models are available:
- wsj_dnn5b_smbr
- rm_lstm4f
- rm_cnn4a_smbr
All of them can be downloaded from [https://storage.openvinotoolkit.org/models_contrib/speech/2021.2](https://storage.openvinotoolkit.org/models_contrib/speech/2021.2).
## Speech Inference
You can do inference on Intel® Processors with the GNA co-processor (or emulation library):
```
python <path_to_sample>/speech_sample.py -m <path_to_model>/wsj_dnn5b.xml -i <path_to_ark>/dev93_10.ark -r <path_to_ark>/dev93_scores_10.ark -d GNA_AUTO -o result.npz
```
> **NOTES**:
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample supports input and output in the NumPy file format (.npz).
## Sample Output
The sample application logs each step in a standard output stream.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: wsj_dnn5b.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Using scale factor(s) calculated from first utterance
[ INFO ] For input 0 using scale factor of 2175.4322418
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Utterance 0 (4k0c0301)
[ INFO ] Output blob name: affinetransform14/Fused_Add_
[ INFO ] Frames in utterance: 1294
[ INFO ] Total time in Infer (HW and SW): 6211.45ms
[ INFO ] max error: 0.7051840
[ INFO ] avg error: 0.0448388
[ INFO ] avg rms error: 0.0582387
[ INFO ] stdev error: 0.0371650
[ INFO ]
[ INFO ] Utterance 1 (4k0c0302)
[ INFO ] Output blob name: affinetransform14/Fused_Add_
[ INFO ] Frames in utterance: 1005
[ INFO ] Total time in Infer (HW and SW): 4742.27ms
[ INFO ] max error: 0.7575974
[ INFO ] avg error: 0.0452166
[ INFO ] avg rms error: 0.0586013
[ INFO ] stdev error: 0.0372769
...
[ INFO ] Total sample time: 40219.99ms
[ INFO ] File result.npz was created!
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IENetwork.batch_size]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a79a647cb1b49645616eaeb2ca255ef2e
[IENetwork.add_outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#ae8024b07f3301d6d5de5c0d153e2e6e6
[CDataPtr.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1CDataPtr.html#aa6fd459edb323d1c6215dc7a970ebf7f
[ExecutableNetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#ac76a04c2918607874018d2e15a2f274f
[ExecutableNetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#a4a631776df195004b1523e6ae91a65c1
[IECore.import_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#afdeac5192bb1d9e64722f1071fb0a64a
[ExecutableNetwork.export]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#afa78158252f0d8070181bafec4318413

@@ -0,0 +1,49 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
def parse_args() -> argparse.Namespace:
"""Parse and return command line arguments"""
parser = argparse.ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
model = parser.add_mutually_exclusive_group(required=True)
args.add_argument('-h', '--help', action='help', help='Show this help message and exit.')
model.add_argument('-m', '--model', type=str,
help='Path to an .xml file with a trained model (required if -rg is missing).')
model.add_argument('-rg', '--import_gna_model', type=str,
help='Read GNA model from file using path/filename provided (required if -m is missing).')
args.add_argument('-i', '--input', required=True, type=str, help='Required. Path to an input file (.ark or .npz).')
args.add_argument('-o', '--output', type=str,
help='Optional. Output file name to save inference results (.ark or .npz).')
args.add_argument('-r', '--reference', type=str,
help='Optional. Read reference score file and compare scores.')
args.add_argument('-d', '--device', default='CPU', type=str,
help='Optional. Specify a target device to infer on. '
'CPU, GPU, MYRIAD, GNA_AUTO, GNA_HW, GNA_SW_FP32, GNA_SW_EXACT and HETERO with combination of GNA'
' as the primary device and CPU as a secondary (e.g. HETERO:GNA,CPU) are supported. '
'The sample will look for a suitable plugin for device specified. Default value is CPU.')
args.add_argument('-bs', '--batch_size', default=1, type=int, help='Optional. Batch size 1-8 (default 1).')
args.add_argument('-qb', '--quantization_bits', default=16, type=int,
help='Optional. Weight bits for quantization: 8 or 16 (default 16).')
args.add_argument('-sf', '--scale_factor', type=str,
help='Optional. The user-specified input scale factor for quantization. '
'If the network contains multiple inputs, provide scale factors by separating them with commas.')
args.add_argument('-wg', '--export_gna_model', type=str,
help='Optional. Write GNA model to file using path/filename provided.')
args.add_argument('-we', '--export_embedded_gna_model', type=str, help=argparse.SUPPRESS)
args.add_argument('-we_gen', '--embedded_gna_configuration', default='GNA1', type=str, help=argparse.SUPPRESS)
args.add_argument('-pc', '--performance_counter', action='store_true',
help='Optional. Enables performance report (specify -a to ensure arch accurate results).')
args.add_argument('-a', '--arch', default='CORE', type=str.upper, choices=['CORE', 'ATOM'],
help='Optional. Specify architecture. CORE, ATOM with the combination of -pc.')
args.add_argument('-iname', '--input_layers', type=str,
help='Optional. Layer names for input blobs. The names are separated with ",". '
'Allows to change the order of input layers for -i flag. Example: Input1,Input2')
args.add_argument('-oname', '--output_layers', type=str,
help='Optional. Layer names for output blobs. The names are separated with ",". '
'Allows to change the order of output layers for -o flag. Example: Output1:port,Output2:port.')
return parser.parse_args()

@@ -0,0 +1,103 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import logging as log
import sys
from typing import IO, Any
import numpy as np
def read_ark_file(file_name: str) -> dict:
"""Read utterance matrices from a .ark file"""
def read_key(input_file: IO[Any]) -> str:
"""Read a identifier of utterance matrix"""
key = ''
char = input_file.read(1).decode()
while char not in ('', ' '):
key += char
char = input_file.read(1).decode()
return key
def read_matrix(input_file: IO[Any]) -> np.ndarray:
"""Read a utterance matrix"""
header = input_file.read(5).decode()
if 'FM' in header:
num_of_bytes = 4
dtype = 'float32'
elif 'DM' in header:
num_of_bytes = 8
dtype = 'float64'
else:
log.error(f'The utterance header "{header}" does not contain information about a type of elements.')
sys.exit(-7)
_, rows, _, cols = np.frombuffer(input_file.read(10), 'int8, int32, int8, int32')[0]
buffer = input_file.read(rows * cols * num_of_bytes)
vector = np.frombuffer(buffer, dtype)
matrix = np.reshape(vector, (rows, cols))
return matrix
utterances = {}
with open(file_name, 'rb') as input_file:
key = read_key(input_file)
while key:
utterances[key] = read_matrix(input_file)
key = read_key(input_file)
return utterances
def write_ark_file(file_name: str, utterances: dict):
"""Write utterance matrices to a .ark file"""
with open(file_name, 'wb') as output_file:
for key, matrix in sorted(utterances.items()):
# write a matrix key
output_file.write(key.encode())
output_file.write(' '.encode())
output_file.write('\0B'.encode())
# write a matrix precision
if matrix.dtype == 'float32':
output_file.write('FM '.encode())
elif matrix.dtype == 'float64':
output_file.write('DM '.encode())
# write a matrix shape
output_file.write('\04'.encode())
output_file.write(matrix.shape[0].to_bytes(4, byteorder='little', signed=False))
output_file.write('\04'.encode())
output_file.write(matrix.shape[1].to_bytes(4, byteorder='little', signed=False))
# write a matrix data
output_file.write(matrix.tobytes())
def read_utterance_file(file_name: str) -> dict:
"""Read utterance matrices from a file"""
file_extension = file_name.split('.')[-1]
if file_extension == 'ark':
return read_ark_file(file_name)
elif file_extension == 'npz':
return dict(np.load(file_name))
else:
log.error(f'The file {file_name} cannot be read. The sample supports only .ark and .npz files.')
sys.exit(-1)
def write_utterance_file(file_name: str, utterances: dict):
"""Write utterance matrices to a file"""
file_extension = file_name.split('.')[-1]
if file_extension == 'ark':
write_ark_file(file_name, utterances)
elif file_extension == 'npz':
np.savez(file_name, **utterances)
else:
log.error(f'The file {file_name} cannot be written. The sample supports only .ark and .npz files.')
sys.exit(-2)

@@ -0,0 +1,336 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import logging as log
import re
import sys
from timeit import default_timer
from typing import Union
import numpy as np
from arg_parser import parse_args
from file_options import read_utterance_file, write_utterance_file
from openvino.inference_engine import ExecutableNetwork, IECore, IENetwork
# Operating Frequency for GNA HW devices for Core and Atom architecture
GNA_CORE_FREQUENCY = 400
GNA_ATOM_FREQUENCY = 200
def get_scale_factor(matrix: np.ndarray) -> float:
"""Get scale factor for quantization using utterance matrix"""
# Max to find scale factor
target_max = 16384
max_val = np.max(matrix)
if max_val == 0:
return 1.0
else:
return target_max / max_val
def infer_data(data: dict, exec_net: ExecutableNetwork, input_blobs: list, output_blobs: list) -> np.ndarray:
"""Do a synchronous matrix inference"""
matrix_shape = next(iter(data.values())).shape
result = {}
for blob_name in output_blobs:
shape = exec_net.outputs[blob_name].shape
batch_size = shape[0]
result[blob_name] = np.ndarray((matrix_shape[0], shape[-1]))
slice_begin = 0
slice_end = batch_size
while slice_begin < matrix_shape[0]:
vectors = {blob_name: data[blob_name][slice_begin:slice_end] for blob_name in input_blobs}
num_of_vectors = next(iter(vectors.values())).shape[0]
if num_of_vectors < batch_size:
temp = {blob_name: np.zeros((batch_size, vectors[blob_name].shape[1])) for blob_name in input_blobs}
for blob_name in input_blobs:
temp[blob_name][:num_of_vectors] = vectors[blob_name]
vectors = temp
vector_results = exec_net.infer(vectors)
for blob_name in output_blobs:
result[blob_name][slice_begin:slice_end] = vector_results[blob_name][:num_of_vectors]
slice_begin += batch_size
slice_end += batch_size
return result
def compare_with_reference(result: np.ndarray, reference: np.ndarray):
error_matrix = np.absolute(result - reference)
max_error = np.max(error_matrix)
sum_error = np.sum(error_matrix)
avg_error = sum_error / error_matrix.size
sum_square_error = np.sum(np.square(error_matrix))
avg_rms_error = np.sqrt(sum_square_error / error_matrix.size)
stdev_error = np.sqrt(sum_square_error / error_matrix.size - avg_error * avg_error)
log.info(f'max error: {max_error:.7f}')
log.info(f'avg error: {avg_error:.7f}')
log.info(f'avg rms error: {avg_rms_error:.7f}')
log.info(f'stdev error: {stdev_error:.7f}')
def get_input_layer_list(net: Union[IENetwork, ExecutableNetwork], args: argparse.Namespace) -> list:
"""Get a list of input layer names"""
return re.split(', |,', args.input_layers) if args.input_layers else [next(iter(net.input_info))]
def get_output_layer_list(net: Union[IENetwork, ExecutableNetwork],
args: argparse.Namespace, with_ports: bool) -> list:
"""Get a list of output layer names"""
if args.output_layers:
output_name_port = [output.split(':') for output in re.split(', |,', args.output_layers)]
if with_ports:
try:
return [(blob_name, int(port)) for blob_name, port in output_name_port]
except ValueError:
log.error('Incorrect value for -oname/--output_layers option, please specify a port for output layer.')
sys.exit(-4)
else:
return [blob_name for blob_name, _ in output_name_port]
else:
return [list(net.outputs.keys())[-1]]
def parse_scale_factors(args: argparse.Namespace) -> list:
"""Get a list of scale factors for input files"""
input_files = re.split(', |,', args.input)
scale_factors = re.split(', |,', str(args.scale_factor))
scale_factors = list(map(float, scale_factors))
if len(input_files) != len(scale_factors):
log.error(f'Incorrect command line for multiple inputs: {len(scale_factors)} scale factors provided for '
f'{len(input_files)} input files.')
sys.exit(-7)
for i, scale_factor in enumerate(scale_factors):
if float(scale_factor) < 0:
log.error(f'Scale factor for input #{i} (counting from zero) is out of range (must be positive).')
sys.exit(-8)
return scale_factors
def set_scale_factors(plugin_config: dict, scale_factors: list):
"""Set a scale factor provided for each input"""
for i, scale_factor in enumerate(scale_factors):
log.info(f'For input {i} using scale factor of {scale_factor:.7f}')
plugin_config[f'GNA_SCALE_FACTOR_{i}'] = str(scale_factor)
def main():
log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
args = parse_args()
# ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
log.info('Creating Inference Engine')
ie = IECore()
# ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation---------------
if args.model:
log.info(f'Reading the network: {args.model}')
# .xml and .bin files
net = ie.read_network(model=args.model)
# ---------------------------Step 3. Configure input & output----------------------------------------------------------
log.info('Configuring input and output blobs')
# Mark layers from args.output_layers as outputs
if args.output_layers:
net.add_outputs(get_output_layer_list(net, args, with_ports=True))
# Get names of input and output blobs
input_blobs = get_input_layer_list(net, args)
output_blobs = get_output_layer_list(net, args, with_ports=False)
# Set input and output precision manually
for blob_name in input_blobs:
net.input_info[blob_name].precision = 'FP32'
for blob_name in output_blobs:
net.outputs[blob_name].precision = 'FP32'
net.batch_size = args.batch_size
# ---------------------------Step 4. Loading model to the device-------------------------------------------------------
devices = args.device.replace('HETERO:', '').split(',')
plugin_config = {}
if 'GNA' in args.device:
gna_device_mode = devices[0] if '_' in devices[0] else 'GNA_AUTO'
devices[0] = 'GNA'
plugin_config['GNA_DEVICE_MODE'] = gna_device_mode
plugin_config['GNA_PRECISION'] = f'I{args.quantization_bits}'
# Set a GNA scale factor
if args.import_gna_model:
if args.scale_factor:
log.warning(f'Custom scale factor will be used for imported GNA model: {args.import_gna_model}')
set_scale_factors(plugin_config, parse_scale_factors(args))
else:
log.info(f'Using scale factor from the imported GNA model: {args.import_gna_model}')
else:
if args.scale_factor:
set_scale_factors(plugin_config, parse_scale_factors(args))
else:
scale_factors = []
for file_name in re.split(', |,', args.input):
first_utterance = next(iter(read_utterance_file(file_name).values()))
scale_factors.append(get_scale_factor(first_utterance))
log.info('Using scale factor(s) calculated from first utterance')
set_scale_factors(plugin_config, scale_factors)
if args.export_embedded_gna_model:
plugin_config['GNA_FIRMWARE_MODEL_IMAGE'] = args.export_embedded_gna_model
plugin_config['GNA_FIRMWARE_MODEL_IMAGE_GENERATION'] = args.embedded_gna_configuration
if args.performance_counter:
plugin_config['PERF_COUNT'] = 'YES'
device_str = f'HETERO:{",".join(devices)}' if 'HETERO' in args.device else devices[0]
log.info('Loading the model to the plugin')
if args.model:
exec_net = ie.load_network(net, device_str, plugin_config)
else:
exec_net = ie.import_network(args.import_gna_model, device_str, plugin_config)
input_blobs = get_input_layer_list(exec_net, args)
output_blobs = get_output_layer_list(exec_net, args, with_ports=False)
if args.input:
input_files = re.split(', |,', args.input)
if len(input_blobs) != len(input_files):
log.error(f'Number of network inputs ({len(input_blobs)}) is not equal '
f'to number of ark files ({len(input_files)})')
sys.exit(-3)
if args.reference:
reference_files = re.split(', |,', args.reference)
if len(output_blobs) != len(reference_files):
log.error('The number of reference files is not equal to the number of network outputs.')
sys.exit(-5)
if args.output:
output_files = re.split(', |,', args.output)
if len(output_blobs) != len(output_files):
log.error('The number of output files is not equal to the number of network outputs.')
sys.exit(-6)
if args.export_gna_model:
log.info(f'Writing GNA Model to {args.export_gna_model}')
exec_net.export(args.export_gna_model)
return 0
if args.export_embedded_gna_model:
log.info(f'Exported GNA embedded model to file {args.export_embedded_gna_model}')
log.info(f'GNA embedded model export done for GNA generation {args.embedded_gna_configuration}')
return 0
# ---------------------------Step 5. Create infer request--------------------------------------------------------------
# load_network() method of the IECore class with a specified number of requests (default 1) returns an ExecutableNetwork
# instance which stores infer requests. So you already created Infer requests in the previous step.
# ---------------------------Step 6. Prepare input---------------------------------------------------------------------
file_data = [read_utterance_file(file_name) for file_name in input_files]
input_data = {
utterance_name: {
input_blobs[i]: file_data[i][utterance_name] for i in range(len(input_blobs))
}
for utterance_name in file_data[0].keys()
}
if args.reference:
references = {output_blobs[i]: read_utterance_file(reference_files[i]) for i in range(len(output_blobs))}
# ---------------------------Step 7. Do inference----------------------------------------------------------------------
log.info('Starting inference in synchronous mode')
results = {blob_name: {} for blob_name in output_blobs}
total_infer_time = 0
for i, key in enumerate(sorted(input_data)):
start_infer_time = default_timer()
# Reset states between utterance inferences to remove a memory impact
for request in exec_net.requests:
for state in request.query_state():
state.reset()
result = infer_data(input_data[key], exec_net, input_blobs, output_blobs)
for blob_name in result.keys():
results[blob_name][key] = result[blob_name]
infer_time = default_timer() - start_infer_time
total_infer_time += infer_time
num_of_frames = file_data[0][key].shape[0]
avg_infer_time_per_frame = infer_time / num_of_frames
# ---------------------------Step 8. Process output--------------------------------------------------------------------
log.info('')
log.info(f'Utterance {i} ({key}):')
log.info(f'Total time in Infer (HW and SW): {infer_time * 1000:.2f}ms')
log.info(f'Frames in utterance: {num_of_frames}')
log.info(f'Average Infer time per frame: {avg_infer_time_per_frame * 1000:.2f}ms')
for blob_name in output_blobs:
log.info('')
log.info(f'Output blob name: {blob_name}')
log.info(f'Number scores per frame: {results[blob_name][key].shape[1]}')
if args.reference:
log.info('')
compare_with_reference(results[blob_name][key], references[blob_name][key])
if args.performance_counter:
if 'GNA' in args.device:
pc = exec_net.requests[0].get_perf_counts()
total_cycles = int(pc['1.1 Total scoring time in HW']['real_time'])
stall_cycles = int(pc['1.2 Stall scoring time in HW']['real_time'])
active_cycles = total_cycles - stall_cycles
frequency = 10**6
if args.arch == 'CORE':
frequency *= GNA_CORE_FREQUENCY
else:
frequency *= GNA_ATOM_FREQUENCY
total_inference_time = total_cycles / frequency
active_time = active_cycles / frequency
stall_time = stall_cycles / frequency
log.info('')
log.info('Performance Statistics of GNA Hardware')
log.info(f' Total Inference Time: {(total_inference_time * 1000):.4f} ms')
log.info(f' Active Time: {(active_time * 1000):.4f} ms')
log.info(f' Stall Time: {(stall_time * 1000):.4f} ms')
log.info('')
log.info(f'Total sample time: {total_infer_time * 1000:.2f}ms')
if args.output:
for i, blob_name in enumerate(results):
write_utterance_file(output_files[i], results[blob_name])
log.info(f'File {output_files[i]} was created!')
# ----------------------------------------------------------------------------------------------------------------------
log.info('This sample is an API example, '
'for any performance measurements please use the dedicated benchmark_app tool\n')
return 0
if __name__ == '__main__':
sys.exit(main())

@@ -0,0 +1,146 @@
# Style Transfer Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_style_transfer_sample_README}
This sample demonstrates how to do synchronous inference of style transfer networks using the Network Batch Size feature.
You can specify multiple input images; the network batch size will automatically be set equal to their number.
Models with only 1 input and output are supported.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Network Operations | [IENetwork.batch_size] | Network management: configure input and output blobs |
| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
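As a minimal sketch of the batch size handling (the model path is a placeholder; the sample itself derives the batch size from the number of `-i` arguments):

```
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='fast-neural-style-mosaic-onnx.onnx')  # placeholder path
net.batch_size = 2  # e.g. two images were passed with -i
exec_net = ie.load_network(network=net, device_name='CPU')
```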
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
| Options | Values |
| :------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Validated Models | [fast-neural-style-mosaic-onnx](@ref omz_models_model_fast_neural_style_mosaic_onnx) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../inference-engine/samples/style_transfer_sample/README.md) |
## How It Works
At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data.
As a result, the program creates an output image(s), logging each step in a standard output stream.
You can see the explicit description of
each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
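The flow can be summarized by the following minimal sketch (a simplified view of the full sample; the model and image paths and the `CPU` device are placeholders, and error handling and postprocessing are omitted):
```
from openvino.inference_engine import IECore
import cv2
import numpy as np

ie = IECore()
net = ie.read_network(model='fast-neural-style-mosaic-onnx.onnx')  # placeholder model path
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))

images = ['car.bmp', 'cat.jpg']  # placeholder image paths
net.batch_size = len(images)     # the batch size follows the number of input images
n, c, h, w = net.input_info[input_blob].input_data.shape

exec_net = ie.load_network(network=net, device_name='CPU')

input_data = np.zeros((n, c, h, w))
for i, path in enumerate(images):
    image = cv2.resize(cv2.imread(path), (w, h))  # read and resize to the network input size
    input_data[i] = image.transpose((2, 0, 1))    # change data layout from HWC to CHW

res = exec_net.infer(inputs={input_blob: input_data})[out_blob]  # res shape: (n, c, h, w)
```
Each `res[i]` is an image in CHW layout, which the full sample converts back to HWC and writes to `out_<i>.bmp`.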
## Running
Run the application with the `-h` option to see the usage message:
```
python <path_to_sample>/style_transfer_sample.py -h
```
Usage message:
```
usage: style_transfer_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l EXTENSION] [-c CONFIG] [-d DEVICE]
[--original_size] [--mean_val_r MEAN_VAL_R]
[--mean_val_g MEAN_VAL_G]
[--mean_val_b MEAN_VAL_B]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Required. Path to an image file.
-l EXTENSION, --extension EXTENSION
Optional. Required by the CPU Plugin for executing the
custom operation on a CPU. Absolute path to a shared
library with the kernels implementations.
-c CONFIG, --config CONFIG
Optional. Required by GPU or VPU Plugins for the
custom operation kernel. Absolute path to operation
description file (.xml).
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--original_size Optional. Resize an output image to original image
size.
--mean_val_r MEAN_VAL_R
Optional. Mean value of red channel for mean value
subtraction in postprocessing.
--mean_val_g MEAN_VAL_G
Optional. Mean value of green channel for mean value
subtraction in postprocessing.
--mean_val_b MEAN_VAL_B
Optional. Mean value of blue channel for mean value
subtraction in postprocessing.
```
To run the sample, you need to specify a model and image:
- You can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application (a minimal snippet is shown after these notes) or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
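As an illustration of the first note: if your model expects RGB input and you prefer to adjust the sample rather than reconvert the model, the channel order can be reversed right after the image is read (a sketch using the same variable names as this sample's code):
```
image = cv2.imread(args.input[i])  # OpenCV reads images in BGR channel order
image = image[:, :, ::-1]          # reverse the last axis to get RGB before further preprocessing
```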
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```
python <path_to_omz_tools>/downloader.py --name fast-neural-style-mosaic-onnx
```
2. The `fast-neural-style-mosaic-onnx` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script:
```
python <path_to_omz_tools>/converter.py --name <model_name>
```
3. Perform inference of `car.bmp` and `cat.jpg` using the `fast-neural-style-mosaic-onnx` model on a `GPU`, for example:
```
python <path_to_sample>/style_transfer_sample.py -m <path_to_model>/fast-neural-style-mosaic-onnx.onnx -i <path_to_image>/car.bmp <path_to_image>/cat.jpg -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and creates an output image (`out_0.bmp`) or a sequence of images (`out_0.bmp`, ..., `out_<n>.bmp`), redrawn in the style of the style transfer model used.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\fast-neural-style-mosaic-onnx\fast-neural-style-mosaic-onnx.onnx
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image c:\images\car.bmp is resized from (637, 749) to (224, 224)
[ WARNING ] Image c:\images\cat.jpg is resized from (300, 300) to (224, 224)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image out_0.bmp created!
[ INFO ] Image out_1.bmp created!
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
[IECore.set_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IENetwork.batch_size]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a79a647cb1b49645616eaeb2ca255ef2e
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519

View File

@@ -0,0 +1,150 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import argparse
import logging as log
import os
import sys
import cv2
import numpy as np
from openvino.inference_engine import IECore
def parse_args() -> argparse.Namespace:
"""Parse and return command line arguments"""
parser = argparse.ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
# fmt: off
args.add_argument('-h', '--help', action='help', help='Show this help message and exit.')
args.add_argument('-m', '--model', required=True, type=str,
help='Required. Path to an .xml or .onnx file with a trained model.')
args.add_argument('-i', '--input', required=True, type=str, nargs='+', help='Required. Path to an image file.')
args.add_argument('-l', '--extension', type=str, default=None,
help='Optional. Required by the CPU Plugin for executing the custom operation on a CPU. '
'Absolute path to a shared library with the kernels implementations.')
args.add_argument('-c', '--config', type=str, default=None,
help='Optional. Required by GPU or VPU Plugins for the custom operation kernel. '
'Absolute path to operation description file (.xml).')
args.add_argument('-d', '--device', default='CPU', type=str,
help='Optional. Specify the target device to infer on; CPU, GPU, MYRIAD, HDDL or HETERO: '
'is acceptable. The sample will look for a suitable plugin for device specified. '
'Default value is CPU.')
args.add_argument('--original_size', action='store_true', default=False,
help='Optional. Resize an output image to original image size.')
args.add_argument('--mean_val_r', default=0, type=float,
help='Optional. Mean value of red channel for mean value subtraction in postprocessing.')
args.add_argument('--mean_val_g', default=0, type=float,
help='Optional. Mean value of green channel for mean value subtraction in postprocessing.')
args.add_argument('--mean_val_b', default=0, type=float,
help='Optional. Mean value of blue channel for mean value subtraction in postprocessing.')
# fmt: on
return parser.parse_args()
def main():
log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
args = parse_args()
# ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
log.info('Creating Inference Engine')
ie = IECore()
if args.extension and args.device == 'CPU':
log.info(f'Loading the {args.device} extension: {args.extension}')
ie.add_extension(args.extension, args.device)
if args.config and args.device in ('GPU', 'MYRIAD', 'HDDL'):
log.info(f'Loading the {args.device} configuration: {args.config}')
ie.set_config({'CONFIG_FILE': args.config}, args.device)
# ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation or ONNX format---------------
log.info(f'Reading the network: {args.model}')
# (.xml and .bin files) or (.onnx file)
net = ie.read_network(model=args.model)
if len(net.input_info) != 1:
log.error('Sample supports only single input topologies')
return -1
if len(net.outputs) != 1:
log.error('Sample supports only single output topologies')
return -1
# ---------------------------Step 3. Configure input & output----------------------------------------------------------
log.info('Configuring input and output blobs')
# Get names of input and output blobs
input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))
# Set input and output precision manually
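    # (U8 input precision matches the 8-bit image data read by OpenCV; FP32 output precision
    # keeps the full floating-point result for postprocessing)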
net.input_info[input_blob].precision = 'U8'
net.outputs[out_blob].precision = 'FP32'
    # Set the batch size equal to the number of input images
net.batch_size = len(args.input)
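    # From now on, the first (N) dimension of the network input shape equals the number of input images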
# ---------------------------Step 4. Loading model to the device-------------------------------------------------------
log.info('Loading the model to the plugin')
exec_net = ie.load_network(network=net, device_name=args.device)
# ---------------------------Step 5. Create infer request--------------------------------------------------------------
    # The load_network() method of the IECore class, called with a specified number of requests (default 1), returns an ExecutableNetwork
    # instance which stores infer requests. So you have already created infer requests in the previous step.
# ---------------------------Step 6. Prepare input---------------------------------------------------------------------
original_images = []
n, c, h, w = net.input_info[input_blob].input_data.shape
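    # Allocate an uninitialized N x C x H x W buffer; each image is written into its batch slot below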
input_data = np.ndarray(shape=(n, c, h, w))
for i in range(n):
image = cv2.imread(args.input[i])
original_images.append(image)
if image.shape[:-1] != (h, w):
log.warning(f'Image {args.input[i]} is resized from {image.shape[:-1]} to {(h, w)}')
image = cv2.resize(image, (w, h))
# Change data layout from HWC to CHW
image = image.transpose((2, 0, 1))
input_data[i] = image
# ---------------------------Step 7. Do inference----------------------------------------------------------------------
log.info('Starting inference in synchronous mode')
res = exec_net.infer(inputs={input_blob: input_data})
# ---------------------------Step 8. Process output--------------------------------------------------------------------
res = res[out_blob]
for i in range(n):
output_image = res[i]
# Change data layout from CHW to HWC
output_image = output_image.transpose((1, 2, 0))
# Convert BGR color order to RGB
output_image = cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB)
# Apply mean argument values
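        # (subtracted channel-wise in RGB order, matching the conversion above; the default values of 0 leave the image unchanged)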
output_image = output_image[::] - (args.mean_val_r, args.mean_val_g, args.mean_val_b)
        # Clip pixel values to the range between 0 and 255
output_image = np.clip(output_image, 0, 255)
        # Resize the output image to the original image size
if args.original_size:
h, w, _ = original_images[i].shape
output_image = cv2.resize(output_image, (w, h))
cv2.imwrite(f'out_{i}.bmp', output_image)
if os.path.exists(f'out_{i}.bmp'):
log.info(f'Image out_{i}.bmp created!')
else:
log.error(f'Image out_{i}.bmp was not created. Check your permissions.')
# ----------------------------------------------------------------------------------------------------------------------
log.info('This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n')
return 0
if __name__ == '__main__':
sys.exit(main())