Publishing 2019 R1 content

Alexey Suhov
2019-04-12 18:25:53 +03:00
parent 669bee86e5
commit 72660e9a4d
3639 changed files with 266396 additions and 63952 deletions


@@ -1,4 +1,4 @@
# Benchmark Application Demo
# Benchmark Application Python* Demo
This topic demonstrates how to run the Benchmark Application demo, which performs inference using convolutional networks.
@@ -8,6 +8,7 @@ This topic demonstrates how to run the Benchmark Application demo, which perform
Upon the start-up, the application reads command-line parameters and loads a network and images to the Inference Engine plugin. The number of infer requests and execution approach depend on a mode defined with the `-api` command-line parameter.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Specify Input Shapes** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
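What `--reverse_input_channels` amounts to can be sketched in plain Python: swapping the channel order of every pixel (BGR to RGB). This is an illustrative stand-in using nested lists, not the Model Optimizer's actual implementation.

```python
def reverse_channels(image):
    """Reverse the per-pixel channel order of an image given as
    rows -> pixels -> channels nested lists (BGR <-> RGB)."""
    return [[pixel[::-1] for pixel in row] for row in image]

# a 1x2 image: one blue and one red pixel in BGR order
bgr = [[[255, 0, 0], [0, 0, 255]]]
rgb = reverse_channels(bgr)
print(rgb)  # → [[[0, 0, 255], [255, 0, 0]]]
```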
### Synchronous API
For synchronous mode, the primary metric is latency. The application creates one infer request and executes the `Infer` method. The number of executions is defined by one of two values:
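The sync-mode measurement described here can be sketched as follows. A dummy workload stands in for a real `Infer` call, and `measure_latency` is an illustrative helper, not the demo's actual code; the demo itself reports the median of per-iteration times.

```python
import time
from statistics import median

def measure_latency(infer, n_iterations):
    """Time `infer` n_iterations times; return the median latency in ms."""
    times_ms = []
    for _ in range(n_iterations):
        t0 = time.time()
        infer()  # stand-in for an Inference Engine request.infer() call
        times_ms.append((time.time() - t0) * 1000)
    return median(times_ms)

# usage: a cheap dummy workload stands in for a real network
latency_ms = measure_latency(lambda: sum(range(10000)), n_iterations=10)
```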
@@ -30,37 +31,69 @@ The infer requests are executed asynchronously. `Wait` method is used to wait fo
## Running
Running the application with the `-h` or `--help` option:
```
python3 benchmark_app.py -h
```
The command yields the following usage message:
```
usage: benchmark_app.py [-h] -i PATH_TO_IMAGES -m PATH_TO_MODEL
[-c PATH_TO_CLDNN_CONFIG] [-l PATH_TO_EXTENSION]
[-api {sync,async}] [-d TARGET_DEVICE]
[-niter NUMBER_ITERATIONS]
[-nireq NUMBER_INFER_REQUESTS]
[-nthreads NUMBER_THREADS] [-b BATCH_SIZE]
[-pin {YES,NO}]
benchmark_app [OPTION]
Options:
-h, --help Print a usage message
-i, --path_to_images "<path>" Required. Path to a folder with images or to image files.
-m, --path_to_model "<path>" Required. Path to an .xml file with a trained model.
-pp "<path>" Path to a plugin folder.
-api, --api_type "<sync/async>" Required. Enable using sync/async API.
-d, --target_device "<device>" Specify a target device to infer on: CPU, GPU, FPGA or MYRIAD. Use "-d HETERO:<comma separated devices list>" format to specify HETERO plugin. The application looks for a suitable plugin for the specified device.
-niter, --number_iterations "<integer>" Optional. Number of iterations. If not specified, the number of iterations is calculated depending on a device.
-nireq, --number_infer_requests "<integer>" Optional. Number of infer requests (default value is 2).
-l, --path_to_extension "<absolute_path>" Required for CPU custom layers. Absolute path to a shared library with the kernels implementations.
Or
-c, --path_to_cldnn_config "<absolute_path>" Required for GPU custom kernels. Absolute path to an .xml file with the kernels description.
-b, --batch_size "<integer>" Optional. Batch size value. If not specified, the batch size value is determined from IR.
-nthreads, --number_threads "<integer>" Number of threads to use for inference on the CPU (including Hetero cases).
  -pin {YES,NO}, --infer_threads_pinning {YES,NO} Optional. Enable ("YES" is default value) or disable ("NO") CPU threads pinning for CPU-involved inference.
-h, --help Show this help message and exit.
-i PATH_TO_IMAGES, --path_to_images PATH_TO_IMAGES
Required. Path to a folder with images or to image
files.
-m PATH_TO_MODEL, --path_to_model PATH_TO_MODEL
Required. Path to an .xml file with a trained model.
-c PATH_TO_CLDNN_CONFIG, --path_to_cldnn_config PATH_TO_CLDNN_CONFIG
Optional. Required for GPU custom kernels. Absolute
path to an .xml file with the kernels description.
-l PATH_TO_EXTENSION, --path_to_extension PATH_TO_EXTENSION
                        Optional. Required for CPU custom layers. Absolute
                        path to a shared library with the kernels implementations.
-api {sync,async}, --api_type {sync,async}
                        Optional. Enable using sync/async API. Default value
                        is async.
-d TARGET_DEVICE, --target_device TARGET_DEVICE
Optional. Specify a target device to infer on: CPU,
GPU, FPGA, HDDL or MYRIAD. Use "-d HETERO:<comma
separated devices list>" format to specify HETERO
plugin. The application looks for a suitable plugin
for the specified device.
-niter NUMBER_ITERATIONS, --number_iterations NUMBER_ITERATIONS
Optional. Number of iterations. If not specified, the
number of iterations is calculated depending on a
device.
-nireq NUMBER_INFER_REQUESTS, --number_infer_requests NUMBER_INFER_REQUESTS
Optional. Number of infer requests (default value is
2).
-nthreads NUMBER_THREADS, --number_threads NUMBER_THREADS
Number of threads to use for inference on the CPU
(including Hetero cases).
-b BATCH_SIZE, --batch_size BATCH_SIZE
Optional. Batch size value. If not specified, the
batch size value is determined from IR
-pin {YES,NO}, --infer_threads_pinning {YES,NO}
Optional. Enable ("YES" is default value) or disable
                        ("NO") CPU threads pinning for CPU-involved inference.
```
Running the application with an empty list of options yields the usage message given above and an error message.
To run the demo, you can use one-layer public models or one-layer pre-trained and optimized models delivered with the package that support images as input.
To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
For example, to do inference on an image using a trained network with multiple outputs on CPU, run the following command:
> **NOTE**: Public models should first be converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
```
python3 benchmark_app.py -i <path_to_image>/inputImage.bmp -m <path_to_model>/multiple-output.xml -d CPU
```
## Demo Output
@@ -79,3 +112,5 @@ For asynchronous API, the application outputs only throughput:
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)
* [Model Optimizer](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader)


@@ -0,0 +1,18 @@
"""
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from .benchmark import main
from .utils.constants import HELP_MESSAGES


@@ -1,6 +1,5 @@
#!/usr/bin/env python
"""
Copyright (c) 2018 Intel Corporation
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -18,7 +17,7 @@
from statistics import median
from openvino.inference_engine import IENetwork, IEPlugin
from utils.benchmark_utils import *
from .utils.benchmark_utils import *
def main(args=None):
try:
@@ -198,7 +197,3 @@ def main(args=None):
except Exception as e:
logging.exception(e)
if __name__ == "__main__":
main()


@@ -0,0 +1,15 @@
"""
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""


@@ -1,5 +1,5 @@
"""
Copyright (c) 2018 Intel Corporation
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -26,7 +26,7 @@ from random import choice
from datetime import datetime
from fnmatch import fnmatch
from . constants import *
from .constants import *
logging.basicConfig(format="[ %(levelname)s ] %(message)s", level=logging.INFO, stream=sys.stdout)
logger = logging.getLogger('BenchmarkApp')
@@ -42,27 +42,29 @@ def validate_args(args):
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--path_to_images', type=str, required=True, help=HELP_MESSAGES['IMAGE_MESSAGE'])
parser.add_argument('-m', '--path_to_model', type=str, required=True, help=HELP_MESSAGES['MODEL_MESSAGE'])
parser.add_argument('-c', '--path_to_cldnn_config', type=str, required=False,
help=HELP_MESSAGES['CUSTOM_GPU_LIBRARY_MESSAGE'])
parser.add_argument('-l', '--path_to_extension', type=str, required=False, default=None,
help=HELP_MESSAGES['CUSTOM_GPU_LIBRARY_MESSAGE'])
parser.add_argument('-api', '--api_type', type=str, required=False, default='async', choices=['sync', 'async'],
help=HELP_MESSAGES['API_MESSAGE'])
parser.add_argument('-d', '--target_device', type=str, required=False, default="CPU",
help=HELP_MESSAGES['TARGET_DEVICE_MESSAGE'])
parser.add_argument('-niter', '--number_iterations', type=int, required=False, default=None,
help=HELP_MESSAGES['ITERATIONS_COUNT_MESSAGE'])
parser.add_argument('-nireq', '--number_infer_requests', type=int, required=False, default=2,
help=HELP_MESSAGES['INFER_REQUESTS_COUNT_MESSAGE'])
parser.add_argument('-nthreads', '--number_threads', type=int, required=False, default=None,
help=HELP_MESSAGES['INFER_NUM_THREADS_MESSAGE'])
parser.add_argument('-b', '--batch_size', type=int, required=False, default=None,
help=HELP_MESSAGES['BATCH_SIZE_MESSAGE'])
parser.add_argument('-pin', '--infer_threads_pinning', type=str, required=False, default='YES',
choices=['YES', 'NO'], help=HELP_MESSAGES['INFER_THREADS_PINNING_MESSAGE'])
parser = argparse.ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
args.add_argument('-h', '--help', action='help', default=argparse.SUPPRESS, help=HELP_MESSAGES["HELP"])
args.add_argument('-i', '--path_to_images', type=str, required=True, help=HELP_MESSAGES['IMAGE_MESSAGE'])
args.add_argument('-m', '--path_to_model', type=str, required=True, help=HELP_MESSAGES['MODEL_MESSAGE'])
args.add_argument('-c', '--path_to_cldnn_config', type=str, required=False,
help=HELP_MESSAGES['CUSTOM_GPU_LIBRARY_MESSAGE'])
    args.add_argument('-l', '--path_to_extension', type=str, required=False, default=None,
                      help=HELP_MESSAGES['CUSTOM_CPU_LIBRARY_MESSAGE'])
args.add_argument('-api', '--api_type', type=str, required=False, default='async', choices=['sync', 'async'],
help=HELP_MESSAGES['API_MESSAGE'])
args.add_argument('-d', '--target_device', type=str, required=False, default="CPU",
help=HELP_MESSAGES['TARGET_DEVICE_MESSAGE'])
args.add_argument('-niter', '--number_iterations', type=int, required=False, default=None,
help=HELP_MESSAGES['ITERATIONS_COUNT_MESSAGE'])
args.add_argument('-nireq', '--number_infer_requests', type=int, required=False, default=2,
help=HELP_MESSAGES['INFER_REQUESTS_COUNT_MESSAGE'])
args.add_argument('-nthreads', '--number_threads', type=int, required=False, default=None,
help=HELP_MESSAGES['INFER_NUM_THREADS_MESSAGE'])
args.add_argument('-b', '--batch_size', type=int, required=False, default=None,
help=HELP_MESSAGES['BATCH_SIZE_MESSAGE'])
args.add_argument('-pin', '--infer_threads_pinning', type=str, required=False, default='YES',
choices=['YES', 'NO'], help=HELP_MESSAGES['INFER_THREADS_PINNING_MESSAGE'])
return parser.parse_args()
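The pattern this change introduces can be shown in isolation: disable argparse's built-in help with `add_help=False`, then re-add `-h` inside a named `Options` group so every flag is listed under one heading in the usage message. A minimal sketch with one hypothetical flag:

```python
from argparse import ArgumentParser, SUPPRESS

# Build a parser whose help output groups all flags under 'Options'.
parser = ArgumentParser(add_help=False)
group = parser.add_argument_group('Options')
group.add_argument('-h', '--help', action='help', default=SUPPRESS,
                   help='Show this help message and exit.')
group.add_argument('-m', '--path_to_model', type=str, required=True,
                   help='Required. Path to an .xml file with a trained model.')

args = parser.parse_args(['-m', 'model.xml'])
print(args.path_to_model)  # → model.xml
```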


@@ -1,5 +1,5 @@
"""
Copyright (c) 2018 Intel Corporation
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -15,22 +15,24 @@
"""
HELP_MESSAGES = {
'IMAGE_MESSAGE': "Path to a folder with images or to image files.",
'MULTI_INPUT_MESSAGE': "Path to multi input file containing.",
'MODEL_MESSAGE': "Path to an .xml file with a trained model.",
'PLUGIN_PATH_MESSAGE': "Path to a plugin folder.",
'API_MESSAGE': "Enable using sync/async API. Default value is sync",
'TARGET_DEVICE_MESSAGE': "Specify a target device to infer on: CPU, GPU, FPGA or MYRIAD. "
'HELP': "Show this help message and exit.",
'IMAGE_MESSAGE': "Required. Path to a folder with images or to image files.",
    'MULTI_INPUT_MESSAGE': "Optional. Path to a file containing multiple inputs.",
'MODEL_MESSAGE': "Required. Path to an .xml file with a trained model.",
'PLUGIN_PATH_MESSAGE': "Optional. Path to a plugin folder.",
    'API_MESSAGE': "Optional. Enable using sync/async API. Default value is async",
'TARGET_DEVICE_MESSAGE': "Optional. Specify a target device to infer on: CPU, GPU, FPGA, HDDL or MYRIAD. "
"Use \"-d HETERO:<comma separated devices list>\" format to specify HETERO plugin. "
"The application looks for a suitable plugin for the specified device.",
'ITERATIONS_COUNT_MESSAGE': "Number of iterations. "
'ITERATIONS_COUNT_MESSAGE': "Optional. Number of iterations. "
"If not specified, the number of iterations is calculated depending on a device.",
'INFER_REQUESTS_COUNT_MESSAGE': "Number of infer requests (default value is 2).",
'INFER_REQUESTS_COUNT_MESSAGE': "Optional. Number of infer requests (default value is 2).",
'INFER_NUM_THREADS_MESSAGE': "Number of threads to use for inference on the CPU "
"(including Hetero cases).",
'CUSTOM_CPU_LIBRARY_MESSAGE': "Required for CPU custom layers. "
'CUSTOM_CPU_LIBRARY_MESSAGE': "Optional. Required for CPU custom layers. "
"Absolute path to a shared library with the kernels implementations.",
'CUSTOM_GPU_LIBRARY_MESSAGE': "Required for GPU custom kernels. Absolute path to an .xml file with the kernels description.",
'CUSTOM_GPU_LIBRARY_MESSAGE': "Optional. Required for GPU custom kernels. Absolute path to an .xml file with the "
"kernels description.",
'BATCH_SIZE_MESSAGE': "Optional. Batch size value. If not specified, the batch size value is determined from IR",
    'INFER_THREADS_PINNING_MESSAGE': "Optional. Enable (\"YES\" is default value) or disable (\"NO\") "
                                     "CPU threads pinning for CPU-involved inference."


@@ -0,0 +1,37 @@
import benchmark
from argparse import ArgumentParser, SUPPRESS
def parse_args():
parser = ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
args.add_argument('-h', '--help', action='help', default=SUPPRESS, help=benchmark.HELP_MESSAGES["HELP"])
args.add_argument('-i', '--path_to_images', type=str, required=True,
help=benchmark.HELP_MESSAGES['IMAGE_MESSAGE'])
args.add_argument('-m', '--path_to_model', type=str, required=True,
help=benchmark.HELP_MESSAGES['MODEL_MESSAGE'])
args.add_argument('-c', '--path_to_cldnn_config', type=str, required=False,
help=benchmark.HELP_MESSAGES['CUSTOM_GPU_LIBRARY_MESSAGE'])
    args.add_argument('-l', '--path_to_extension', type=str, required=False, default=None,
                      help=benchmark.HELP_MESSAGES['CUSTOM_CPU_LIBRARY_MESSAGE'])
args.add_argument('-api', '--api_type', type=str, required=False, default='async', choices=['sync', 'async'],
help=benchmark.HELP_MESSAGES['API_MESSAGE'])
args.add_argument('-d', '--target_device', type=str, required=False, default="CPU",
help=benchmark.HELP_MESSAGES['TARGET_DEVICE_MESSAGE'])
args.add_argument('-niter', '--number_iterations', type=int, required=False, default=None,
help=benchmark.HELP_MESSAGES['ITERATIONS_COUNT_MESSAGE'])
args.add_argument('-nireq', '--number_infer_requests', type=int, required=False, default=2,
help=benchmark.HELP_MESSAGES['INFER_REQUESTS_COUNT_MESSAGE'])
args.add_argument('-nthreads', '--number_threads', type=int, required=False, default=None,
help=benchmark.HELP_MESSAGES['INFER_NUM_THREADS_MESSAGE'])
args.add_argument('-b', '--batch_size', type=int, required=False, default=None,
help=benchmark.HELP_MESSAGES['BATCH_SIZE_MESSAGE'])
args.add_argument('-pin', '--infer_threads_pinning', type=str, required=False, default='YES',
choices=['YES', 'NO'], help=benchmark.HELP_MESSAGES['INFER_THREADS_PINNING_MESSAGE'])
return parser.parse_args()
if __name__ == "__main__":
args = parse_args()
benchmark.main(args)


@@ -0,0 +1,79 @@
# Image Classification Python* Sample
This topic demonstrates how to run the Image Classification sample application, which performs
inference using image classification networks such as AlexNet and GoogLeNet.
### How It Works
Upon the start-up, the sample application reads command line parameters and loads a network and an image to the Inference
Engine plugin. When inference is done, the application creates an
output image and outputs data to the standard output stream.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Specify Input Shapes** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
## Running
Running the application with the `-h` option yields the usage message:
```
python3 classification_sample.py -h
```
The command yields the following usage message:
```
usage: classification_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION] [-pp PLUGIN_DIR]
[-d DEVICE] [--labels LABELS] [-nt NUMBER_TOP]
[-ni NUMBER_ITER] [-pc]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml file with a trained model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
                        Required. Path to a folder with images or path to
                        image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. Required for CPU custom layers. MKLDNN (CPU)-targeted custom layers.
Absolute path to a shared library with the kernels
implementations.
-pp PLUGIN_DIR, --plugin_dir PLUGIN_DIR
Optional. Path to a plugin folder
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, FPGA, HDDL or MYRIAD is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU
--labels LABELS Optional. Path to a labels mapping file
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results
-ni NUMBER_ITER, --number_iter NUMBER_ITER
Optional. Number of inference iterations
-pc, --perf_counts Optional. Report performance counters
```
Running the application with an empty list of options yields the usage message given above.
To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download the pre-trained models with the OpenVINO [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
For example, to perform inference of an AlexNet model (previously converted to the Inference Engine format) on CPU, use the following command:
```
python3 classification_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml
```
### Sample Output
By default, the application outputs the top-10 inference results.
Add the `-nt` option to the previous command to modify the number of top output results.
For example, to get the top-5 results on GPU, run the following command:
```
python3 classification_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d GPU
```
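The top-N selection controlled by `-nt` boils down to ranking class probabilities and keeping the N best (the real sample uses `np.argsort`); a pure-Python sketch:

```python
def top_n(probs, n):
    """Return (class_id, probability) pairs for the n highest scores."""
    ranked = sorted(enumerate(probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:n]

# four-class toy output: class 1 is the most probable
print(top_n([0.05, 0.7, 0.1, 0.15], n=2))  # → [(1, 0.7), (3, 0.15)]
```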
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)
* [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader)


@@ -1,6 +1,6 @@
#!/usr/bin/env python
"""
Copyright (c) 2018 Intel Corporation
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,7 +17,7 @@
from __future__ import print_function
import sys
import os
from argparse import ArgumentParser
from argparse import ArgumentParser, SUPPRESS
import cv2
import numpy as np
import logging as log
@@ -26,22 +26,29 @@ from openvino.inference_engine import IENetwork, IEPlugin
def build_argparser():
parser = ArgumentParser()
parser.add_argument("-m", "--model", help="Path to an .xml file with a trained model.", required=True, type=str)
parser.add_argument("-i", "--input", help="Path to a folder with images or path to an image files", required=True,
type=str, nargs="+")
parser.add_argument("-l", "--cpu_extension",
help="MKLDNN (CPU)-targeted custom layers.Absolute path to a shared library with the kernels "
"impl.", type=str, default=None)
parser.add_argument("-pp", "--plugin_dir", help="Path to a plugin folder", type=str, default=None)
parser.add_argument("-d", "--device",
help="Specify the target device to infer on; CPU, GPU, FPGA or MYRIAD is acceptable. Sample "
"will look for a suitable plugin for device specified (CPU by default)", default="CPU",
type=str)
parser.add_argument("--labels", help="Labels mapping file", default=None, type=str)
parser.add_argument("-nt", "--number_top", help="Number of top results", default=10, type=int)
parser.add_argument("-ni", "--number_iter", help="Number of inference iterations", default=1, type=int)
parser.add_argument("-pc", "--perf_counts", help="Report performance counters", default=False, action="store_true")
parser = ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.", required=True,
type=str)
    args.add_argument("-i", "--input", help="Required. Path to a folder with images or path to image files",
                      required=True, type=str, nargs="+")
args.add_argument("-l", "--cpu_extension",
help="Optional. Required for CPU custom layers. "
"MKLDNN (CPU)-targeted custom layers. Absolute path to a shared library with the"
" kernels implementations.", type=str, default=None)
args.add_argument("-pp", "--plugin_dir", help="Optional. Path to a plugin folder", type=str, default=None)
args.add_argument("-d", "--device",
help="Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL, MYRIAD or HETERO: is "
"acceptable. The sample will look for a suitable plugin for device specified. Default "
"value is CPU",
default="CPU", type=str)
args.add_argument("--labels", help="Optional. Path to a labels mapping file", default=None, type=str)
args.add_argument("-nt", "--number_top", help="Optional. Number of top results", default=10, type=int)
args.add_argument("-ni", "--number_iter", help="Optional. Number of inference iterations", default=1, type=int)
args.add_argument("-pc", "--perf_counts", help="Optional. Report performance counters", default=False,
action="store_true")
return parser
@@ -93,7 +100,6 @@ def main():
# Loading model to the plugin
log.info("Loading model to the plugin")
exec_net = plugin.load(network=net)
del net
# Start sync inference
log.info("Starting inference ({} iterations)".format(args.number_iter))
@@ -101,7 +107,7 @@ def main():
for i in range(args.number_iter):
t0 = time()
res = exec_net.infer(inputs={input_blob: images})
infer_time.append((time()-t0)*1000)
infer_time.append((time() - t0) * 1000)
log.info("Average running time of one iteration: {} ms".format(np.average(np.asarray(infer_time))))
if args.perf_counts:
perf_counts = exec_net.requests[0].get_perf_counts()
@@ -120,18 +126,25 @@ def main():
labels_map = [x.split(sep=' ', maxsplit=1)[-1].strip() for x in f]
else:
labels_map = None
classid_str = "classid"
probability_str = "probability"
for i, probs in enumerate(res):
probs = np.squeeze(probs)
top_ind = np.argsort(probs)[-args.number_top:][::-1]
print("Image {}\n".format(args.input[i]))
print(classid_str, probability_str)
print("{} {}".format('-' * len(classid_str), '-' * len(probability_str)))
for id in top_ind:
det_label = labels_map[id] if labels_map else "#{}".format(id)
print("{:.7f} label {}".format(probs[id], det_label))
det_label = labels_map[id] if labels_map else "{}".format(id)
label_length = len(det_label)
space_num_before = (len(classid_str) - label_length) // 2
space_num_after = len(classid_str) - (space_num_before + label_length) + 2
space_num_before_prob = (len(probability_str) - len(str(probs[id]))) // 2
print("{}{}{}{}{:.7f}".format(' ' * space_num_before, det_label,
' ' * space_num_after, ' ' * space_num_before_prob,
probs[id]))
print("\n")
del exec_net
del plugin
if __name__ == '__main__':
sys.exit(main() or 0)


@@ -0,0 +1,89 @@
# Image Classification Python* Sample Async
This sample demonstrates how to build and execute inference in pipelined mode on example of classifications networks.
The pipelined mode can increase the throughput of processing pictures. The latency of one inference is the same as for synchronous execution.
<br>
The throughput increases for the following reasons:
* Some plugins have heterogeneity inside themselves: data transferring, execution on a remote device, pre-processing and post-processing on the host.
* Using an explicit heterogeneous plugin executes different parts of the network on different devices, for example `HETERO:CPU,GPU`.
When two or more devices process one image, creating several infer requests and starting asynchronous inference allows for using the devices in the most efficient way.
If two devices are involved in execution, the most optimal value for the `-nireq` option is 2.
To process infer requests more efficiently, Classification Sample Async uses a round-robin algorithm. It starts execution of the current infer request and switches to waiting for results of the previous one. After the wait completes, it swaps the infer requests and repeats the procedure.
Another requirement for good throughput is a large number of iterations. Only with a large number of iterations can you emulate real application behavior and get reliable performance numbers.
Batch mode is independent of pipelined mode. Pipelined mode works efficiently with any batch size.
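The throughput argument above can be illustrated with simple timing arithmetic. Assume each image needs two stages of work (say, host-side processing and device execution) that can overlap across consecutive infer requests; the stage durations and function names here are illustrative only.

```python
def sequential_time(n_images, stage_a, stage_b):
    """Total time when every image runs both stages back to back."""
    return n_images * (stage_a + stage_b)

def pipelined_time(n_images, stage_a, stage_b):
    """Total time when the two stages overlap across images:
    the first image pays full latency, then the slower stage dominates."""
    return stage_a + stage_b + (n_images - 1) * max(stage_a, stage_b)

n = 100
print(sequential_time(n, 5, 10))  # → 1500
print(pipelined_time(n, 5, 10))   # → 1005
```

Per-image latency is unchanged (15 time units), but total wall time drops, which is exactly the throughput-without-latency-gain trade-off described above.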
### How It Works
Upon the start-up, the sample application reads command line parameters and loads a network and an image to the Inference
Engine plugin.
Then the application creates the number of infer requests specified by the `-nireq` parameter and loads images for inference.
Then in a loop it starts inference for the current infer request and switches to waiting for the previous one. When results are ready, it swaps infer requests.
When inference is done, the application outputs data to the standard output stream.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Specify Input Shapes** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
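The current/previous swap described above can be sketched with stubbed requests (the stubs stand in for Inference Engine infer requests and their `async_infer`/`wait` calls; this is not the sample's actual code):

```python
class StubRequest:
    """Minimal stand-in for an Inference Engine infer request."""
    def __init__(self, name):
        self.name = name
    def async_infer(self, image):  # stand-in for request.async_infer(...)
        pass
    def wait(self):                # stand-in for request.wait(-1)
        pass

def pipelined_run(images):
    current, previous = StubRequest("req0"), StubRequest("req1")
    order = []
    for image in images:
        current.async_infer(image)  # kick off the new inference
        previous.wait()             # overlap: wait on the older request
        order.append(current.name)
        current, previous = previous, current  # swap roles
    previous.wait()                 # drain the last in-flight request
    return order

print(pipelined_run(["img0", "img1", "img2"]))  # → ['req0', 'req1', 'req0']
```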
## Running
Running the application with the `-h` option yields the following usage message:
```
python3 classification_sample_async.py -h
```
The command yields the following usage message:
```
usage: classification_sample_async.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION] [-pp PLUGIN_DIR]
[-d DEVICE] [--labels LABELS]
[-nt NUMBER_TOP] [-ni NUMBER_ITER] [-pc]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml file with a trained model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
                        Required. Path to a folder with images or path to
                        image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. Required for CPU custom layers. Absolute
path to a shared library with the kernels
implementations.
-pp PLUGIN_DIR, --plugin_dir PLUGIN_DIR
Optional. Path to a plugin folder
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, FPGA, HDDL or MYRIAD is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU
--labels LABELS Optional. Labels mapping file
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results
-ni NUMBER_ITER, --number_iter NUMBER_ITER
Optional. Number of inference iterations
-pc, --perf_counts Optional. Report performance counters
```
Running the application with an empty list of options yields the usage message given above and an error message.
To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download the pre-trained models with the OpenVINO [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
You can do inference on an image using a trained AlexNet network on FPGA with fallback to CPU using the following command:
```
python3 classification_sample_async.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d HETERO:FPGA,CPU -nireq 2 -ni 200
```
### Sample Output
By default, the application outputs top-10 inference results for each infer request.
It also provides the throughput value measured in frames per second.
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)


@@ -1,6 +1,6 @@
#!/usr/bin/env python
"""
Copyright (c) 2018 Intel Corporation
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,7 +17,7 @@
from __future__ import print_function
import sys
import os
from argparse import ArgumentParser
from argparse import ArgumentParser, SUPPRESS
import cv2
import numpy as np
import logging as log
@@ -26,22 +26,26 @@ from openvino.inference_engine import IENetwork, IEPlugin
def build_argparser():
parser = ArgumentParser()
parser.add_argument("-m", "--model", help="Path to an .xml file with a trained model.", required=True, type=str)
parser.add_argument("-i", "--input", help="Path to a folder with images or path to an image files", required=True,
type=str, nargs="+")
parser.add_argument("-l", "--cpu_extension",
help="MKLDNN (CPU)-targeted custom layers.Absolute path to a shared library with the kernels "
"impl.", type=str, default=None)
parser.add_argument("-pp", "--plugin_dir", help="Path to a plugin folder", type=str, default=None)
parser.add_argument("-d", "--device",
help="Specify the target device to infer on; CPU, GPU, FPGA or MYRIAD is acceptable. Sample "
"will look for a suitable plugin for device specified (CPU by default)", default="CPU",
type=str)
parser.add_argument("--labels", help="Labels mapping file", default=None, type=str)
parser.add_argument("-nt", "--number_top", help="Number of top results", default=10, type=int)
parser.add_argument("-ni", "--number_iter", help="Number of inference iterations", default=1, type=int)
parser.add_argument("-pc", "--perf_counts", help="Report performance counters", default=False, action="store_true")
parser = ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.",
required=True, type=str)
args.add_argument("-i", "--input", help="Required. Path to a folder with images or path to an image files",
required=True, type=str, nargs="+")
args.add_argument("-l", "--cpu_extension",
help="Optional. Required for CPU custom layers. Absolute path to a shared library with the"
" kernels implementations.", type=str, default=None)
args.add_argument("-pp", "--plugin_dir", help="Optional. Path to a plugin folder", type=str, default=None)
args.add_argument("-d", "--device",
help="Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL or MYRIAD is "
"acceptable. The sample will look for a suitable plugin for device specified. Default value is CPU",
default="CPU", type=str)
args.add_argument("--labels", help="Optional. Labels mapping file", default=None, type=str)
args.add_argument("-nt", "--number_top", help="Optional. Number of top results", default=10, type=int)
args.add_argument("-ni", "--number_iter", help="Optional. Number of inference iterations", default=1, type=int)
args.add_argument("-pc", "--perf_counts", help="Optional. Report performance counters",
default=False, action="store_true")
return parser
@@ -92,7 +96,6 @@ def main():
# Loading model to the plugin
log.info("Loading model to the plugin")
exec_net = plugin.load(network=net)
del net
# Start sync inference
log.info("Starting inference ({} iterations)".format(args.number_iter))
@@ -119,18 +122,25 @@ def main():
labels_map = [x.split(sep=' ', maxsplit=1)[-1].strip() for x in f]
else:
labels_map = None
classid_str = "classid"
probability_str = "probability"
for i, probs in enumerate(res):
probs = np.squeeze(probs)
top_ind = np.argsort(probs)[-args.number_top:][::-1]
print("Image {}\n".format(args.input[i]))
print(classid_str, probability_str)
print("{} {}".format('-' * len(classid_str), '-' * len(probability_str)))
for id in top_ind:
det_label = labels_map[id] if labels_map else "#{}".format(id)
print("{:.7f} {}".format(probs[id], det_label))
det_label = labels_map[id] if labels_map else "{}".format(id)
label_length = len(det_label)
space_num_before = (7 - label_length) // 2
space_num_after = 7 - (space_num_before + label_length) + 2
space_num_before_prob = (11 - len(str(probs[id]))) // 2
print("{}{}{}{}{:.7f}".format(' ' * space_num_before, det_label,
' ' * space_num_after, ' ' * space_num_before_prob,
probs[id]))
print("\n")
del exec_net
del plugin
if __name__ == '__main__':
sys.exit(main() or 0)

View File

@@ -1,49 +0,0 @@
# This README demonstrates the use of all the GreenGrass samples
# GreenGrass Classification Sample
This topic demonstrates how to build and run the GreenGrass Image Classification sample application, which does inference using image classification networks like AlexNet and GoogLeNet on Intel® Processors, Intel® HD Graphics and Intel® FPGA.
## Running
1. Modify the "accelerator" parameter inside the sample to deploy the sample on any accelerator option of your choice (CPU/GPU/FPGA):
For CPU, please specify "CPU"
For GPU, please specify "GPU"
For FPGA, please specify "HETERO:FPGA,CPU"
2. Enable the option(s) for how the output is displayed/consumed
3. Now follow the instructions listed in the Greengrass-FaaS-User-Guide.pdf to create the Lambda function and deploy it on the edge device using Greengrass
### Outputs
The application publishes top-10 results to AWS IoT Cloud every second by default. For other output consumption options, please refer to the Greengrass-FaaS-User-Guide.pdf
### How it works
Upon deployment, the sample application loads a network and an image to the Inference Engine plugin. When inference is done, the application publishes the results to AWS IoT Cloud
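The per-second payload published to AWS IoT Cloud is plain JSON. The following is a minimal sketch of how such a message can be assembled; the class labels and probabilities are made-up values, not real inference results:

```python
import datetime
import json
from collections import OrderedDict

# Made-up top-N classification results: (label, probability) pairs
top_results = [("tabby cat", 0.87), ("tiger cat", 0.09), ("lynx", 0.02)]

res_json = OrderedDict()
res_json["Candidates"] = OrderedDict()
for label, prob in top_results:
    res_json["Candidates"][label] = round(prob, 2)
res_json["timestamp"] = datetime.datetime.now().isoformat()

# The serialized string is what would be passed as the MQTT payload
payload = json.dumps(res_json)
print(payload)
```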
=====================================================================================================
# GreenGrass Object Detection Sample SSD
This topic demonstrates how to run the GreenGrass Object Detection SSD sample application, which does inference using object detection networks like Squeezenet-SSD on Intel® Processors, Intel® HD Graphics and Intel® FPGA.
## Running
1. Modify the "accelerator" parameter inside the sample to deploy the sample on any accelerator option of your choice (CPU/GPU/FPGA):
For CPU, please specify "CPU"
For GPU, please specify "GPU"
For FPGA, please specify "HETERO:FPGA,CPU"
2. Enable the option(s) for how the output is displayed/consumed
3. Set the variable is_async_mode to 'True' for asynchronous execution and 'False' for synchronous execution
4. Now follow the instructions listed in the Greengrass-FaaS-User-Guide.pdf to create the Lambda function and deploy it on the edge device using Greengrass
### Outputs
The application publishes detection outputs, such as the class label, class confidence, and bounding box coordinates, to AWS IoT Cloud every second. For other output consumption options, please refer to the Greengrass-FaaS-User-Guide.pdf
### How it works
Upon deployment, the sample application loads a network and an image to the Inference Engine plugin. When inference is done, the application publishes the results to AWS IoT Cloud
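Each row of the SSD detection output has the layout `[image_id, label, confidence, xmin, ymin, xmax, ymax]`, with coordinates normalized to [0, 1]. A minimal sketch of scaling such rows to pixel boxes with the 0.5 confidence threshold the sample uses follows; the detection rows and frame size are made-up values:

```python
import numpy as np

# Dummy detection rows in the SSD output layout:
# [image_id, class_label, confidence, xmin, ymin, xmax, ymax]
detections = np.array([
    [0, 7, 0.92, 0.10, 0.20, 0.55, 0.80],
    [0, 3, 0.30, 0.40, 0.40, 0.60, 0.60],  # below threshold, skipped
])
frame_w, frame_h = 640, 480

boxes = []
for obj in detections:
    if obj[2] > 0.5:  # confidence threshold used by the sample
        boxes.append((int(obj[1]), round(float(obj[2]), 2),
                      int(obj[3] * frame_w), int(obj[4] * frame_h),
                      int(obj[5] * frame_w), int(obj[6] * frame_h)))
print(boxes)  # → [(7, 0.92, 64, 96, 352, 384)]
```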

View File

@@ -1,180 +0,0 @@
"""
BSD 3-clause "New" or "Revised" license
Copyright (C) 2018 Intel Corporation.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import sys
import os
import cv2
import numpy as np
import greengrasssdk
import boto3
import timeit
import datetime
import json
from collections import OrderedDict
from openvino.inference_engine import IENetwork, IEPlugin
# Specify the delta in seconds between each report
reporting_interval = 1.0
# Parameters for IoT Cloud
enable_iot_cloud_output = True
# Parameters for Kinesis
enable_kinesis_output = False
kinesis_stream_name = ""
kinesis_partition_key = ""
kinesis_region = ""
# Parameters for S3
enable_s3_jpeg_output = False
s3_bucket_name = ""
# Parameters for jpeg output on local disk
enable_local_jpeg_output = False
# Create a Greengrass Core SDK client for publishing messages to AWS Cloud
client = greengrasssdk.client("iot-data")
# Create an S3 client for uploading files to S3
if enable_s3_jpeg_output:
s3_client = boto3.client("s3")
# Create a Kinesis client for putting records to streams
if enable_kinesis_output:
kinesis_client = boto3.client("kinesis", "us-west-2")
# Read environment variables set by Lambda function configuration
PARAM_MODEL_XML = os.environ.get("PARAM_MODEL_XML")
PARAM_INPUT_SOURCE = os.environ.get("PARAM_INPUT_SOURCE")
PARAM_DEVICE = os.environ.get("PARAM_DEVICE")
PARAM_OUTPUT_DIRECTORY = os.environ.get("PARAM_OUTPUT_DIRECTORY")
PARAM_CPU_EXTENSION_PATH = os.environ.get("PARAM_CPU_EXTENSION_PATH")
PARAM_LABELMAP_FILE = os.environ.get("PARAM_LABELMAP_FILE")
PARAM_TOPIC_NAME = os.environ.get("PARAM_TOPIC_NAME", "intel/faas/classification")
PARAM_NUM_TOP_RESULTS = int(os.environ.get("PARAM_NUM_TOP_RESULTS", "10"))
def report(res_json, frame):
now = datetime.datetime.now()
date_prefix = str(now).replace(" ", "_")
if enable_iot_cloud_output:
data = json.dumps(res_json)
client.publish(topic=PARAM_TOPIC_NAME, payload=data)
if enable_kinesis_output:
kinesis_client.put_record(StreamName=kinesis_stream_name, Data=json.dumps(res_json),
PartitionKey=kinesis_partition_key)
if enable_s3_jpeg_output:
temp_image = os.path.join(PARAM_OUTPUT_DIRECTORY, "inference_result.jpeg")
cv2.imwrite(temp_image, frame)
with open(temp_image, "rb") as file:  # read the JPEG as binary for the S3 upload
image_contents = file.read()
s3_client.put_object(Body=image_contents, Bucket=s3_bucket_name, Key=date_prefix + ".jpeg")
if enable_local_jpeg_output:
cv2.imwrite(os.path.join(PARAM_OUTPUT_DIRECTORY, date_prefix + ".jpeg"), frame)
def greengrass_classification_sample_run():
client.publish(topic=PARAM_TOPIC_NAME, payload="OpenVINO: Initializing...")
model_bin = os.path.splitext(PARAM_MODEL_XML)[0] + ".bin"
# Plugin initialization for specified device and load extensions library if specified
plugin = IEPlugin(device=PARAM_DEVICE, plugin_dirs="")
if "CPU" in PARAM_DEVICE:
plugin.add_cpu_extension(PARAM_CPU_EXTENSION_PATH)
# Read IR
net = IENetwork(model=PARAM_MODEL_XML, weights=model_bin)
assert len(net.inputs.keys()) == 1, "Sample supports only single input topologies"
assert len(net.outputs) == 1, "Sample supports only single output topologies"
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
# Read and pre-process input image
n, c, h, w = net.inputs[input_blob].shape
cap = cv2.VideoCapture(PARAM_INPUT_SOURCE)
exec_net = plugin.load(network=net)
del net
client.publish(topic=PARAM_TOPIC_NAME, payload="Starting inference on %s" % PARAM_INPUT_SOURCE)
start_time = timeit.default_timer()
inf_seconds = 0.0
frame_count = 0
res_json = []
labeldata = None
if PARAM_LABELMAP_FILE is not None:
with open(PARAM_LABELMAP_FILE) as labelmap_file:
labeldata = json.load(labelmap_file)
while (cap.isOpened()):
ret, frame = cap.read()
if not ret:
break
frameid = cap.get(cv2.CAP_PROP_POS_FRAMES)
initial_w = cap.get(3)
initial_h = cap.get(4)
in_frame = cv2.resize(frame, (w, h))
in_frame = in_frame.transpose((2, 0, 1)) # Change data layout from HWC to CHW
in_frame = in_frame.reshape((n, c, h, w))
# Start synchronous inference
inf_start_time = timeit.default_timer()
res = exec_net.infer(inputs={input_blob: in_frame})
inf_seconds += timeit.default_timer() - inf_start_time
top_ind = np.argsort(res[out_blob], axis=1)[0, -PARAM_NUM_TOP_RESULTS:][::-1]
# Parse detection results of the current request
res_json = OrderedDict()
res_json["Candidates"] = OrderedDict()
frame_timestamp = datetime.datetime.now()
for i in top_ind:
classlabel = labeldata[str(i)] if labeldata else str(i)
res_json["Candidates"][classlabel] = round(res[out_blob][0, i], 2)
frame_count += 1
# Measure elapsed seconds since the last report
seconds_elapsed = timeit.default_timer() - start_time
if seconds_elapsed >= reporting_interval:
res_json["timestamp"] = frame_timestamp.isoformat()
res_json["frame_id"] = int(frameid)
res_json["inference_fps"] = frame_count / inf_seconds
start_time = timeit.default_timer()
report(res_json, frame)
frame_count = 0
inf_seconds = 0.0
client.publish(topic=PARAM_TOPIC_NAME, payload="End of the input, exiting...")
del exec_net
del plugin
greengrass_classification_sample_run()
def function_handler(event, context):
client.publish(topic=PARAM_TOPIC_NAME, payload='HANDLER_CALLED!')
return

View File

@@ -1,184 +0,0 @@
"""
BSD 3-clause "New" or "Revised" license
Copyright (C) 2018 Intel Corporation.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import sys
import os
import cv2
import numpy as np
import greengrasssdk
import boto3
import timeit
import datetime
import json
from collections import OrderedDict
from openvino.inference_engine import IENetwork, IEPlugin
# Specify the delta in seconds between each report
reporting_interval = 1.0
# Parameters for IoT Cloud
enable_iot_cloud_output = True
# Parameters for Kinesis
enable_kinesis_output = False
kinesis_stream_name = ""
kinesis_partition_key = ""
kinesis_region = ""
# Parameters for S3
enable_s3_jpeg_output = False
s3_bucket_name = "ssd_test"
# Parameters for jpeg output on local disk
enable_local_jpeg_output = False
# Create a Greengrass Core SDK client for publishing messages to AWS Cloud
client = greengrasssdk.client("iot-data")
# Create an S3 client for uploading files to S3
if enable_s3_jpeg_output:
s3_client = boto3.client("s3")
# Create a Kinesis client for putting records to streams
if enable_kinesis_output:
kinesis_client = boto3.client("kinesis", "us-west-2")
# Read environment variables set by Lambda function configuration
PARAM_MODEL_XML = os.environ.get("PARAM_MODEL_XML")
PARAM_INPUT_SOURCE = os.environ.get("PARAM_INPUT_SOURCE")
PARAM_DEVICE = os.environ.get("PARAM_DEVICE")
PARAM_OUTPUT_DIRECTORY = os.environ.get("PARAM_OUTPUT_DIRECTORY")
PARAM_CPU_EXTENSION_PATH = os.environ.get("PARAM_CPU_EXTENSION_PATH")
PARAM_LABELMAP_FILE = os.environ.get("PARAM_LABELMAP_FILE")
PARAM_TOPIC_NAME = os.environ.get("PARAM_TOPIC_NAME", "intel/faas/ssd")
def report(res_json, frame):
now = datetime.datetime.now()
date_prefix = str(now).replace(" ", "_")
if enable_iot_cloud_output:
data = json.dumps(res_json)
client.publish(topic=PARAM_TOPIC_NAME, payload=data)
if enable_kinesis_output:
kinesis_client.put_record(StreamName=kinesis_stream_name, Data=json.dumps(res_json),
PartitionKey=kinesis_partition_key)
if enable_s3_jpeg_output:
temp_image = os.path.join(PARAM_OUTPUT_DIRECTORY, "inference_result.jpeg")
cv2.imwrite(temp_image, frame)
with open(temp_image, "rb") as file:  # read the JPEG as binary for the S3 upload
image_contents = file.read()
s3_client.put_object(Body=image_contents, Bucket=s3_bucket_name, Key=date_prefix + ".jpeg")
if enable_local_jpeg_output:
cv2.imwrite(os.path.join(PARAM_OUTPUT_DIRECTORY, date_prefix + ".jpeg"), frame)
def greengrass_object_detection_sample_ssd_run():
client.publish(topic=PARAM_TOPIC_NAME, payload="OpenVINO: Initializing...")
model_bin = os.path.splitext(PARAM_MODEL_XML)[0] + ".bin"
# Plugin initialization for specified device and load extensions library if specified
plugin = IEPlugin(device=PARAM_DEVICE, plugin_dirs="")
if "CPU" in PARAM_DEVICE:
plugin.add_cpu_extension(PARAM_CPU_EXTENSION_PATH)
# Read IR
net = IENetwork(model=PARAM_MODEL_XML, weights=model_bin)
assert len(net.inputs.keys()) == 1, "Sample supports only single input topologies"
assert len(net.outputs) == 1, "Sample supports only single output topologies"
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
# Read and pre-process input image
n, c, h, w = net.inputs[input_blob].shape
cap = cv2.VideoCapture(PARAM_INPUT_SOURCE)
exec_net = plugin.load(network=net)
del net
client.publish(topic=PARAM_TOPIC_NAME, payload="Starting inference on %s" % PARAM_INPUT_SOURCE)
start_time = timeit.default_timer()
inf_seconds = 0.0
frame_count = 0
labeldata = None
if PARAM_LABELMAP_FILE is not None:
with open(PARAM_LABELMAP_FILE) as labelmap_file:
labeldata = json.load(labelmap_file)
while (cap.isOpened()):
ret, frame = cap.read()
if not ret:
break
frameid = cap.get(cv2.CAP_PROP_POS_FRAMES)
initial_w = cap.get(3)
initial_h = cap.get(4)
in_frame = cv2.resize(frame, (w, h))
in_frame = in_frame.transpose((2, 0, 1)) # Change data layout from HWC to CHW
in_frame = in_frame.reshape((n, c, h, w))
# Start synchronous inference
inf_start_time = timeit.default_timer()
res = exec_net.infer(inputs={input_blob: in_frame})
inf_seconds += timeit.default_timer() - inf_start_time
# Parse detection results of the current request
res_json = OrderedDict()
frame_timestamp = datetime.datetime.now()
object_id = 0
for obj in res[out_blob][0][0]:
if obj[2] > 0.5:
xmin = int(obj[3] * initial_w)
ymin = int(obj[4] * initial_h)
xmax = int(obj[5] * initial_w)
ymax = int(obj[6] * initial_h)
cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (255, 165, 20), 4)
obj_id = "Object" + str(object_id)
classlabel = labeldata[str(int(obj[1]))] if labeldata else ""
res_json[obj_id] = {"label": int(obj[1]), "class": classlabel, "confidence": round(obj[2], 2), "xmin": round(
obj[3], 2), "ymin": round(obj[4], 2), "xmax": round(obj[5], 2), "ymax": round(obj[6], 2)}
object_id += 1
frame_count += 1
# Measure elapsed seconds since the last report
seconds_elapsed = timeit.default_timer() - start_time
if seconds_elapsed >= reporting_interval:
res_json["timestamp"] = frame_timestamp.isoformat()
res_json["frame_id"] = int(frameid)
res_json["inference_fps"] = frame_count / inf_seconds
start_time = timeit.default_timer()
report(res_json, frame)
frame_count = 0
inf_seconds = 0.0
client.publish(topic=PARAM_TOPIC_NAME, payload="End of the input, exiting...")
del exec_net
del plugin
greengrass_object_detection_sample_ssd_run()
def function_handler(event, context):
client.publish(topic=PARAM_TOPIC_NAME, payload='HANDLER_CALLED!')
return

View File

@@ -1,463 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook demonstrates the worklflow of a simple image classification task.\n",
"We will go through all the pipeline steps: downloading the model, generating the Intermediate Representation (IR) using the Model Optimizer, running inference in Python, and parsing and interpretating the output results.\n",
"\n",
"To demonstrate the scenario, we will use the pre-trained SquezeNet V1.1 Caffe\\* model. SqueezeNet is a pretty accurate and at the same time lightweight network. For more information about the model, please visit <a href=\"https://github.com/DeepScale/SqueezeNet/\">GitHub</a> page and refer to original <a href=\"https://arxiv.org/abs/1602.07360\">SqueezeNet paper</a>.\n",
"\n",
"Follow the steps to perform image classification with the SquezeNet V1.1 model:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**1. Download the model files:** "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"echo \"Downloading deploy.protxt ...\"\n",
"if [ -f deploy.prototxt ]; then \n",
" echo \"deploy.protxt file already exists. Downloading skipped\"\n",
"else\n",
" wget https://raw.githubusercontent.com/DeepScale/SqueezeNet/a47b6f13d30985279789d08053d37013d67d131b/SqueezeNet_v1.1/deploy.prototxt -q\n",
" echo \"Finished!\"\n",
"fi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"! echo \"Downloading squeezenet_v1.1.caffemodel ...\"\n",
"if [ -f squeezenet_v1.1.caffemodel ]; then\n",
" echo \"squeezenet_v1.1.caffemodel file already exists. Download skipped\"\n",
"else\n",
" wget https://github.com/DeepScale/SqueezeNet/raw/a47b6f13d30985279789d08053d37013d67d131b/SqueezeNet_v1.1/squeezenet_v1.1.caffemodel -q\n",
" echo \"Finished!\"\n",
"fi"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Run the following command to see the model files:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls -la"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* `deploy.prototxt` contains the network topology description in text format. \n",
"* `squeezenet_v1.1.caffemodel` contains the weights for all network layers"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**2. Optimize and convert the model from intial Caffe representation to the IR representation, which is required for scoring the model using Inference Engine. To convert and optimize the model, use the Model Optimizer command line tool.**\n",
"\n",
"To locate Model Optimizer scripts, specify the path to the Model Optimizer root directory in the `MO_ROOT` variable in the cell bellow and then run it (If you use the installed OpenVINO&trade; package, you can find the Model Optimizer in `<INSTALLATION_ROOT_DIR>/deployment_tools/model_optimizer`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"MO_ROOT=/localdisk/repos/model-optimizer-tensorflow/\n",
"echo $MO_ROOT\n",
"python3 $MO_ROOT/mo.py --input_model squeezenet_v1.1.caffemodel --input_proto deploy.prototxt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**3. Now, you have the SqueezeNet model converted to the IR, and you can infer it.**\n",
"\n",
"a. First, import required modules:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from openvino.inference_engine import IENetwork, IEPlugin\n",
"import numpy as np\n",
"import cv2\n",
"import logging as log\n",
"from time import time\n",
"import sys\n",
"import glob\n",
"import os\n",
"from matplotlib import pyplot as plt\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"b. Initialize required constants:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Configure logging format\n",
"log.basicConfig(format=\"[ %(levelname)s ] %(message)s\", level=log.INFO, stream=sys.stdout)\n",
"\n",
"# Path to IR model files\n",
"MODEL_XML = \"./squeezenet_v1.1.xml\"\n",
"MODEL_BIN = \"./squeezenet_v1.1.bin\"\n",
"\n",
"# Target device to run inference\n",
"TARGET_DEVICE = \"CPU\"\n",
"\n",
"# Folder with input images for the model\n",
"IMAGES_FOLDER = \"./images\"\n",
"\n",
"# File containing information about classes names \n",
"LABELS_FILE = \"./image_net_synset.txt\"\n",
"\n",
"# Number of top prediction results to parse\n",
"NTOP = 5\n",
"\n",
"# Required batch size - number of images which will be processed in parallel\n",
"BATCH = 4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"c. Create a plugin instance for the specified target device \n",
"d. Read the IR files and create an `IENEtwork` instance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plugin = IEPlugin(TARGET_DEVICE)\n",
"net = IENetwork(model=MODEL_XML, weights=MODEL_BIN)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"e. Set the network batch size to the constatns specified above. \n",
"\n",
"Batch size is an \"amount\" of input data that will be infered in parallel. In this cases it is a number of images, which will be classified in parallel. \n",
"\n",
"You can set the network batch size using one of the following options:\n",
"1. On the IR generation stage, run the Model Optimizer with `-b` command line option. For example, to generate the IR with batch size equal to 4, add `-b 4` to Model Optimizer command line options. By default, it takes the batch size from the original network in framework representation (usually, it is equal to 1, but in this case, the original Caffe model is provided with the batch size equal to 10). \n",
"2. Use Inference Engine after reading IR. We will use this option.\n",
"\n",
"To set the batch size with the Inference Engine:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"log.info(\"Current network batch size is {}, will be changed to {}\".format(net.batch_size, BATCH))\n",
"net.batch_size = BATCH"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"f. After setting batch size, you can get required information about network input layers.\n",
"To preprocess input images, you need to know input layer shape.\n",
"\n",
"`inputs` property of `IENetwork` returns the dicitonary with input layer names and `InputInfo` objects, which contain information about an input layer including its shape.\n",
"\n",
"SqueezeNet is a single-input toplogy, so to get the input layer name and its shape, you can get the first item from the `inputs` dictionary:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_layer = next(iter(net.inputs))\n",
"n,c,h,w = net.inputs[input_layer].shape\n",
"layout = net.inputs[input_layer].layout\n",
"log.info(\"Network input layer {} has shape {} and layout {}\".format(input_layer, (n,c,h,w), layout))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So what do the shape and layout mean? \n",
"Layout will helps to interprete the shape dimsesnions meaning. \n",
"\n",
"`NCHW` input layer layout means:\n",
"* the fisrt dimension of an input data is a batch of **N** images processed in parallel \n",
"* the second dimension is a numnber of **C**hannels expected in the input images\n",
"* the third and the forth are a spatial dimensions - **H**eight and **W**idth of an input image\n",
"\n",
"Our shapes means that the network expects four 3-channel images running in parallel with size 227x227."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"g. Read and preprocess input images.\n",
"\n",
"For it, go to `IMAGES_FOLDER`, find all `.bmp` files, and take four images for inference:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"search_pattern = os.path.join(IMAGES_FOLDER, \"*.bmp\")\n",
"images = glob.glob(search_pattern)[:BATCH]\n",
"log.info(\"Input images:\\n {}\".format(\"\\n\".join(images)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can read and preprocess the image files and create an array with input blob data.\n",
"\n",
"For preprocessing, you must do the following:\n",
"1. Resize the images to fit the HxW input dimenstions.\n",
"2. Transpose the HWC layout.\n",
"\n",
"Transposing is tricky and not really obvious.\n",
"As you alredy saw above, the network has the `NCHW` layout, so each input image should be in `CHW` format. But by deafult, OpenCV\\* reads images in the `HWC` format. That is why you have to swap the axes using the `numpy.transpose()` function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_data = np.ndarray(shape=(n, c, h, w))\n",
"orig_images = [] # Will be used to show image in notebook\n",
"for i, img in enumerate(images):\n",
" image = cv2.imread(img)\n",
" orig_images.append(image)\n",
" if image.shape[:-1] != (h, w):\n",
" log.warning(\"Image {} is resized from {} to {}\".format(img, image.shape[:-1], (h, w)))\n",
" image = cv2.resize(image, (w, h))\n",
" image = image.transpose((2, 0, 1)) # Change data layout from HWC to CHW\n",
" input_data[i] = image"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"i. Infer the model model to classify input images:\n",
"\n",
"1. Load the `IENetwork` object to the plugin to create `ExectuableNEtwork` object. \n",
"2. Start inference using the `infer()` function specifying dictionary with input layer name and prepared data as an argument for the function. \n",
"3. Measure inference time in miliseconds and calculate throughput metric in frames-per-second (FPS)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"exec_net = plugin.load(net)\n",
"t0 = time()\n",
"res_map = exec_net.infer({input_layer: input_data})\n",
"inf_time = (time() - t0) * 1000 \n",
"fps = BATCH * inf_time \n",
"log.info(\"Inference time: {} ms.\".format(inf_time))\n",
"log.info(\"Throughput: {} fps.\".format(fps))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**4. After the inference, you need to parse and interpretate the inference results.**\n",
"\n",
"First, you need to see the shape of the network output layer. It can be done in similar way as for the inputs, but here you need to call `outputs` property of `IENetwork` object:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"output_layer = next(iter(net.outputs))\n",
"n,c,h,w = net.outputs[output_layer].shape\n",
"layout = net.outputs[output_layer].layout\n",
"log.info(\"Network output layer {} has shape {} and layout {}\".format(output_layer, (n,c,h,w), layout))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is not a common case for classification netowrks to have output layer with *NCHW* layout. Usually, it is just *NC*. However, in this case, the last two dimensions are just a feature of the network and do not have much sense. Ignore them as you will remove them on the final parsing stage. \n",
"\n",
"What are the first and second dimensions of the output layer? \n",
"* The first dimension is a batch. We precoessed four images, and the prediction result for a particular image is stored in the first dimension of the output array. For example, prediction results for the third image is `res[2]` (since numeration starts from 0).\n",
"* The second dimension is an array with normalized probabilities (from 0 to 1) for each class. This network is trained using the <a href=\"http://image-net.org/index\">ImageNet</a> dataset with 1000 classes. Each `n`-th value in the output data for a certain image represent the probability of the image belonging to the `n`-th class. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To parse the output results:\n",
"\n",
"a. Read the `LABELS_FILE`, which maps the class ID to human-readable class names:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open(LABELS_FILE, 'r') as f:\n",
" labels_map = [x.split(sep=' ', maxsplit=1)[-1].strip() for x in f]\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"b. Parse the output array with prediction results. The parsing algorith is the following:\n",
"0. Squeeze the last two \"extra\" dimensions of the output data.\n",
"1. Iterate over all batches.\n",
"2. Sort the probabilities vector descendingly to get `NTOP` classes with the highest probabilities (by default, the `numpy.argsort` sorts the data in the ascending order, but using the array slicing `[::-1]`, you can reverse the data order).\n",
"3. Map the `NTOP` probabilities to the corresponding labeles in `labeles_map`.\n",
"\n",
"For the vizualization, you also need to store top-1 class and probability."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"top1_res = [] # will be used for the visualization\n",
"res = np.squeeze(res_map[output_layer])\n",
"log.info(\"Top {} results: \".format(NTOP))\n",
"for i, probs in enumerate(res):\n",
" top_ind = np.argsort(probs)[-NTOP:][::-1]\n",
" print(\"Image {}\".format(images[i]))\n",
" top1_ind = top_ind[0]\n",
" top1_res.append((labels_map[top1_ind], probs[top1_ind]))\n",
" for id in top_ind:\n",
" print(\"label: {} probability: {:.2f}% \".format(labels_map[id], probs[id] * 100))\n",
" print(\"\\n\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code above prints the results as plain text. \n",
"You can also use OpenCV\\* to visualize the results using the `orig_images` and `top1_res` variables, which you created during images reading and results parsing:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.clf()\n",
"for i, img in enumerate(orig_images):\n",
" label_str = \"{}\".format(top1_res[i][0].split(',')[0])\n",
" prob_str = \"{:.2f}%\".format(top1_res[i][1])\n",
" cv2.putText(img, label_str, (5, 15), cv2.FONT_HERSHEY_COMPLEX, 0.6, (220,100,10), 1)\n",
" cv2.putText(img, prob_str, (5, 35), cv2.FONT_HERSHEY_COMPLEX, 0.6, (220,100,10), 1)\n",
" plt.figure()\n",
" plt.axis(\"off\")\n",
" \n",
" # We have to convert colors, because matplotlib expects an image in RGB color format \n",
" # but by default, the OpenCV read images in BRG format\n",
" im_to_show = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n",
" plt.imshow(im_to_show)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,74 @@
# Neural Style Transfer Python* Sample
This topic demonstrates how to run the Neural Style Transfer sample application, which performs
inference of style transfer models.
> **NOTE**: The OpenVINO™ toolkit does not include a pre-trained model to run the Neural Style Transfer sample. A public model from the [Zhaw's Neural Style Transfer repository](https://github.com/zhaw/neural_style) can be used. Read the [Converting a Style Transfer Model from MXNet*](./docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md) topic from the [Model Optimizer Developer Guide](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) to learn about how to get the trained model and how to convert it to the Inference Engine format (\*.xml + \*.bin).
## How It Works
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Specify Input Shapes** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
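The channel-order caveat above can be illustrated with a minimal NumPy sketch (an illustration only, not part of the sample):

```python
import numpy as np

# A dummy 2x2 image in HWC layout with BGR channel order:
# the top-left pixel is pure blue.
bgr = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# Reversing the last (channel) axis swaps B and R, giving RGB order.
# This is the transformation that --reverse_input_channels effectively
# bakes into the converted model.
rgb = bgr[..., ::-1]

print(rgb[0, 0])  # the same pixel, now [0, 0, 255] in RGB order
```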
## Running
To see the usage message, run the application with the `-h` option:
```
python3 style_transfer_sample.py --help
```
The command yields the following usage message:
```
usage: style_transfer_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION] [-pp PLUGIN_DIR]
[-d DEVICE] [-nt NUMBER_TOP] [-ni NUMBER_ITER]
[--mean_val_r MEAN_VAL_R]
[--mean_val_g MEAN_VAL_G]
[--mean_val_b MEAN_VAL_B] [-pc]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Path to an .xml file with a trained model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Path to a folder with images or paths to image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. Required for CPU custom layers. Absolute
path to a shared library with the MKLDNN (CPU)-targeted
custom layer kernel implementations
-pp PLUGIN_DIR, --plugin_dir PLUGIN_DIR
Path to a plugin folder
-d DEVICE, --device DEVICE
Specify the target device to infer on; CPU, GPU, FPGA,
HDDL or MYRIAD is acceptable. Sample will look for a
suitable plugin for device specified. Default value is CPU
-nt NUMBER_TOP, --number_top NUMBER_TOP
Number of top results
-ni NUMBER_ITER, --number_iter NUMBER_ITER
Number of inference iterations
--mean_val_r MEAN_VAL_R, -mean_val_r MEAN_VAL_R
Mean value of the red channel for mean value subtraction
in postprocessing
--mean_val_g MEAN_VAL_G, -mean_val_g MEAN_VAL_G
Mean value of the green channel for mean value subtraction
in postprocessing
--mean_val_b MEAN_VAL_B, -mean_val_b MEAN_VAL_B
Mean value of the blue channel for mean value subtraction
in postprocessing
-pc, --perf_counts Report performance counters
```
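The `--mean_val_*` options can be understood with a short sketch of the postprocessing step. This is a simplified illustration under the assumption that the network output is a CHW array in BGR channel order; `postprocess` is a hypothetical helper, not the sample's exact code:

```python
import numpy as np

def postprocess(output, mean_bgr=(0.0, 0.0, 0.0)):
    """Turn one CHW network output into a displayable HWC uint8 image.

    mean_bgr corresponds to the --mean_val_b/g/r options.
    """
    img = output.copy()
    for c, mean in enumerate(mean_bgr):
        img[c] += mean                  # add the per-channel mean back
    img = np.clip(img, 0, 255)          # keep values in the 8-bit range
    return img.transpose((1, 2, 0)).astype(np.uint8)  # CHW -> HWC
```

Passing the same mean values that were subtracted from the input during training restores the output image to its original value range.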
Running the application with an empty list of options yields the usage message given above and an error message.
To perform inference on an image using a trained NST model on Intel® CPUs, use the following command:
```
python3 style_transfer_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/1_decoder_FP32.xml
```
### Demo Output
The application outputs one image (`out1.bmp`) or a sequence of images (`out1.bmp`, ..., `out<N>.bmp`), redrawn in the style of the style transfer model used for the sample.
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python
"""
Copyright (c) 2018 Intel Corporation
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,7 +17,7 @@
from __future__ import print_function
import sys
import os
from argparse import ArgumentParser
from argparse import ArgumentParser, SUPPRESS
import cv2
import numpy as np
import logging as log
@@ -26,30 +26,33 @@ from openvino.inference_engine import IENetwork, IEPlugin
def build_argparser():
parser = ArgumentParser()
parser.add_argument("-m", "--model", help="Path to an .xml file with a trained model.", required=True, type=str)
parser.add_argument("-i", "--input", help="Path to a folder with images or path to an image files", required=True,
type=str, nargs="+")
parser.add_argument("-l", "--cpu_extension",
help="MKLDNN (CPU)-targeted custom layers.Absolute path to a shared library with the kernels "
"impl.", type=str, default=None)
parser.add_argument("-pp", "--plugin_dir", help="Path to a plugin folder", type=str, default=None)
parser.add_argument("-d", "--device",
help="Specify the target device to infer on; CPU, GPU, FPGA or MYRIAD is acceptable. Sample "
"will look for a suitable plugin for device specified (CPU by default)", default="CPU",
type=str)
parser.add_argument("-nt", "--number_top", help="Number of top results", default=10, type=int)
parser.add_argument("-ni", "--number_iter", help="Number of inference iterations", default=1, type=int)
parser.add_argument("--mean_val_r", "-mean_val_r",
help="Mean value of red chanel for mean value subtraction in postprocessing ", default=0,
type=float)
parser.add_argument("--mean_val_g", "-mean_val_g",
help="Mean value of green chanel for mean value subtraction in postprocessing ", default=0,
type=float)
parser.add_argument("--mean_val_b", "-mean_val_b",
help="Mean value of blue chanel for mean value subtraction in postprocessing ", default=0,
type=float)
parser.add_argument("-pc", "--perf_counts", help="Report performance counters", default=False, action="store_true")
parser = ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
args.add_argument("-m", "--model", help="Path to an .xml file with a trained model.", required=True, type=str)
args.add_argument("-i", "--input", help="Path to a folder with images or path to an image files", required=True,
type=str, nargs="+")
args.add_argument("-l", "--cpu_extension",
help="Optional. Required for CPU custom layers. "
"Absolute MKLDNN (CPU)-targeted custom layers. Absolute path to a shared library with the "
"kernels implementations", type=str, default=None)
args.add_argument("-pp", "--plugin_dir", help="Path to a plugin folder", type=str, default=None)
args.add_argument("-d", "--device",
help="Specify the target device to infer on; CPU, GPU, FPGA, HDDL or MYRIAD is acceptable. Sample "
"will look for a suitable plugin for device specified. Default value is CPU", default="CPU",
type=str)
args.add_argument("-nt", "--number_top", help="Number of top results", default=10, type=int)
args.add_argument("-ni", "--number_iter", help="Number of inference iterations", default=1, type=int)
args.add_argument("--mean_val_r", "-mean_val_r",
help="Mean value of red chanel for mean value subtraction in postprocessing ", default=0,
type=float)
args.add_argument("--mean_val_g", "-mean_val_g",
help="Mean value of green chanel for mean value subtraction in postprocessing ", default=0,
type=float)
args.add_argument("--mean_val_b", "-mean_val_b",
help="Mean value of blue chanel for mean value subtraction in postprocessing ", default=0,
type=float)
args.add_argument("-pc", "--perf_counts", help="Report performance counters", default=False, action="store_true")
return parser
@@ -101,7 +104,6 @@ def main():
# Loading model to the plugin
log.info("Loading model to the plugin")
exec_net = plugin.load(network=net)
del net
# Start sync inference
log.info("Starting inference ({} iterations)".format(args.number_iter))
@@ -133,8 +135,6 @@ def main():
out_img = os.path.join(os.path.dirname(__file__), "out_{}.bmp".format(batch))
cv2.imwrite(out_img, data)
log.info("Result image was saved to {}".format(out_img))
del exec_net
del plugin
if __name__ == '__main__':

View File

@@ -1,21 +0,0 @@
background
aeroplane
bicycle
bird
boat
bottle
bus
car
cat
chair
cow
diningtable
dog
horse
motorbike
person
pottedplant
sheep
sofa
train
tvmonitor