rm python object_detection_sample_ssd (#8880)

* remove python object_detection_sample_ssd

* rm refs to deleted python sample
Vladimir Dudnik 2021-12-02 09:51:36 +03:00 committed by GitHub
parent 4b8d6c59e3
commit bc8fbf530b
10 changed files with 16 additions and 313 deletions


@ -40,7 +40,6 @@ Inference Engine sample applications include the following:
- **Object Detection for SSD Sample** Inference of object detection networks based on SSD. This is a simplified version of the sample that supports only images as inputs.
- [Object Detection SSD C++ Sample](../../samples/cpp/object_detection_sample_ssd/README.md)
- [Object Detection SSD C Sample](../../samples/c/object_detection_sample_ssd/README.md)
- [Object Detection SSD Python* Sample](../../samples/python/object_detection_sample_ssd/README.md)
> **NOTE**: All C++ samples support input paths containing only ASCII characters, except the Hello Classification Sample, which supports Unicode.


@ -49,7 +49,7 @@ Intermediate blobs between these sub graphs are allocated automatically in the m
Samples can be used with the following command:
```sh
./object_detection_sample_ssd -m <path_to_model>/ModelSSD.xml -i <path_to_pictures>/picture.jpg -d HETERO:GPU,CPU
./hello_classification <path_to_model>/squeezenet1.1.xml <path_to_pictures>/picture.jpg HETERO:GPU,CPU
```
where:
- `HETERO` stands for heterogeneous plugin
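The same device priority string can be passed through the API as well. Below is a minimal sketch using the Inference Engine Python API (the model path is a placeholder):
```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='squeezenet1.1.xml')
# Operations the GPU plugin cannot execute fall back to the CPU plugin
exec_net = ie.load_network(network=net, device_name='HETERO:GPU,CPU')
```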


@ -69,7 +69,6 @@ The attribute names are self-explanatory or match the name in the `hparams_confi
OpenVINO&trade; toolkit provides samples that can be used to infer the EfficientDet model. For more information, refer to
[Object Detection for SSD C++ Sample](@ref openvino_inference_engine_samples_object_detection_sample_ssd_README) and
[Object Detection for SSD Python Sample](@ref openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README).
## <a name="efficientdet-ir-results-interpretation"></a>Interpreting Results of the TensorFlow Model and the IR


@ -174,7 +174,6 @@ limitations under the License.
<tab type="user" title="nGraph Function Creation C++ Sample" url="@ref openvino_inference_engine_samples_ngraph_function_creation_sample_README"/>
<tab type="user" title="nGraph Function Creation Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_ngraph_function_creation_sample_README"/>
<tab type="user" title="Object Detection SSD C++ Sample" url="@ref openvino_inference_engine_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection SSD Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection SSD C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Automatic Speech Recognition C++ Sample" url="@ref openvino_inference_engine_samples_speech_sample_README"/>
<tab type="user" title="Automatic Speech Recognition Python Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_speech_sample_README"/>


@ -54,29 +54,29 @@ The OpenVINO™ workflow on Raspbian* OS is as follows:
## <a name="using-sample"></a>Build and Run Code Samples
Follow the steps below to run the pre-trained Face Detection network using Inference Engine samples from the OpenVINO toolkit.
Follow the steps below to run the pre-trained SqueezeNet image classification network using Inference Engine samples from the OpenVINO toolkit.
1. Create a samples build directory. This example uses a directory named `build`:
```sh
mkdir build && cd build
```
2. Build the Object Detection Sample with the following command:
2. Build the Hello Classification Sample with the following command:
```sh
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino_2022/samples/cpp
make -j2 object_detection_sample_ssd
make -j2 hello_classification
```
3. Download the pre-trained Face Detection model with the [Model Downloader tool](@ref omz_tools_downloader):
3. Download the pre-trained SqueezeNet image classification model with the [Model Downloader tool](@ref omz_tools_downloader):
```sh
git clone --depth 1 https://github.com/openvinotoolkit/open_model_zoo
cd open_model_zoo/tools/downloader
python3 -m pip install -r requirements.in
python3 downloader.py --name face-detection-adas-0001
python3 downloader.py --name squeezenet1.1
```
4. Run the sample, specifying the model and path to the input image:
```sh
./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i <path_to_image>
./armv7l/Release/hello_classification <path_to_model>/squeezenet1.1.xml <path_to_image> MYRIAD
```
The application outputs an image (`out_0.bmp`) with detected faces enclosed in rectangles.
The application outputs the top 10 classification results to the console window.
## <a name="basic-guidelines-sample-application"></a>Basic Guidelines for Using Code Samples


@ -138,25 +138,25 @@ Follow the next steps to use the pre-trained face detection model using Inferenc
```sh
mkdir build && cd build
```
2. Build the Object Detection Sample:
2. Build the Hello Classification Sample:
```sh
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino_2022/samples/cpp
```
```sh
make -j2 object_detection_sample_ssd
make -j2 hello_classification
```
3. Download the pre-trained Face Detection model with the Model Downloader or copy it from the host machine:
3. Download the pre-trained squeezenet1.1 image classification model with the Model Downloader or copy it from the host machine:
```sh
git clone --depth 1 https://github.com/openvinotoolkit/open_model_zoo
cd open_model_zoo/tools/downloader
python3 -m pip install -r requirements.in
python3 downloader.py --name face-detection-adas-0001
python3 downloader.py --name squeezenet1.1
```
4. Run the sample, specifying the model, the path to the input image, and the VPU device required to run with the Raspbian* OS:
```sh
./armv7l/Release/object_detection_sample_ssd -m <path_to_model>/face-detection-adas-0001.xml -d MYRIAD -i <path_to_image>
./armv7l/Release/hello_classification <path_to_model>/squeezenet1.1.xml <path_to_image> MYRIAD
```
The application outputs an image (`out_0.bmp`) with detected faces enclosed in rectangles.
The application outputs the top 10 classification results to the console window.
Congratulations, you have finished the OpenVINO™ toolkit for Raspbian* OS installation. You have completed all required installation, configuration and build steps in this guide.


@ -24,7 +24,7 @@ Basic Inference Engine API is covered by [Hello Classification C sample](../hell
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx)
| Validated images | The sample uses OpenCV* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (.bmp, .png, .jpg)
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../samples/cpp/object_detection_sample_ssd/README.md), [Python](../../python/object_detection_sample_ssd/README.md) |
| Other language realization | [C++](../../../samples/cpp/object_detection_sample_ssd/README.md) |
## How It Works


@ -20,7 +20,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
| Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
| Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C](../../../samples/c/object_detection_sample_ssd/README.md), [Python](../../../samples/python/object_detection_sample_ssd/README.md) |
| Other language realization | [C](../../../samples/c/object_detection_sample_ssd/README.md) |
## How It Works


@ -1,133 +0,0 @@
# Object Detection SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README}
This sample demonstrates how to do inference of object detection networks using the Synchronous Inference Request API.
Models with 1 input and 1 or 2 outputs are supported. In the latter case, the names of the output blobs must be "boxes" and "labels".
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
| :----------------------- | :-------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
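For instance, loading a CPU extension library and a GPU kernel configuration through these calls might look like this (a sketch; the paths are placeholders):
```
ie = IECore()
ie.add_extension('/path/to/cpu_extension.so', 'CPU')           # custom CPU kernels
ie.set_config({'CONFIG_FILE': '/path/to/kernels.xml'}, 'GPU')  # custom GPU kernels
```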
| Options | Values |
| :------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Validated Models | [mobilenet-ssd](@ref omz_models_model_mobilenet_ssd), [face-detection-0206](@ref omz_models_model_face_detection_0206) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../samples/cpp/object_detection_sample_ssd/README.md), [C](../../c/object_detection_sample_ssd/README.md) |
## How It Works
On startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data.
As a result, the program creates an output image, logging each step in a standard output stream.
You can see the explicit description of each sample step in the [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
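Condensed to its essentials, the flow looks roughly like this (a minimal sketch assuming the 2021.x `openvino.inference_engine` API; `model.xml` and `image.jpg` are placeholder paths, and output parsing is omitted):
```
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='model.xml')                     # IR (.xml + .bin) or .onnx
input_blob = next(iter(net.input_info))
net.input_info[input_blob].precision = 'U8'                  # match the uint8 image data
_, _, net_h, net_w = net.input_info[input_blob].input_data.shape
exec_net = ie.load_network(network=net, device_name='CPU')

image = cv2.resize(cv2.imread('image.jpg'), (net_w, net_h))  # fit the network input size
image = np.expand_dims(image.transpose((2, 0, 1)), axis=0)   # HWC -> NCHW
res = exec_net.infer(inputs={input_blob: image})             # synchronous inference
```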
## Running
Run the application with the `-h` option to see the usage message:
```
python <path_to_sample>/object_detection_sample_ssd.py -h
```
Usage message:
```
usage: object_detection_sample_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION]
[-c CONFIG] [-d DEVICE]
[--labels LABELS]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml or .onnx file with a trained
model.
-i INPUT, --input INPUT
Required. Path to an image file.
-l EXTENSION, --extension EXTENSION
Optional. Required by the CPU Plugin for executing the
custom operation on a CPU. Absolute path to a shared
library with the kernels implementations.
-c CONFIG, --config CONFIG
Optional. Required by GPU or VPU Plugins for the
custom operation kernel. Absolute path to operation
description file (.xml).
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU.
--labels LABELS Optional. Path to a labels mapping file.
```
To run the sample, you need to specify a model and an image:

- You can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified; a minimal example is shown after these notes. For more information about the argument, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
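For illustration, a minimal way to do that rearrangement in Python (assuming an HWC image as returned by `cv2.imread`) is to reverse the channel axis:
```
image = cv2.imread('car.bmp')  # OpenCV loads pixels in BGR order
image = image[:, :, ::-1]      # BGR -> RGB, for models trained on RGB input
```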
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```
python <path_to_omz_tools>/downloader.py --name mobilenet-ssd
```
2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
```
python <path_to_omz_tools>/converter.py --name mobilenet-ssd
```
3. Perform inference of `car.bmp` using the `mobilenet-ssd` model on a `GPU`, for example:
```
python <path_to_sample>/object_detection_sample_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/car.bmp -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with over 50% confidence.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\mobilenet-ssd\FP32\mobilenet-ssd.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image c:\images\car.bmp is resized from (637, 749) to (300, 300)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Found: label = 7, confidence = 1.00, coords = (228, 120), (502, 460)
[ INFO ] Found: label = 7, confidence = 0.95, coords = (637, 233), (743, 608)
[ INFO ] Image out.bmp created!
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
[IECore.set_config]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
[IENetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[IENetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
[InputInfoPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519


@ -1,161 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import argparse
import logging as log
import os
import sys

import cv2
import numpy as np
from openvino.inference_engine import IECore
def parse_args() -> argparse.Namespace:
    """Parse and return command line arguments"""
    parser = argparse.ArgumentParser(add_help=False)
    args = parser.add_argument_group('Options')
    # fmt: off
    args.add_argument('-h', '--help', action='help', help='Show this help message and exit.')
    args.add_argument('-m', '--model', required=True, type=str,
                      help='Required. Path to an .xml or .onnx file with a trained model.')
    args.add_argument('-i', '--input', required=True, type=str, help='Required. Path to an image file.')
    args.add_argument('-l', '--extension', type=str, default=None,
                      help='Optional. Required by the CPU Plugin for executing the custom operation on a CPU. '
                           'Absolute path to a shared library with the kernels implementations.')
    args.add_argument('-c', '--config', type=str, default=None,
                      help='Optional. Required by GPU or VPU Plugins for the custom operation kernel. '
                           'Absolute path to operation description file (.xml).')
    args.add_argument('-d', '--device', default='CPU', type=str,
                      help='Optional. Specify the target device to infer on; CPU, GPU, MYRIAD, HDDL or HETERO: '
                           'is acceptable. The sample will look for a suitable plugin for device specified. '
                           'Default value is CPU.')
    args.add_argument('--labels', default=None, type=str, help='Optional. Path to a labels mapping file.')
    # fmt: on
    return parser.parse_args()
def main():  # noqa
    log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
    args = parse_args()

    # ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
    log.info('Creating Inference Engine')
    ie = IECore()

    if args.extension and args.device == 'CPU':
        log.info(f'Loading the {args.device} extension: {args.extension}')
        ie.add_extension(args.extension, args.device)

    if args.config and args.device in ('GPU', 'MYRIAD', 'HDDL'):
        log.info(f'Loading the {args.device} configuration: {args.config}')
        ie.set_config({'CONFIG_FILE': args.config}, args.device)

    # ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation or ONNX format---------------
    log.info(f'Reading the network: {args.model}')
    # (.xml and .bin files) or (.onnx file)
    net = ie.read_network(model=args.model)

    if len(net.input_info) != 1:
        log.error('The sample supports only single input topologies')
        return -1

    if len(net.outputs) != 1 and not ('boxes' in net.outputs and 'labels' in net.outputs):
        log.error('The sample supports models with 1 output or with 2 outputs named "boxes" and "labels"')
        return -1
    # ---------------------------Step 3. Configure input & output----------------------------------------------------------
    log.info('Configuring input and output blobs')
    # Get name of input blob
    input_blob = next(iter(net.input_info))

    # Set input and output precision manually
    net.input_info[input_blob].precision = 'U8'

    if len(net.outputs) == 1:
        output_blob = next(iter(net.outputs))
        net.outputs[output_blob].precision = 'FP32'
    else:
        net.outputs['boxes'].precision = 'FP32'
        net.outputs['labels'].precision = 'U16'

    # ---------------------------Step 4. Loading model to the device-------------------------------------------------------
    log.info('Loading the model to the plugin')
    exec_net = ie.load_network(network=net, device_name=args.device)

    # ---------------------------Step 5. Create infer request--------------------------------------------------------------
    # load_network() method of the IECore class with a specified number of requests (default 1) returns an ExecutableNetwork
    # instance which stores infer requests. So you already created Infer requests in the previous step.
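    # (For example, the first request is available as exec_net.requests[0];
    # this sample uses the blocking ExecutableNetwork.infer() helper below instead.)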
    # ---------------------------Step 6. Prepare input---------------------------------------------------------------------
    original_image = cv2.imread(args.input)
    image = original_image.copy()
    _, _, net_h, net_w = net.input_info[input_blob].input_data.shape

    if image.shape[:-1] != (net_h, net_w):
        log.warning(f'Image {args.input} is resized from {image.shape[:-1]} to {(net_h, net_w)}')
        image = cv2.resize(image, (net_w, net_h))

    # Change data layout from HWC to CHW
    image = image.transpose((2, 0, 1))
    # Add N dimension to transform to NCHW
    image = np.expand_dims(image, axis=0)

    # ---------------------------Step 7. Do inference----------------------------------------------------------------------
    log.info('Starting inference in synchronous mode')
    res = exec_net.infer(inputs={input_blob: image})
    # ---------------------------Step 8. Process output--------------------------------------------------------------------
    # Generate a label list from the mapping file, if provided
    label_names = None
    if args.labels:
        with open(args.labels, 'r') as f:
            label_names = [line.split(',')[0].strip() for line in f]

    output_image = original_image.copy()
    h, w, _ = output_image.shape

    if len(net.outputs) == 1:
        res = res[output_blob]
        # Change a shape of a numpy.ndarray with results ([1, 1, N, 7]) to get another one ([N, 7]),
        # where N is the number of detected bounding boxes
        detections = res.reshape(-1, 7)
    else:
        detections = res['boxes']
        class_ids = res['labels']

    # Redefine scale coefficients
    w, h = w / net_w, h / net_h
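    # Each detection from the single-output model is one row of the SSD DetectionOutput blob:
    # [image_id, class_id, confidence, xmin, ymin, xmax, ymax]; the scale factors above map
    # box coordinates back to the original image size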
    for i, detection in enumerate(detections):
        if len(net.outputs) == 1:
            _, class_id, confidence, xmin, ymin, xmax, ymax = detection
        else:
            class_id = class_ids[i]
            xmin, ymin, xmax, ymax, confidence = detection

        if confidence > 0.5:
            class_id = int(class_id)
            label = label_names[class_id] if label_names else class_id

            xmin = int(xmin * w)
            ymin = int(ymin * h)
            xmax = int(xmax * w)
            ymax = int(ymax * h)

            log.info(f'Found: label = {label}, confidence = {confidence:.2f}, coords = ({xmin}, {ymin}), ({xmax}, {ymax})')

            # Draw a bounding box on the output image
            cv2.rectangle(output_image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)

    cv2.imwrite('out.bmp', output_image)
    if os.path.exists('out.bmp'):
        log.info('Image out.bmp created!')
    else:
        log.error('Image out.bmp was not created. Check your permissions.')

    # ----------------------------------------------------------------------------------------------------------------------
    log.info('This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n')
    return 0


if __name__ == '__main__':
    sys.exit(main())