rm C/C++ object_detection_sample_ssd (#9020)

* rm C/C++ object_detection_sample_ssd

* fix link
Vladimir Dudnik 2021-12-06 10:38:40 +03:00 committed by GitHub
parent f8c5a4abf4
commit 355fdeec55
19 changed files with 16 additions and 2202 deletions

View File

@ -34,9 +34,6 @@ Inference Engine sample applications include the following:
- **nGraph Function Creation Sample** Construction of the LeNet network using the nGraph function creation sample.
- [nGraph Function Creation C++ Sample](../../samples/cpp/ngraph_function_creation_sample/README.md)
- [nGraph Function Creation Python Sample](../../samples/python/ngraph_function_creation_sample/README.md)
- **Object Detection for SSD Sample** Inference of object detection networks based on SSD. This sample is a simplified version that supports only images as inputs.
- [Object Detection SSD C++ Sample](../../samples/cpp/object_detection_sample_ssd/README.md)
- [Object Detection SSD C Sample](../../samples/c/object_detection_sample_ssd/README.md)
> **NOTE**: All C++ samples support input paths containing only ASCII characters, except the Hello Classification Sample, which supports Unicode.

View File

@ -68,7 +68,7 @@ The attribute names are self-explanatory or match the name in the `hparams_confi
> **NOTE:** The color channel order (RGB or BGR) of an input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion specifying the command-line parameter: `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../Converting_Model_General.md).
OpenVINO&trade; toolkit provides samples that can be used to infer the EfficientDet model. For more information, refer to
[Object Detection for SSD C++ Sample](@ref openvino_inference_engine_samples_object_detection_sample_ssd_README) and
[Open Model Zoo Demos](@ref omz_demos).
## <a name="efficientdet-ir-results-interpretation"></a>Interpreting Results of the TensorFlow Model and the IR

View File

@ -55,13 +55,19 @@ For example, if you downloaded the [pre-trained SSD InceptionV2 topology](http:/
<INSTALL_DIR>/tools/model_optimizer/mo_tf.py --input_model=/tmp/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --transformations_config <INSTALL_DIR>/tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /tmp/ssd_inception_v2_coco_2018_01_28/pipeline.config --reverse_input_channels
```
## OpenVINO&trade; Toolkit Samples and Open Model Zoo Demos
Inference Engine comes with a number of samples that demonstrate use of the OpenVINO API. Additionally,
Open Model Zoo provides a set of demo applications that show how to implement close-to-real-life applications
based on deep learning for various tasks, including Image Classification, Visual Object Detection, Text Recognition,
Speech Recognition, Natural Language Processing and others. Refer to the links below for more details.
* [Inference Engine Samples](../../../../IE_DG/Samples_Overview.md)
* [Open Model Zoo Demos](@ref omz_demos)
## Important Notes About Feeding Input Images to the Samples
Inference Engine comes with a number of samples to infer Object Detection API models, including:
* [Object Detection for SSD Sample](../../../../../samples/cpp/object_detection_sample_ssd/README.md) --- for R-FCN, SSD and Faster R-CNN models
* [Mask R-CNN Sample for TensorFlow* Object Detection API Models](@ref omz_demos_mask_rcnn_demo_cpp) --- for Mask R-CNNs
There are several important notes about feeding input images to the samples:
1. Inference Engine samples stretch the input image to the size of the network input without preserving aspect ratio. This behavior is usually correct for most topologies (including SSDs), but incorrect for other models like Faster R-CNN, Mask R-CNN and R-FCN. These models usually use a keep-aspect-ratio resizer. The type of pre-processing is defined in the `image_resizer` section of the pipeline configuration file. If a keep-aspect-ratio resizer is used, it is necessary to resize the image before passing it to the sample and, optionally, pad the resized image with zeros (if the attribute "pad_to_max_dimension" in the pipeline.config is equal to "true"); see the sketch below.
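Here is a minimal sketch of that pre-processing in C. The helper name, the example frame size, and the right/bottom padding policy are illustrative assumptions, not part of any sample:
```c
#include <stdio.h>

/* Compute the resized dimensions that preserve aspect ratio, plus the
 * zero-padding needed to reach the model input size when
 * "pad_to_max_dimension" is "true" (padding side is an assumption). */
static void keep_aspect_ratio(int src_w, int src_h, int dst_w, int dst_h,
                              int* out_w, int* out_h, int* pad_r, int* pad_b) {
    double scale_w = (double)dst_w / src_w;
    double scale_h = (double)dst_h / src_h;
    double scale = scale_w < scale_h ? scale_w : scale_h; /* fit inside */
    *out_w = (int)(src_w * scale);
    *out_h = (int)(src_h * scale);
    *pad_r = dst_w - *out_w; /* zero-pad on the right */
    *pad_b = dst_h - *out_h; /* zero-pad on the bottom */
}

int main(void) {
    int w, h, pad_r, pad_b;
    /* e.g. a 1280x720 frame fed to a hypothetical 600x600 model input */
    keep_aspect_ratio(1280, 720, 600, 600, &w, &h, &pad_r, &pad_b);
    printf("resize to %dx%d, pad %d px right, %d px bottom\n", w, h, pad_r, pad_b);
    return 0;
}
```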

View File

@ -173,8 +173,6 @@ limitations under the License.
<tab type="user" title="Hello Query Device Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README"/>
<tab type="user" title="nGraph Function Creation C++ Sample" url="@ref openvino_inference_engine_samples_ngraph_function_creation_sample_README"/>
<tab type="user" title="nGraph Function Creation Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_ngraph_function_creation_sample_README"/>
<tab type="user" title="Object Detection SSD C++ Sample" url="@ref openvino_inference_engine_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection SSD C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Automatic Speech Recognition C++ Sample" url="@ref openvino_inference_engine_samples_speech_sample_README"/>
<tab type="user" title="Automatic Speech Recognition Python Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_speech_sample_README"/>
<tab type="user" title="Benchmark C++ Tool" url="@ref openvino_inference_engine_samples_benchmark_app_README"/>

View File

@ -432,13 +432,6 @@ Template to call sample code or a demo application:
<path_to_app> -i <path_to_media> -m <path_to_model> -d <target_device>
```
With the sample information specified, the command might look like this:
```sh
./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU
```
## <a name="advanced-samples"></a> Advanced Demo Use
Some demo applications let you use multiple models for different purposes. In these cases, the output of the first model is usually used as the input for later models.
@ -453,22 +446,6 @@ For head pose:
`-m_hp <headpose model> -d_hp <headpose hardware target>`
**Example of an Entire Command (object_detection + head pose):**
```sh
./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU -m_hp headpose.xml \
-d_hp CPU
```
**Example of an Entire Command (object_detection + head pose + age-gender):**
```sh
./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU -m_hp headpose.xml \
-d_hp CPU -m_ag age-gender.xml -d_ag CPU
```
You can see all of a sample application's parameters by adding the `-h` or `--help` option at the command line.

View File

@ -416,22 +416,6 @@ For head pose:
`-m_hp <headpose model> -d_hp <headpose hardware target>`
**Example of an Entire Command (object_detection + head pose):**
```sh
./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU -m_hp headpose.xml \
-d_hp CPU
```
**Example of an Entire Command (object_detection + head pose + age-gender):**
```sh
./object_detection_demo_ssd_async -i ~/Videos/catshow.mp4 \
-m ~/ir/fp32/mobilenet-ssd.xml -d CPU -m_hp headpose.xml \
-d_hp CPU -m_ag age-gender.xml -d_ag CPU
```
You can see all of a sample application's parameters by adding the `-h` or `--help` option at the command line.

View File

@ -396,13 +396,6 @@ Template to call sample code or a demo application:
<path_to_app> -i <path_to_media> -m <path_to_model> -d <target_device>
```
With the sample information specified, the command might look like this:
```bat
.\object_detection_demo_ssd_async -i C:\Users\<USER_ID>\Documents\Videos\catshow.mp4 ^
-m C:\Users\<USER_ID>\Documents\ir\fp32\mobilenet-ssd.xml -d CPU
```
## <a name="advanced-samples"></a> Advanced Demo Use
Some demo applications let you use multiple models for different purposes. In these cases, the output of the first model is usually used as the input for later models.
@ -417,22 +410,6 @@ For head pose:
`-m_hp <headpose model> -d_hp <headpose hardware target>`
**Example of an Entire Command (object_detection + head pose):**
```bat
.\object_detection_demo_ssd_async -i C:\Users\<USER_ID>\Documents\Videos\catshow.mp4 ^
-m C:\Users\<USER_ID>\Documents\ir\fp32\mobilenet-ssd.xml -d CPU -m_hp headpose.xml ^
-d_hp CPU
```
**Example of an Entire Command (object_detection + head pose + age-gender):**
```bat
.\object_detection_demo_ssd_async -i C:\Users\<USER_ID>\Documents\Videos\catshow.mp4 ^
-m C:\Users\<USER_ID>\Documents\ir\fp32\mobilenet-ssd.xml -d CPU -m_hp headpose.xml ^
-d_hp CPU -m_ag age-gender.xml -d_ag CPU
```
You can see all of a sample application's parameters by adding the `-h` or `--help` option at the command line.

View File

@ -238,12 +238,12 @@ For general details on the heterogeneous plugin, refer to the [corresponding sec
### Trying the Heterogeneous Plugin with Inference Engine Samples <a name="heterogeneous-plugin-with-samples"></a>
Every Inference Engine sample supports the `-d` (device) option, so the target device can be specified from the command line for every sample.
For example, here is a command to run the [Object Detection SSD Sample](../../samples/cpp/object_detection_sample_ssd/README.md):
For example, here is a command to run the [Classification Sample Async](../../samples/cpp/classification_sample_async/README.md):
```sh
./object_detection_sample_ssd -m <path_to_model>/ModelSSD.xml -i <path_to_pictures>/picture.jpg -d HETERO:GPU,CPU
./classification_sample_async -m <path_to_model>/Model.xml -i <path_to_pictures>/picture.jpg -d HETERO:GPU,CPU
```
where:

View File

@ -1,9 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
ie_add_sample(NAME object_detection_sample_ssd_c
SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/main.c"
HEADERS "${CMAKE_CURRENT_SOURCE_DIR}/object_detection_sample_ssd.h"
"${CMAKE_CURRENT_SOURCE_DIR}/c_w_dirent.h"
DEPENDENCIES opencv_c_wrapper)

View File

@ -1,171 +0,0 @@
# Object Detection SSD C Sample {#openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README}
This sample demonstrates how to execute an inference of object detection networks like SSD-VGG using the Asynchronous Inference Request API and the [input reshape feature](../../../docs/IE_DG/ShapeInference.md).
The Object Detection SSD C sample application demonstrates how to use the following Inference Engine C API in applications:
| Feature | API | Description |
|:--- |:--- |:---
|Asynchronous Infer |[ie_infer_request_infer_async], [ie_infer_request_wait]| Do asynchronous inference
|Inference Engine Version| [ie_c_api_version] | Get Inference Engine API version
|Available Devices| [ie_core_get_versions] | Get version information of the devices for inference
|Custom Extension Kernels|[ie_core_add_extension], [ie_core_set_config]| Load extension library and config to the device
|Network Operations|[ie_network_get_inputs_number], [ie_network_get_input_dims], [ie_network_get_input_shapes], [ie_network_get_outputs_number], [ie_network_get_output_dims]| Manage the network
|Blob Operations|[ie_blob_get_buffer]| Work with the memory container for storing inputs and outputs of the network, and weights and biases of the layers
|Input Reshape|[ie_network_reshape]| Set the batch size equal to the number of input images
Basic Inference Engine API is covered by [Hello Classification C sample](../hello_classification/README.md).
> **NOTE**: This sample uses `ie_network_reshape()` to set the batch size. While supported by SSD networks, reshape may not work with arbitrary topologies. See [Shape Inference Guide](../../../docs/IE_DG/ShapeInference.md) for more info.
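For reference, the batch reshape described in this note boils down to the following calls. This is a condensed sketch extracted from the sample's `main.c` (shown in full later in this commit), with error handling reduced to status propagation:
```c
#include <c_api/ie_c_api.h>

/* Set the batch dimension (N in NCHW) to the number of input images.
 * Condensed from the sample's main.c. */
static IEStatusCode set_batch(ie_network_t* network, size_t image_num) {
    input_shapes_t shapes;
    IEStatusCode status = ie_network_get_input_shapes(network, &shapes);
    if (status != OK)
        return status;
    shapes.shapes[0].shape.dims[0] = image_num; /* first dim is the batch */
    status = ie_network_reshape(network, shapes);
    ie_network_input_shapes_free(&shapes);
    return status;
}
```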
| Options | Values |
|:--- |:---
| Validated Models | [person-detection-retail-0013](@ref omz_models_model_person_detection_retail_0013)
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx)
| Validated images | The sample uses OpenCV* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (.bmp, .png, .jpg)
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../samples/cpp/object_detection_sample_ssd/README.md) |
## How It Works
Upon start-up, the sample application reads command-line parameters and loads the specified network and image(s) into the Inference
Engine plugin. Then, the sample creates an asynchronous inference request object. When inference is done, the application creates output image(s) and writes output data to the standard output stream. The underlying call sequence is sketched below.
You can see an explicit description of
each sample step in the [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
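The sequence of Inference Engine C API calls behind these steps can be outlined as follows. This is a condensed sketch of the sample's `main.c` (the full source appears later in this commit); error handling and image pre-/post-processing are omitted:
```c
#include <c_api/ie_c_api.h>

/* Condensed outline of the sample flow; each call below is taken from the
 * full main.c. Error handling and image handling are omitted here. */
static int run(const char* model_xml, const char* device) {
    ie_core_t* core = NULL;
    ie_network_t* network = NULL;
    ie_executable_network_t* exe_network = NULL;
    ie_infer_request_t* request = NULL;
    ie_config_t cfg = {NULL, NULL, NULL};

    ie_core_create("", &core);                              /* Step 1: core       */
    ie_core_read_network(core, model_xml, NULL, &network);  /* Step 2: read model */
    /* Step 3: configure input/output precision and reshape the batch here. */
    ie_core_load_network(core, network, device, &cfg, &exe_network); /* Step 4 */
    ie_exec_network_create_infer_request(exe_network, &request);     /* Step 5 */
    /* Step 6: fill the input blob with image data. */
    ie_infer_request_infer_async(request);                  /* Step 7: start async */
    ie_infer_request_wait(request, -1);                     /* ...and wait         */
    /* Step 8: read the output blob and draw the detections. */

    ie_infer_request_free(&request);
    ie_exec_network_free(&exe_network);
    ie_network_free(&network);
    ie_core_free(&core);
    return 0;
}
```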
## Building
To build the sample, use the instructions available in the [Build the Sample Applications](../../../docs/IE_DG/Samples_Overview.md) section of the Inference Engine Samples guide.
## Running
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
Running the application with the `-h` option yields the following usage message:
```
<path_to_sample>/object_detection_sample_ssd_c -h
[ INFO ] InferenceEngine:
<version><number>
[ INFO ] Parsing input parameters
object_detection_sample_ssd_c [OPTION]
Options:
-h Print a usage message.
-m "<path>" Required. Path to an .xml file with a trained model.
-i "<path>" Required. Path to one or more images or folder with images.
-l "<absolute_path>" Required for CPU plugin custom layers. Absolute path to a shared library with the kernels implementations.
Or
-c "<absolute_path>" Required for GPU, MYRIAD, HDDL custom kernels. Absolute path to the .xml config file
with the kernels descriptions.
-d "<device>" Optional. Specify the target device to infer. Default value is CPU.
Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. Sample will look for a suitable plugin for device specified
-g Path to the configuration file. Default value: "config".
```
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```
python <path_to_omz_tools>/downloader.py --name person-detection-retail-0013
```
2. The `person-detection-retail-0013` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script:
```
python <path_to_omz_tools>/converter.py --name <model_name>
```
3. For example, to perform inference on a CPU with the OpenVINO&trade; toolkit person detection SSD models, run one of the following commands:
- with one image and [person-detection-retail-0013](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) model
```
<path_to_sample>/object_detection_sample_ssd_c -i <path_to_image>/inputImage.bmp -m <path_to_model>/person-detection-retail-0013.xml -d CPU
```
- with several images and the [person-detection-retail-0013](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) model
```
<path_to_sample>/object_detection_sample_ssd_c -i <path_to_image>/inputImage1.bmp <path_to_image>/inputImage2.bmp ... -m <path_to_model>/person-detection-retail-0013.xml -d CPU
```
- with [person-detection-retail-0002](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0002_description_person_detection_retail_0002.html) model
```
<path_to_sample>/object_detection_sample_ssd_c -i <path_to_folder_with_images> -m <path_to_model>/person-detection-retail-0002.xml -d CPU
```
## Sample Output
The application outputs several images (`out_0.bmp`, `out_1.bmp`, ... ) with detected objects enclosed in rectangles. It outputs the list of
classes of the detected objects along with the respective confidence values and the coordinates of the rectangles to the standard output stream.
```
<path_to_sample>/object_detection_sample_ssd_c -m person-detection-retail-0013.xml -i image_1.png image_2.jpg
[ INFO ] InferenceEngine:
<version><number>
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 2
[ INFO ] image_1.png
[ INFO ] image_2.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
CPU
MKLDNNPlugin version ......... <version><number>
Build ......... <version><number>
[ INFO ] Loading network:
person-detection-retail-0013.xml
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (1699, 960) to (544, 320)
[ WARNING ] Image is resized from (614, 346) to (544, 320)
[ INFO ] Batch size is 2
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ INFO ] Start inference
[ INFO ] Processing output blobs
[0, 1] element, prob = 0.999090 (370, 201)-(634, 762) batch id : 0 WILL BE PRINTED!
[1, 1] element, prob = 0.997386 (836, 192)-(999, 663) batch id : 0 WILL BE PRINTED!
[2, 1] element, prob = 0.314753 (192, 2)-(265, 172) batch id : 0
...
[ INFO ] Image out_0.bmp created!
[ INFO ] Image out_1.bmp created!
[ INFO ] Execution successful
This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[ie_infer_request_infer_async]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#gad2351010e292b6faec959a3d5a8fb60e
[ie_infer_request_wait]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#ga0c05e63e63c8d9cdd92900e82b0137c9
[ie_c_api_version]:https://docs.openvinotoolkit.org/latest/ie_c_api/ie__c__api_8h.html#a8fe3efe9cc606dcc7bec203102043e68
[ie_core_get_versions]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#ga2932e188a690393f5d594572ac5d237b
[ie_core_add_extension]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#gadded2444ba81d2d396516b72c2478f8e
[ie_core_set_config]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#gaf09d1e77cc264067e4e22ddf99f21ec1
[ie_network_get_inputs_number]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga6a3349bca66c4ba8b41a434061fccf52
[ie_network_get_input_dims]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#gac621a654b89d413041cbc2288627f6a5
[ie_network_get_input_shapes]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga5409734f25ffbb1379e876217c0bc6f3
[ie_network_get_outputs_number]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga869b8c309797f1e09f73ddffd1b57509
[ie_network_get_output_dims]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga8de7bf2f626f19eba08a2f043fc1b5d2
[ie_network_reshape]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#gac4f690afd0c2221f7db2ff9be4aa0637
[ie_blob_get_buffer]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Blob.html#ga948e0186cea6a393c113d5c399cfcb4c

View File

@ -1,189 +0,0 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#if defined(_WIN32)
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN_UNDEF
# endif
# ifndef NOMINMAX
# define NOMINMAX
# define NOMINMAX_UNDEF
# endif
# if defined(_M_IX86) && !defined(_X86_) && !defined(_AMD64_)
# define _X86_
# endif
# if defined(_M_X64) && !defined(_X86_) && !defined(_AMD64_)
# define _AMD64_
# endif
# if defined(_M_ARM) && !defined(_ARM_) && !defined(_ARM64_)
# define _ARM_
# endif
# if defined(_M_ARM64) && !defined(_ARM_) && !defined(_ARM64_)
# define _ARM64_
# endif
// clang-format off
#include <string.h>
#include <windef.h>
#include <fileapi.h>
#include <Winbase.h>
#include <sys/stat.h>
// clang-format on
// Copied from linux libc sys/stat.h:
# define S_ISREG(m) (((m)&S_IFMT) == S_IFREG)
# define S_ISDIR(m) (((m)&S_IFMT) == S_IFDIR)
/// @brief structure to store directory names
typedef struct dirent {
char* d_name;
} dirent;
/**
 * @brief Create a dirent from a wide-character file path
 * @param wsFilePath wide-character file path to convert
 * @return pointer to the newly allocated dirent
 */
static dirent* createDirent(const wchar_t* wsFilePath) {
dirent* d = (dirent*)malloc(sizeof(dirent));
size_t i;
size_t slen = wcslen(wsFilePath);
d->d_name = (char*)(malloc(slen + 1));
wcstombs_s(&i, d->d_name, slen + 1, wsFilePath, slen);
return d;
}
/**
 * @brief Free a dirent allocated by createDirent
 * @param d pointer to the dirent pointer; set to NULL after freeing
 * @return none
 */
static void freeDirent(dirent** d) {
free((*d)->d_name);
(*d)->d_name = NULL;
free(*d);
*d = NULL;
}
/// @brief structure to store directory data (files meta)
typedef struct DIR {
WIN32_FIND_DATAA FindFileData;
HANDLE hFind;
dirent* next;
} DIR;
/**
 * @brief Check whether the first string ends with the second
 * @param src string to check
 * @param with suffix to look for
 * @return status 1(success) or 0(fail)
 */
static int endsWith(const char* src, const char* with) {
    int wl = (int)(strlen(with));
    int so = (int)(strlen(src)) - wl; /* offset of the would-be suffix in src */
    if (so < 0)
        return 0;
    if (strncmp(with, &(src[so]), wl) == 0)
        return 1;
    else
        return 0;
}
/**
 * @brief Check whether the file handle is valid
* @param struct of directory data
* @return status 1(success) or 0(fail)
*/
static int isValid(DIR* dp) {
if (dp->hFind != INVALID_HANDLE_VALUE && dp->FindFileData.dwReserved0) {
return 1;
} else {
return 0;
}
}
/**
* @brief Create directory data struct element
* @param string directory path
* @return pointer to directory data struct element
*/
static DIR* opendir(const char* dirPath) {
DIR* dp = (DIR*)malloc(sizeof(DIR));
dp->next = NULL;
char* ws = (char*)(malloc(strlen(dirPath) + 1));
strcpy(ws, dirPath);
if (endsWith(ws, "\\"))
strcat(ws, "*");
else
strcat(ws, "\\*");
dp->hFind = FindFirstFileA(ws, &dp->FindFileData);
dp->FindFileData.dwReserved0 = dp->hFind != INVALID_HANDLE_VALUE;
free(ws);
if (!isValid(dp)) {
free(dp);
return NULL;
}
return dp;
}
/**
 * @brief Walk through the directory data struct
* @param pointer to directory data struct
* @return pointer to directory data struct next element
*/
static struct dirent* readdir(DIR* dp) {
if (dp->next != NULL)
freeDirent(&(dp->next));
if (!dp->FindFileData.dwReserved0)
return NULL;
wchar_t wbuf[4096];
size_t outSize;
mbstowcs_s(&outSize, wbuf, 4094, dp->FindFileData.cFileName, 4094);
dp->next = createDirent(wbuf);
dp->FindFileData.dwReserved0 = FindNextFileA(dp->hFind, &(dp->FindFileData));
return dp->next;
}
/**
* @brief Remove directory data struct
* @param pointer to struct directory data
* @return none
*/
static void closedir(DIR* dp) {
if (dp->next) {
freeDirent(&(dp->next));
}
free(dp);
}
# ifdef WIN32_LEAN_AND_MEAN_UNDEF
# undef WIN32_LEAN_AND_MEAN
# undef WIN32_LEAN_AND_MEAN_UNDEF
# endif
# ifdef NOMINMAX_UNDEF
# undef NOMINMAX_UNDEF
# undef NOMINMAX
# endif
#else
# include <dirent.h>
# include <sys/types.h>
#endif
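/* Usage sketch (illustration only; this mirrors how main.c below consumes
 * the shim, and the path is hypothetical):
 *
 *   DIR* dp = opendir("C:\\images");
 *   if (dp) {
 *       struct dirent* ep;
 *       while ((ep = readdir(dp)) != NULL)
 *           printf("%s\n", ep->d_name);
 *       closedir(dp);
 *   }
 */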

View File

@ -1,852 +0,0 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <c_api/ie_c_api.h>
#include <opencv_c_wrapper.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include "object_detection_sample_ssd.h"
#ifdef _WIN32
# include "c_w_dirent.h"
#else
# include <dirent.h>
#endif
#define MAX_IMAGES 20
static const char* img_msg = NULL;
static const char* input_model = NULL;
static const char* device_name = "CPU";
static const char* custom_plugin_cfg_msg = NULL;
static const char* custom_ex_library_msg = NULL;
static const char* config_msg = NULL;
static int file_num = 0;
static char** file_paths = NULL;
const char* info = "[ INFO ] ";
const char* warn = "[ WARNING ] ";
/**
* @brief Parse and check command line arguments
* @param int argc - count of args
* @param char *argv[] - array values of args
* @return int - status 1(success) or -1(fail)
*/
int ParseAndCheckCommandLine(int argc, char* argv[]) {
int opt = 0;
int help = 0;
char* string = "hi:m:d:c:l:g:";
printf("%sParsing input parameters\n", info);
while ((opt = getopt(argc, argv, string)) != -1) {
switch (opt) {
case 'h':
showUsage();
help = 1;
break;
case 'i':
img_msg = optarg;
break;
case 'm':
input_model = optarg;
break;
case 'd':
device_name = optarg;
break;
case 'c':
custom_plugin_cfg_msg = optarg;
break;
case 'l':
custom_ex_library_msg = optarg;
break;
case 'g':
config_msg = optarg;
break;
default:
fprintf(stderr, "Unknown argument `%c`. Please use -h option.\n", opt);
return -1;
}
}
if (help)
return -1;
if (input_model == NULL) {
fprintf(stderr, "Model is required but not set. Please set -m option.\n");
return -1;
}
if (img_msg == NULL) {
fprintf(stderr, "Input is required but not set.Please set -i option.\n");
return -1;
}
return 1;
}
/**
* @brief This function checks input args and existence of specified files in a
 * given folder. Updates file_paths and file_num.
* @param arg path to a file to be checked for existence
* @return none.
*/
void readInputFilesArgument(const char* arg) {
struct stat sb;
if (stat(arg, &sb) != 0) {
fprintf(stderr, "%sFile %s cannot be opened!\n", warn, arg);
return;
}
if (S_ISDIR(sb.st_mode)) {
DIR* dp;
dp = opendir(arg);
if (dp == NULL) {
fprintf(stderr, "%sFile %s cannot be opened!\n", warn, arg);
return;
}
struct dirent* ep;
while (NULL != (ep = readdir(dp))) {
const char* fileName = ep->d_name;
if (strcmp(fileName, ".") == 0 || strcmp(fileName, "..") == 0)
continue;
char* file_path = (char*)calloc(strlen(arg) + strlen(ep->d_name) + 2, sizeof(char));
memcpy(file_path, arg, strlen(arg));
memcpy(file_path + strlen(arg), "/", strlen("/"));
memcpy(file_path + strlen(arg) + strlen("/"), ep->d_name, strlen(ep->d_name) + 1);
if (file_num == 0) {
file_paths = (char**)calloc(1, sizeof(char*));
file_paths[0] = file_path;
++file_num;
} else {
char** temp = (char**)realloc(file_paths, sizeof(char*) * (file_num + 1));
if (temp) {
file_paths = temp;
file_paths[file_num++] = file_path;
} else {
int i;
for (i = 0; i < file_num; ++i) {
free(file_paths[i]);
}
free(file_path);
free(file_paths);
file_num = 0;
}
}
}
closedir(dp);
dp = NULL;
} else {
char* file_path = (char*)calloc(strlen(arg) + 1, sizeof(char));
memcpy(file_path, arg, strlen(arg) + 1);
if (file_num == 0) {
file_paths = (char**)calloc(1, sizeof(char*));
}
file_paths[file_num++] = file_path;
}
}
/**
 * @brief This function finds the -i key in the input args; it is necessary to process
 * multiple values for a single key
* @return none.
*/
void parseInputFilesArguments(int argc, char** argv) {
int readArguments = 0, i;
for (i = 0; i < argc; ++i) {
if (strcmp(argv[i], "-i") == 0) {
readArguments = 1;
continue;
}
if (!readArguments) {
continue;
}
if (argv[i][0] == '-') {
break;
}
readInputFilesArgument(argv[i]);
}
if (file_num < MAX_IMAGES) {
printf("%sFiles were added: %d\n", info, file_num);
for (i = 0; i < file_num; ++i) {
printf("%s %s\n", info, file_paths[i]);
}
} else {
printf("%sFiles were added: %d. Too many to display each of them.\n", info, file_num);
}
}
/**
* @brief Convert the contents of configuration file to the ie_config_t struct.
* @param config_file File path.
 * @param comment Comment character; key/value pairs whose key starts with it are skipped.
* @return A pointer to the ie_config_t instance.
*/
ie_config_t* parseConfig(const char* config_file, char comment) {
FILE* file = fopen(config_file, "r");
if (!file) {
fprintf(stderr, "ERROR file `%s` opening failure\n", config_file);
return NULL;
}
ie_config_t* cfg = NULL;
char key[256], value[256];
if (fscanf(file, "%s", key) != EOF && fscanf(file, "%s", value) != EOF) {
char* cfg_name = (char*)calloc(strlen(key) + 1, sizeof(char));
char* cfg_value = (char*)calloc(strlen(value) + 1, sizeof(char));
memcpy(cfg_name, key, strlen(key) + 1);
memcpy(cfg_value, value, strlen(value) + 1);
ie_config_t* cfg_t = (ie_config_t*)calloc(1, sizeof(ie_config_t));
cfg_t->name = cfg_name;
cfg_t->value = cfg_value;
cfg_t->next = NULL;
cfg = cfg_t;
}
if (cfg) {
ie_config_t* cfg_temp = cfg;
while (fscanf(file, "%s", key) != EOF && fscanf(file, "%s", value) != EOF) {
if (strlen(key) == 0 || key[0] == comment) {
continue;
}
char* cfg_name = (char*)calloc(strlen(key) + 1, sizeof(char));
char* cfg_value = (char*)calloc(strlen(value) + 1, sizeof(char));
memcpy(cfg_name, key, strlen(key) + 1);
memcpy(cfg_value, value, strlen(value) + 1);
ie_config_t* cfg_t = (ie_config_t*)calloc(1, sizeof(ie_config_t));
cfg_t->name = cfg_name;
cfg_t->value = cfg_value;
cfg_t->next = NULL;
cfg_temp->next = cfg_t;
cfg_temp = cfg_temp->next;
}
}
fclose(file);
return cfg;
}
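/* Example: a file passed via -g is read as whitespace-separated key/value
 * pairs; a pair whose key starts with the comment character ('#' in main)
 * is skipped. A hypothetical config file (the keys are illustrative):
 *
 *   PERF_COUNT NO
 *   #comment ignored
 *   CPU_THREADS_NUM 4
 */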
/**
* @brief Releases memory occupied by config
* @param config A pointer to the config to free memory.
* @return none
*/
void config_free(ie_config_t* config) {
while (config) {
ie_config_t* temp = config;
if (config->name) {
free((char*)config->name);
config->name = NULL;
}
if (config->value) {
free((char*)config->value);
config->value = NULL;
}
config = config->next; /* advance before freeing; NULL on the last node ends the loop */
free(temp);
temp = NULL;
}
}
/**
 * @brief Convert a non-negative integer to its decimal string form
* @param str A pointer to the converted string.
* @param num The number to convert.
* @return none.
*/
void int2str(char* str, int num) {
int i = 0, j;
if (num == 0) {
str[0] = '0';
str[1] = '\0';
return;
}
while (num != 0) {
str[i++] = num % 10 + '0';
num = num / 10;
}
str[i] = '\0';
--i;
for (j = 0; j < i; ++j, --i) {
char temp = str[j];
str[j] = str[i];
str[i] = temp;
}
}
int main(int argc, char** argv) {
/** This sample covers a certain topology and cannot be generalized to any
 * object detection network **/
// ------------------------------ Get Inference Engine API version
// ---------------------------------
ie_version_t version = ie_c_api_version();
printf("%sInferenceEngine: \n", info);
printf("%s\n", version.api_version);
ie_version_free(&version);
// ------------------------------ Parsing and validation of input args
// ---------------------------------
char** argv_temp = (char**)calloc(argc, sizeof(char*));
if (!argv_temp) {
return EXIT_FAILURE;
}
int i, j;
for (i = 0; i < argc; ++i) {
argv_temp[i] = argv[i];
}
char *input_weight = NULL, *imageInputName = NULL, *imInfoInputName = NULL, *output_name = NULL;
ie_core_t* core = NULL;
ie_network_t* network = NULL;
ie_executable_network_t* exe_network = NULL;
ie_infer_request_t* infer_request = NULL;
ie_blob_t *imageInput = NULL, *output_blob = NULL;
if (ParseAndCheckCommandLine(argc, argv) < 0) {
free(argv_temp);
return EXIT_FAILURE;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Read input
// -----------------------------------------------------------
/** This file_paths stores paths to the processed images **/
parseInputFilesArguments(argc, argv_temp);
if (!file_num) {
fprintf(stderr, "No suitable images were found\n");
free(argv_temp);
return EXIT_FAILURE;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 1. Initialize inference engine core
// -------------------------------------
printf("%sLoading Inference Engine\n", info);
IEStatusCode status = ie_core_create("", &core);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_create status %d, line %d\n", status, __LINE__);
goto err;
}
// ------------------------------ Get Available Devices
// ------------------------------------------------------
ie_core_versions_t ver;
printf("%sDevice info: \n", info);
status = ie_core_get_versions(core, device_name, &ver);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_get_versions status %d, line %d\n", status, __LINE__);
goto err;
}
for (i = 0; i < ver.num_vers; ++i) {
printf(" %s\n", ver.versions[i].device_name);
printf(" %s version ......... %zu.%zu\n",
ver.versions[i].description,
ver.versions[i].major,
ver.versions[i].minor);
printf(" Build ......... %s\n", ver.versions[i].build_number);
}
ie_core_versions_free(&ver);
if (custom_ex_library_msg) {
// Custom CPU extension is loaded as a shared library and passed as a
// pointer to base extension
status = ie_core_add_extension(core, custom_ex_library_msg, "CPU");
if (status != OK) {
fprintf(stderr, "ERROR ie_core_add_extension status %d, line %d\n", status, __LINE__);
goto err;
}
printf("%sCustom extension loaded: %s\n", info, custom_ex_library_msg);
}
if (custom_plugin_cfg_msg &&
(strcmp(device_name, "GPU") == 0 || strcmp(device_name, "MYRIAD") == 0 || strcmp(device_name, "HDDL") == 0)) {
// Config for device plugin custom extension is loaded from an .xml
// description
ie_config_t cfg = {"CONFIG_FILE", custom_plugin_cfg_msg, NULL};
status = ie_core_set_config(core, &cfg, device_name);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_set_config status %d, line %d\n", status, __LINE__);
goto err;
}
printf("%sConfig for device plugin custom extension loaded: %s\n", info, custom_plugin_cfg_msg);
}
// -----------------------------------------------------------------------------------------------------
// Step 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin
// files) or ONNX (.onnx file) format
printf("%sLoading network:\n", info);
printf("\t%s\n", input_model);
status = ie_core_read_network(core, input_model, NULL, &network);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_read_network status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 3. Configure input & output
// ---------------------------------------------
// --------------------------- Prepare input blobs
// -----------------------------------------------------
printf("%sPreparing input blobs\n", info);
/** SSD network has one input and one output **/
size_t input_num = 0;
status = ie_network_get_inputs_number(network, &input_num);
if (status != OK || (input_num != 1 && input_num != 2)) {
fprintf(stderr, "Sample supports topologies only with 1 or 2 inputs\n");
goto err;
}
/**
* Some networks have SSD-like output format (ending with DetectionOutput
* layer), but having 2 inputs as Faster-RCNN: one for image and one for
* "image info".
*
 * Although object_detection_sample_ssd's main task is to support clean SSD,
* it could score the networks with two inputs as well. For such networks
* imInfoInputName will contain the "second" input name.
*/
size_t input_width = 0, input_height = 0;
/** Stores input image **/
/** Iterating over all input blobs **/
for (i = 0; i < input_num; ++i) {
char* name = NULL;
status |= ie_network_get_input_name(network, i, &name);
dimensions_t input_dim;
status |= ie_network_get_input_dims(network, name, &input_dim);
if (status != OK)
goto err;
/** Working with first input tensor that stores image **/
if (input_dim.ranks == 4) {
imageInputName = name;
input_height = input_dim.dims[2];
input_width = input_dim.dims[3];
/** Creating first input blob **/
status = ie_network_set_input_precision(network, name, U8);
if (status != OK)
goto err;
} else if (input_dim.ranks == 2) {
imInfoInputName = name;
status = ie_network_set_input_precision(network, name, FP32);
if (status != OK || (input_dim.dims[1] != 3 && input_dim.dims[1] != 6)) {
fprintf(stderr, "Invalid input info. Should be 3 or 6 values length\n");
goto err;
}
}
}
if (imageInputName == NULL) {
status = ie_network_get_input_name(network, 0, &imageInputName);
if (status != OK)
goto err;
dimensions_t input_dim;
status = ie_network_get_input_dims(network, imageInputName, &input_dim);
if (status != OK)
goto err;
input_height = input_dim.dims[2];
input_width = input_dim.dims[3];
}
/** Collect images data **/
c_mat_t* originalImages = (c_mat_t*)calloc(file_num, sizeof(c_mat_t));
c_mat_t* images = (c_mat_t*)calloc(file_num, sizeof(c_mat_t));
if (!originalImages || !images)
goto err;
int image_num = 0;
for (i = 0; i < file_num; ++i) {
c_mat_t img = {NULL, 0, 0, 0, 0, 0};
if (image_read(file_paths[i], &img) == -1) {
fprintf(stderr, "%sImage %s cannot be read!\n", warn, file_paths[i]);
continue;
}
/** Store image data **/
c_mat_t resized_img = {NULL, 0, 0, 0, 0, 0};
if ((input_width == img.mat_width) && (input_height == img.mat_height)) {
resized_img.mat_data_size = img.mat_data_size;
resized_img.mat_channels = img.mat_channels;
resized_img.mat_width = img.mat_width;
resized_img.mat_height = img.mat_height;
resized_img.mat_type = img.mat_type;
resized_img.mat_data = calloc(1, resized_img.mat_data_size);
if (resized_img.mat_data == NULL) {
image_free(&img);
continue;
}
for (j = 0; j < resized_img.mat_data_size; ++j)
resized_img.mat_data[j] = img.mat_data[j];
} else {
printf("%sImage is resized from (%d, %d) to (%zu, %zu)\n",
warn,
img.mat_width,
img.mat_height,
input_width,
input_height);
if (image_resize(&img, &resized_img, (int)input_width, (int)input_height) == -1) {
printf("%sImage %s cannot be resized!\n", warn, file_paths[i]);
image_free(&img);
continue;
}
}
originalImages[image_num] = img;
images[image_num] = resized_img;
++image_num;
}
if (!image_num) {
fprintf(stderr, "Valid input images were not found!\n");
free(originalImages);
free(images);
goto err;
}
input_shapes_t shapes;
status = ie_network_get_input_shapes(network, &shapes);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_get_input_shapes status %d, line %d\n", status, __LINE__);
goto err;
}
/** Using ie_network_reshape() to set the batch size equal to the number of
* input images **/
/** For input with NCHW/NHWC layout the first dimension N is the batch size
* **/
shapes.shapes[0].shape.dims[0] = image_num;
status = ie_network_reshape(network, shapes);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_reshape status %d, line %d\n", status, __LINE__);
goto err;
}
ie_network_input_shapes_free(&shapes);
input_shapes_t shapes2;
status = ie_network_get_input_shapes(network, &shapes2);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_get_input_shapes status %d, line %d\n", status, __LINE__);
goto err;
}
size_t batchSize = shapes2.shapes[0].shape.dims[0];
ie_network_input_shapes_free(&shapes2);
printf("%sBatch size is %zu\n", info, batchSize);
// --------------------------- Prepare output blobs
// ----------------------------------------------------
printf("%sPreparing output blobs\n", info);
size_t output_num = 0;
status = ie_network_get_outputs_number(network, &output_num);
if (status != OK || !output_num) {
fprintf(stderr, "Can't find a DetectionOutput layer in the topology\n");
goto err;
}
status = ie_network_get_output_name(network, output_num - 1, &output_name);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_get_output_name status %d, line %d\n", status, __LINE__);
goto err;
}
dimensions_t output_dim;
status = ie_network_get_output_dims(network, output_name, &output_dim);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_get_output_dims status %d, line %d\n", status, __LINE__);
goto err;
}
if (output_dim.ranks != 4) {
fprintf(stderr, "Incorrect output dimensions for SSD model\n");
goto err;
}
const int maxProposalCount = (int)output_dim.dims[2];
const int objectSize = (int)output_dim.dims[3];
if (objectSize != 7) {
printf("Output item should have 7 as a last dimension\n");
goto err;
}
/** Set the precision of output data provided by the user, should be called
* before load of the network to the device **/
status = ie_network_set_output_precision(network, output_name, FP32);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_set_output_precision status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 4. Loading model to the device
// ------------------------------------------
printf("%sLoading model to the device\n", info);
if (config_msg) {
ie_config_t* config = parseConfig(config_msg, '#');
status = ie_core_load_network(core, network, device_name, config, &exe_network);
config_free(config);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_load_network status %d, line %d\n", status, __LINE__);
goto err;
}
} else {
ie_config_t cfg = {NULL, NULL, NULL};
status = ie_core_load_network(core, network, device_name, &cfg, &exe_network);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_load_network status %d, line %d\n", status, __LINE__);
goto err;
}
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 5. Create infer request
// -------------------------------------------------
printf("%sCreate infer request\n", info);
status = ie_exec_network_create_infer_request(exe_network, &infer_request);
if (status != OK) {
fprintf(stderr, "ERROR ie_exec_network_create_infer_request status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 6. Prepare input
// --------------------------------------------------------
/** Creating input blob **/
status = ie_infer_request_get_blob(infer_request, imageInputName, &imageInput);
if (status != OK) {
fprintf(stderr, "ERROR ie_infer_request_get_blob status %d, line %d\n", status, __LINE__);
goto err;
}
/** Filling input tensor with images. First b channel, then g and r channels
* **/
dimensions_t input_tensor_dims;
status = ie_blob_get_dims(imageInput, &input_tensor_dims);
if (status != OK) {
fprintf(stderr, "ERROR ie_blob_get_dims status %d, line %d\n", status, __LINE__);
goto err;
}
size_t num_channels = input_tensor_dims.dims[1];
size_t image_size = input_tensor_dims.dims[3] * input_tensor_dims.dims[2];
ie_blob_buffer_t blob_buffer;
status = ie_blob_get_buffer(imageInput, &blob_buffer);
if (status != OK) {
fprintf(stderr, "ERROR ie_blob_get_buffer status %d, line %d\n", status, __LINE__);
goto err;
}
unsigned char* data = (unsigned char*)(blob_buffer.buffer);
/** Iterate over all input images **/
int image_id, pid, ch, k;
for (image_id = 0; image_id < batchSize; ++image_id) {
/** Iterate over all pixel in image (b,g,r) **/
for (pid = 0; pid < image_size; ++pid) {
/** Iterate over all channels **/
for (ch = 0; ch < num_channels; ++ch) {
/** [images stride + channels stride + pixel id ] all in bytes
* **/
data[image_id * image_size * num_channels + ch * image_size + pid] =
images[image_id].mat_data[pid * num_channels + ch];
}
}
image_free(&images[image_id]);
}
free(images);
ie_blob_free(&imageInput);
if (imInfoInputName != NULL) {
ie_blob_t* input2 = NULL;
status = ie_infer_request_get_blob(infer_request, imInfoInputName, &input2);
dimensions_t imInfoDim;
status |= ie_blob_get_dims(input2, &imInfoDim);
// Fill input tensor with values
ie_blob_buffer_t info_blob_buffer;
status |= ie_blob_get_buffer(input2, &info_blob_buffer);
if (status != OK) {
fprintf(stderr, "ERROR ie_blob_get_buffer status %d, line %d\n", status, __LINE__);
ie_blob_free(&input2);
goto err;
}
float* p = (float*)(info_blob_buffer.buffer);
for (image_id = 0; image_id < batchSize; ++image_id) {
p[image_id * imInfoDim.dims[1] + 0] = (float)input_height;
p[image_id * imInfoDim.dims[1] + 1] = (float)input_width;
for (k = 2; k < imInfoDim.dims[1]; k++) {
p[image_id * imInfoDim.dims[1] + k] = 1.0f; // all scale factors are set to 1.0
}
}
ie_blob_free(&input2);
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 7. Do inference
// --------------------------------------------------------
printf("%sStart inference\n", info);
status = ie_infer_request_infer_async(infer_request);
status |= ie_infer_request_wait(infer_request, -1);
if (status != OK) {
fprintf(stderr, "ERROR ie_infer_request_infer_async status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 8. Process output
// ------------------------------------------------------
printf("%sProcessing output blobs\n", info);
status = ie_infer_request_get_blob(infer_request, output_name, &output_blob);
if (status != OK) {
fprintf(stderr, "ERROR ie_infer_request_get_blob status %d, line %d\n", status, __LINE__);
goto err;
}
ie_blob_buffer_t output_blob_buffer;
status = ie_blob_get_cbuffer(output_blob, &output_blob_buffer);
if (status != OK) {
fprintf(stderr, "ERROR ie_blob_get_cbuffer status %d, line %d\n", status, __LINE__);
goto err;
}
const float* detection = (float*)(output_blob_buffer.cbuffer);
int** classes = (int**)calloc(image_num, sizeof(int*));
rectangle_t** boxes = (rectangle_t**)calloc(image_num, sizeof(rectangle_t*));
int* object_num = (int*)calloc(image_num, sizeof(int));
for (i = 0; i < image_num; ++i) {
classes[i] = (int*)calloc(maxProposalCount, sizeof(int));
boxes[i] = (rectangle_t*)calloc(maxProposalCount, sizeof(rectangle_t));
object_num[i] = 0;
}
/* Each detection has image_id that denotes processed image */
int curProposal;
for (curProposal = 0; curProposal < maxProposalCount; curProposal++) {
image_id = (int)(detection[curProposal * objectSize + 0]);
if (image_id < 0) {
break;
}
float confidence = detection[curProposal * objectSize + 2];
int label = (int)(detection[curProposal * objectSize + 1]);
int xmin = (int)(detection[curProposal * objectSize + 3] * originalImages[image_id].mat_width);
int ymin = (int)(detection[curProposal * objectSize + 4] * originalImages[image_id].mat_height);
int xmax = (int)(detection[curProposal * objectSize + 5] * originalImages[image_id].mat_width);
int ymax = (int)(detection[curProposal * objectSize + 6] * originalImages[image_id].mat_height);
printf("[%d, %d] element, prob = %f (%d, %d)-(%d, %d) batch id : %d",
curProposal,
label,
confidence,
xmin,
ymin,
xmax,
ymax,
image_id);
if (confidence > 0.5) {
/** Drawing only objects with >50% probability **/
classes[image_id][object_num[image_id]] = label;
boxes[image_id][object_num[image_id]].x_min = xmin;
boxes[image_id][object_num[image_id]].y_min = ymin;
boxes[image_id][object_num[image_id]].rect_width = xmax - xmin;
boxes[image_id][object_num[image_id]].rect_height = ymax - ymin;
printf(" WILL BE PRINTED!");
++object_num[image_id];
}
printf("\n");
}
/** Adds rectangles to the image and save **/
int batch_id;
for (batch_id = 0; batch_id < batchSize; ++batch_id) {
if (object_num[batch_id] > 0) {
image_add_rectangles(&originalImages[batch_id],
boxes[batch_id],
classes[batch_id],
object_num[batch_id],
2);
}
const char* out = "out_";
char str_num[16] = {0};
int2str(str_num, batch_id);
char* img_path = (char*)calloc(strlen(out) + strlen(str_num) + strlen(".bmp") + 1, sizeof(char));
memcpy(img_path, out, strlen(out));
memcpy(img_path + strlen(out), str_num, strlen(str_num));
memcpy(img_path + strlen(out) + strlen(str_num), ".bmp", strlen(".bmp") + 1);
image_save(img_path, &originalImages[batch_id]);
printf("%sImage %s created!\n", info, img_path);
free(img_path);
image_free(&originalImages[batch_id]);
}
free(originalImages);
// -----------------------------------------------------------------------------------------------------
printf("%sExecution successful\n", info);
printf("\nThis sample is an API example,"
" for any performance measurements please use the dedicated benchmark_"
"app tool\n");
for (i = 0; i < image_num; ++i) {
free(classes[i]);
free(boxes[i]);
}
free(classes);
free(boxes);
free(object_num);
ie_blob_free(&output_blob);
ie_infer_request_free(&infer_request);
ie_exec_network_free(&exe_network);
ie_network_free(&network);
ie_core_free(&core);
ie_network_name_free(&imageInputName);
ie_network_name_free(&imInfoInputName);
ie_network_name_free(&output_name);
free(input_weight);
free(argv_temp);
return EXIT_SUCCESS;
err:
free(argv_temp);
if (input_weight)
free(input_weight);
if (core)
ie_core_free(&core);
if (network)
ie_network_free(&network);
if (imageInputName)
ie_network_name_free(&imageInputName);
if (imInfoInputName)
ie_network_name_free(&imInfoInputName);
if (output_name)
ie_network_name_free(&output_name);
if (exe_network)
ie_exec_network_free(&exe_network);
if (imageInput)
ie_blob_free(&imageInput);
if (output_blob)
ie_blob_free(&output_blob);
return EXIT_FAILURE;
}

View File

@ -1,114 +0,0 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <stdlib.h>
/// @brief message for help argument
static const char* help_message = "Print a usage message.";
/// @brief message for model argument
static const char* model_message = "Required. Path to an .xml file with a trained model.";
/// @brief message for images argument
static const char* image_message = "Required. Path to one or more images or folder with images.";
/// @brief message for assigning cnn calculation to device
static const char* target_device_message =
"Optional. Specify the target device to infer. "
"Default value is CPU. Use \"-d HETERO:<comma-separated_devices_list>\" format to specify "
"HETERO plugin. "
"Sample will look for a suitable plugin for device specified.";
/// @brief message for plugin custom kernels desc
static const char* custom_plugin_config_message =
"Required for GPU, MYRIAD, HDDL custom kernels. "
"Absolute path to the .xml config file with the kernels descriptions.";
/// @brief message for user extension library argument
static const char* custom_ex_library_message = "Required for CPU plugin custom layers. "
"Absolute path to a shared library with the kernels implementations.";
/// @brief message for config argument
static const char* config_message = "Path to the configuration file. Default value: \"config\".";
/**
 * \brief This function shows a help message
*/
static void showUsage() {
printf("\nobject_detection_sample_ssd_c [OPTION]\n");
printf("Options:\n\n");
printf(" -h %s\n", help_message);
printf(" -m \"<path>\" %s\n", model_message);
printf(" -i \"<path>\" %s\n", image_message);
printf(" -l \"<absolute_path>\" %s\n", custom_ex_library_message);
printf(" Or\n");
printf(" -c \"<absolute_path>\" %s\n", custom_plugin_config_message);
printf(" -d \"<device>\" %s\n", target_device_message);
printf(" -g %s\n", config_message);
}
int opterr = 1;
int optind = 1;
int optopt;
char* optarg;
#define ERR(s, c) \
if (opterr) { \
fputs(argv[0], stderr); \
fputs(s, stderr); \
fputc('\'', stderr); \
fputc(c, stderr); \
fputs("\'\n", stderr); \
}
/**
* @brief Check command line arguments with available options
* @param int argc - count of args
* @param char *argv[] - array values of args
* @param char *opts - array of options
* @return option name or -1(fail)
*/
static int getopt(int argc, char** argv, char* opts) {
static int sp = 1;
register int c = 0;
register char* cp = NULL;
if (sp == 1) {
if (optind >= argc || argv[optind][0] != '-' || argv[optind][1] == '\0')
return -1;
else if (strcmp(argv[optind], "--") == 0) {
optind++;
return -1;
}
optopt = c = argv[optind][sp];
if (c == ':' || (cp = strchr(opts, c)) == 0) {
ERR(": unrecognized option -- ", c);
showUsage();
if (argv[optind][++sp] == '\0') {
optind++;
sp = 1;
}
return ('?');
}
if (*++cp == ':') {
if (argv[optind][sp + 1] != '\0')
optarg = &argv[optind++][sp + 1];
else if (++optind >= argc) {
ERR(": option requires an argument -- ", c);
sp = 1;
return ('?');
} else
optarg = argv[optind++];
sp = 1;
} else {
if (argv[optind][++sp] == '\0') {
sp = 1;
optind++;
}
optarg = NULL;
}
}
return (c);
}

View File

@ -20,7 +20,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
| Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
| Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C](../../ie_bridges/c/samples/object_detection_sample_ssd/README.md), [Python](../../../samples/python/hello_reshape_ssd/README.md) |
| Other language realization | [Python](../../../samples/python/hello_reshape_ssd/README.md) |
## How It Works

View File

@ -1,8 +0,0 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
ie_add_sample(NAME object_detection_sample_ssd
SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/main.cpp"
HEADERS "${CMAKE_CURRENT_SOURCE_DIR}/object_detection_sample_ssd.h"
DEPENDENCIES format_reader ie_samples_utils)

View File

@ -1,150 +0,0 @@
# Object Detection SSD C++ Sample {#openvino_inference_engine_samples_object_detection_sample_ssd_README}
This sample demonstrates how to execute an inference of object detection networks like SSD-VGG using the Synchronous Inference Request API.
The Object Detection SSD C++ sample application demonstrates how to use the following Inference Engine C++ API in applications:
| Feature | API | Description |
|:--- |:--- |:---
|Inference Engine Version| `InferenceEngine::GetInferenceEngineVersion` | Get Inference Engine API version
|Available Devices|`InferenceEngine::Core::GetAvailableDevices`| Get version information of the devices for inference
|Custom Extension Kernels|`InferenceEngine::Core::AddExtension`, `InferenceEngine::Core::SetConfig`| Load extension library and config to the device
| Network Operations | `InferenceEngine::CNNNetwork::getBatchSize`, `InferenceEngine::CNNNetwork::getFunction` | Manage the network and operate with its batch size.
|nGraph Functions|`ngraph::Function::get_ops`, `ngraph::Node::get_friendly_name`, `ngraph::Node::get_type_info`| Go through the network's nGraph function
Basic Inference Engine API is covered by [Hello Classification C++ sample](../hello_classification/README.md).
| Options | Values |
|:--- |:---
| Validated Models | [person-detection-retail-0013](@ref omz_models_model_person_detection_retail_0013)
| Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
| Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C](../../../samples/c/object_detection_sample_ssd/README.md) |
## How It Works
Upon start-up, the sample application reads command-line parameters and loads the specified network and image into the Inference
Engine plugin. Then, the sample creates a synchronous inference request object. When inference is done, the application creates an output image and writes output data to the standard output stream.
You can see an explicit description of
each sample step in the [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
## Building
To build the sample, use the instructions available in the [Build the Sample Applications](../../../docs/IE_DG/Samples_Overview.md) section of the Inference Engine Samples guide.
## Running
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
Running the application with the `-h` option yields the following usage message:
```
./object_detection_sample_ssd -h
InferenceEngine:
API version ............ <version>
Build .................. <build>
Description ....... API
object_detection_sample_ssd [OPTION]
Options:
-h Print a usage message.
-m "<path>" Required. Path to an .xml file with a trained model.
-i "<path>" Required. Path to an image.
-l "<absolute_path>" Required for CPU custom layers. Absolute path to a shared library with the kernels implementations.
Or
-c "<absolute_path>" Required for GPU, MYRIAD, HDDL custom kernels. Absolute path to the .xml config file with the kernels descriptions.
-d "<device>" Optional. Specify the target device to infer on (the list of available devices is shown below). Default value is CPU. Use "-d HETERO:<comma_separated_devices_list>" format to specify HETERO plugin. Sample will look for a suitable plugin for device specified.
Available target devices: <devices>
```
Running the application with an empty list of options yields the usage message given above and an error message.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (\*.onnx) that do not require preprocessing.
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```
python <path_to_omz_tools>/downloader.py --name person-detection-retail-0013
```
2. The `person-detection-retail-0013` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script:
```
python <path_to_omz_tools>/converter.py --name <model_name>
```
3. For example, to run inference with the OpenVINO&trade; toolkit person detection SSD models on CPU or GPU, run one of the following commands:
- with one image and [person-detection-retail-0013](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) model
```
<path_to_sample>/object_detection_sample_ssd -m <path_to_model>/person-detection-retail-0013.xml -i <path_to_image>/person_detection.png -d CPU
```
- with one image and [person-detection-retail-0002](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0002_description_person_detection_retail_0002.html) model
```
<path_to_sample>/object_detection_sample_ssd -m <path_to_model>/person-detection-retail-0002.xml -i <path_to_image>/person_detection.png -d GPU
```
## Sample Output
The application outputs an image (`out_0.bmp`) with detected objects enclosed in rectangles. It outputs the list of classes
of the detected objects along with the respective confidence values and the coordinates of the
rectangles to the standard output stream.
```
object_detection_sample_ssd -m person-detection-retail-0013\FP16\person-detection-retail-0013.xml -i person_detection.png
[ INFO ] InferenceEngine:
API version ............ <version>
Build .................. <build>
Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] person_detection.png
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
CPU
MKLDNNPlugin version ......... <version>
Build ........... <build>
[ INFO ] Loading network files:
person-detection-retail-0013\FP16\person-detection-retail-0013.xml
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ WARNING ] Image is resized from (1699, 960) to (544, 320)
[ INFO ] Batch size is 1
[ INFO ] Start inference
[ INFO ] Processing output blobs
[0,1] element, prob = 0.99909 (370,201)-(634,762) batch id : 0 WILL BE PRINTED!
[1,1] element, prob = 0.997386 (836,192)-(999,663) batch id : 0 WILL BE PRINTED!
[2,1] element, prob = 0.314753 (192,2)-(265,172) batch id : 0
...
[ INFO ] Image out_0.bmp created!
[ INFO ] Execution successful
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
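Each line of the detection report corresponds to one row of the SSD `DetectionOutput` blob. As a minimal sketch, a hypothetical helper like the one below (the function name and parameters are illustrative, not part of the sample) shows how such a row decodes:
```
#include <cstdio>

// Decode one SSD detection row of the form
// [image_id, label, confidence, xmin, ymin, xmax, ymax],
// where the box coordinates are normalized to [0, 1].
void printDetection(const float* detection, int width, int height) {
    const int image_id = static_cast<int>(detection[0]);  // a negative id ends the list
    if (image_id < 0)
        return;
    const int label = static_cast<int>(detection[1]);
    const float confidence = detection[2];
    const int xmin = static_cast<int>(detection[3] * width);
    const int ymin = static_cast<int>(detection[4] * height);
    const int xmax = static_cast<int>(detection[5] * width);
    const int ymax = static_cast<int>(detection[6] * height);
    std::printf("[%d] prob = %f (%d,%d)-(%d,%d)\n", label, confidence, xmin, ymin, xmax, ymax);
}
```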
## See Also
- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

View File

@ -1,417 +0,0 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <format_reader_ptr.h>
#include <gflags/gflags.h>
#include <algorithm>
#include <inference_engine.hpp>
#include <iostream>
#include <map>
#include <memory>
#include <ngraph/ngraph.hpp>
#include <samples/args_helper.hpp>
#include <samples/common.hpp>
#include <samples/slog.hpp>
#include <string>
#include <vector>
#include "object_detection_sample_ssd.h"
using namespace InferenceEngine;
/**
* @brief Checks input args
* @param argc number of args
* @param argv list of input arguments
* @return bool status true(Success) or false(Fail)
*/
bool ParseAndCheckCommandLine(int argc, char* argv[]) {
gflags::ParseCommandLineNonHelpFlags(&argc, &argv, true);
if (FLAGS_h) {
showUsage();
showAvailableDevices();
return false;
}
slog::info << "Parsing input parameters" << slog::endl;
if (FLAGS_m.empty()) {
showUsage();
throw std::logic_error("Model is required but not set. Please set -m option.");
}
if (FLAGS_i.empty()) {
showUsage();
throw std::logic_error("Input is required but not set. Please set -i option.");
}
return true;
}
/**
* \brief The entry point for the Inference Engine object_detection sample
* application \file object_detection_sample_ssd/main.cpp \example
* object_detection_sample_ssd/main.cpp
*/
int main(int argc, char* argv[]) {
try {
/** This sample covers a certain topology and cannot be generalized to
* arbitrary object detection networks **/
// ------------------------------ Get Inference Engine version
// ------------------------------------------------------
slog::info << "InferenceEngine: " << GetInferenceEngineVersion() << "\n";
// --------------------------- Parsing and validation of input arguments
// ---------------------------------
if (!ParseAndCheckCommandLine(argc, argv)) {
return 0;
}
// -----------------------------------------------------------------------------------------------------
// ------------------------------ Read input
// -----------------------------------------------------------
/** This vector stores paths to the processed images **/
std::vector<std::string> images;
parseInputFilesArguments(images);
if (images.empty())
throw std::logic_error("No suitable images were found");
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 1. Initialize inference engine core
// -------------------------------------
slog::info << "Loading Inference Engine" << slog::endl;
Core ie;
// ------------------------------ Get Available Devices
// ------------------------------------------------------
slog::info << "Device info: " << slog::endl;
slog::info << ie.GetVersions(FLAGS_d) << slog::endl;
if (!FLAGS_l.empty()) {
IExtensionPtr extension_ptr = std::make_shared<Extension>(FLAGS_l);
ie.AddExtension(extension_ptr);
slog::info << "Extension loaded: " << FLAGS_l << slog::endl;
}
if (!FLAGS_c.empty() && (FLAGS_d == "GPU" || FLAGS_d == "MYRIAD" || FLAGS_d == "HDDL")) {
// Config for device plugin custom extension is loaded from an .xml
// description
ie.SetConfig({{PluginConfigParams::KEY_CONFIG_FILE, FLAGS_c}}, FLAGS_d);
slog::info << "Config for " << FLAGS_d << " device plugin custom extension loaded: " << FLAGS_c
<< slog::endl;
}
// -----------------------------------------------------------------------------------------------------
// Step 2. Read a model in OpenVINO Intermediate Representation (.xml and
// .bin files) or ONNX (.onnx file) format
slog::info << "Loading network files:" << slog::endl << FLAGS_m << slog::endl;
/** Read network model **/
CNNNetwork network = ie.ReadNetwork(FLAGS_m);
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 3. Configure input & output
// ---------------------------------------------
// -------------------------------- Prepare input blobs
// --------------------------------------------------
slog::info << "Preparing input blobs" << slog::endl;
/** Taking information about all topology inputs **/
InputsDataMap inputsInfo(network.getInputsInfo());
/**
* Some networks have SSD-like output format (ending with DetectionOutput
* layer), but having 2 inputs as Faster-RCNN: one for image and one for
* "image info".
*
* Although object_detection_sample_ssd's main task is to support clean SSD,
* it could score the networks with two inputs as well. For such networks
* imInfoInputName will contain the "second" input name.
*/
if (inputsInfo.size() != 1 && inputsInfo.size() != 2)
throw std::logic_error("Sample supports topologies only with 1 or 2 inputs");
std::string imageInputName, imInfoInputName;
InputInfo::Ptr inputInfo = nullptr;
SizeVector inputImageDims;
/** Stores input image **/
/** Iterating over all input blobs **/
for (auto& item : inputsInfo) {
/** Working with first input tensor that stores image **/
if (item.second->getInputData()->getTensorDesc().getDims().size() == 4) {
imageInputName = item.first;
inputInfo = item.second;
slog::info << "Batch size is " << std::to_string(network.getBatchSize()) << slog::endl;
/** Creating first input blob **/
Precision inputPrecision = Precision::U8;
item.second->setPrecision(inputPrecision);
} else if (item.second->getInputData()->getTensorDesc().getDims().size() == 2) {
imInfoInputName = item.first;
Precision inputPrecision = Precision::FP32;
item.second->setPrecision(inputPrecision);
if ((item.second->getTensorDesc().getDims()[1] != 3 &&
item.second->getTensorDesc().getDims()[1] != 6)) {
throw std::logic_error("Invalid input info. Should be 3 or 6 values length");
}
}
}
if (inputInfo == nullptr) {
inputInfo = inputsInfo.begin()->second;
}
// --------------------------- Prepare output blobs
// -------------------------------------------------
slog::info << "Preparing output blobs" << slog::endl;
OutputsDataMap outputsInfo(network.getOutputsInfo());
if (outputsInfo.empty())
throw std::logic_error("The network should have at least one output");
DataPtr outputInfo = outputsInfo.begin()->second;
std::string outputName = outputInfo->getName();
// SSD has an additional post-processing DetectionOutput layer
// that simplifies output filtering, try to find it.
if (auto ngraphFunction = network.getFunction()) {
for (const auto& out : outputsInfo) {
for (const auto& op : ngraphFunction->get_ops()) {
if (op->get_type_info() == ngraph::op::DetectionOutput::get_type_info_static() &&
op->get_friendly_name() == out.second->getName()) {
outputName = out.first;
outputInfo = out.second;
break;
}
}
}
}
if (outputInfo == nullptr) {
throw std::logic_error("Can't find a DetectionOutput layer in the topology");
}
const SizeVector outputDims = outputInfo->getTensorDesc().getDims();
if (outputDims.size() != 4) {
throw std::logic_error("Incorrect output dimensions for SSD model");
}
const int maxProposalCount = outputDims[2];
const int objectSize = outputDims[3];
if (objectSize != 7) {
throw std::logic_error("Output item should have 7 as a last dimension");
}
/** Set the precision of output data provided by the user, should be called
* before load of the network to the device **/
outputInfo->setPrecision(Precision::FP32);
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 4. Loading model to the device
// ------------------------------------------
slog::info << "Loading model to the device" << slog::endl;
ExecutableNetwork executable_network = ie.LoadNetwork(network, FLAGS_d, parseConfig(FLAGS_config));
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 5. Create infer request
// -------------------------------------------------
slog::info << "Create infer request" << slog::endl;
InferRequest infer_request = executable_network.CreateInferRequest();
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 6. Prepare input
// --------------------------------------------------------
/** Collect images data ptrs **/
std::vector<std::shared_ptr<unsigned char>> imagesData, originalImagesData;
std::vector<size_t> imageWidths, imageHeights;
for (auto& i : images) {
FormatReader::ReaderPtr reader(i.c_str());
if (reader.get() == nullptr) {
slog::warn << "Image " + i + " cannot be read!" << slog::endl;
continue;
}
/** Store image data **/
std::shared_ptr<unsigned char> originalData(reader->getData());
std::shared_ptr<unsigned char> data(
reader->getData(inputInfo->getTensorDesc().getDims()[3], inputInfo->getTensorDesc().getDims()[2]));
if (data.get() != nullptr) {
originalImagesData.push_back(originalData);
imagesData.push_back(data);
imageWidths.push_back(reader->width());
imageHeights.push_back(reader->height());
}
}
if (imagesData.empty())
throw std::logic_error("Valid input images were not found!");
size_t batchSize = network.getBatchSize();
slog::info << "Batch size is " << std::to_string(batchSize) << slog::endl;
if (batchSize != imagesData.size()) {
slog::warn << "Number of images " + std::to_string(imagesData.size()) + " doesn't match batch size " +
std::to_string(batchSize)
<< slog::endl;
batchSize = std::min(batchSize, imagesData.size());
slog::warn << "Number of images to be processed is " << std::to_string(batchSize) << slog::endl;
}
/** Creating input blob **/
Blob::Ptr imageInput = infer_request.GetBlob(imageInputName);
/** Filling input tensor with images. First b channel, then g and r channels
* **/
MemoryBlob::Ptr mimage = as<MemoryBlob>(imageInput);
if (!mimage) {
slog::err << "We expect image blob to be inherited from MemoryBlob, but "
"by fact we were not able "
"to cast imageInput to MemoryBlob"
<< slog::endl;
return 1;
}
// The locked memory holder must stay alive for as long as its buffer
// is accessed
auto minputHolder = mimage->wmap();
size_t num_channels = mimage->getTensorDesc().getDims()[1];
size_t image_size = mimage->getTensorDesc().getDims()[3] * mimage->getTensorDesc().getDims()[2];
unsigned char* data = minputHolder.as<unsigned char*>();
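/** Convert the interleaved (HWC) image data returned by the reader into the
* planar (NCHW) layout the network expects, one channel plane at a time **/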
/** Iterate over all input images limited by batch size **/
for (size_t image_id = 0; image_id < std::min(imagesData.size(), batchSize); ++image_id) {
/** Iterate over all pixel in image (b,g,r) **/
for (size_t pid = 0; pid < image_size; pid++) {
/** Iterate over all channels **/
for (size_t ch = 0; ch < num_channels; ++ch) {
/** [images stride + channels stride + pixel id ] all in
* bytes **/
data[image_id * image_size * num_channels + ch * image_size + pid] =
imagesData.at(image_id).get()[pid * num_channels + ch];
}
}
}
if (imInfoInputName != "") {
Blob::Ptr input2 = infer_request.GetBlob(imInfoInputName);
auto imInfoDim = inputsInfo.find(imInfoInputName)->second->getTensorDesc().getDims()[1];
/** Fill input tensor with values **/
MemoryBlob::Ptr minput2 = as<MemoryBlob>(input2);
if (!minput2) {
slog::err << "We expect input2 blob to be inherited from MemoryBlob, "
"but by fact we were not able "
"to cast input2 to MemoryBlob"
<< slog::endl;
return 1;
}
// The locked memory holder must stay alive for as long as its buffer
// is accessed
auto minput2Holder = minput2->wmap();
float* p = minput2Holder.as<PrecisionTrait<Precision::FP32>::value_type*>();
for (size_t image_id = 0; image_id < std::min(imagesData.size(), batchSize); ++image_id) {
p[image_id * imInfoDim + 0] =
static_cast<float>(inputsInfo[imageInputName]->getTensorDesc().getDims()[2]);
p[image_id * imInfoDim + 1] =
static_cast<float>(inputsInfo[imageInputName]->getTensorDesc().getDims()[3]);
for (size_t k = 2; k < imInfoDim; k++) {
p[image_id * imInfoDim + k] = 1.0f; // all scale factors are set to 1.0
}
}
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 7. Do inference
// ---------------------------------------------------------
slog::info << "Start inference" << slog::endl;
infer_request.Infer();
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 8. Process output
// -------------------------------------------------------
slog::info << "Processing output blobs" << slog::endl;
const Blob::Ptr output_blob = infer_request.GetBlob(outputName);
MemoryBlob::CPtr moutput = as<MemoryBlob>(output_blob);
if (!moutput) {
throw std::logic_error("We expect output to be inherited from MemoryBlob, "
"but by fact we were not able to cast output to MemoryBlob");
}
// The locked memory holder must stay alive for as long as its buffer
// is accessed
auto moutputHolder = moutput->rmap();
const float* detection = moutputHolder.as<const PrecisionTrait<Precision::FP32>::value_type*>();
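/** The output blob has shape [1, 1, maxProposalCount, 7]; each row holds
* [image_id, label, confidence, xmin, ymin, xmax, ymax] with coordinates normalized to [0, 1] **/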
std::vector<std::vector<int>> boxes(batchSize);
std::vector<std::vector<int>> classes(batchSize);
/* Each detection has image_id that denotes processed image */
for (int curProposal = 0; curProposal < maxProposalCount; curProposal++) {
auto image_id = static_cast<int>(detection[curProposal * objectSize + 0]);
if (image_id < 0) {
break;
}
float confidence = detection[curProposal * objectSize + 2];
auto label = static_cast<int>(detection[curProposal * objectSize + 1]);
auto xmin = static_cast<int>(detection[curProposal * objectSize + 3] * imageWidths[image_id]);
auto ymin = static_cast<int>(detection[curProposal * objectSize + 4] * imageHeights[image_id]);
auto xmax = static_cast<int>(detection[curProposal * objectSize + 5] * imageWidths[image_id]);
auto ymax = static_cast<int>(detection[curProposal * objectSize + 6] * imageHeights[image_id]);
std::cout << "[" << curProposal << "," << label << "] element, prob = " << confidence << " (" << xmin
<< "," << ymin << ")-(" << xmax << "," << ymax << ")"
<< " batch id : " << image_id;
if (confidence > 0.5) {
/** Drawing only objects with >50% probability **/
classes[image_id].push_back(label);
boxes[image_id].push_back(xmin);
boxes[image_id].push_back(ymin);
boxes[image_id].push_back(xmax - xmin);
boxes[image_id].push_back(ymax - ymin);
std::cout << " WILL BE PRINTED!";
}
std::cout << std::endl;
}
for (size_t batch_id = 0; batch_id < batchSize; ++batch_id) {
addRectangles(originalImagesData[batch_id].get(),
imageHeights[batch_id],
imageWidths[batch_id],
boxes[batch_id],
classes[batch_id],
BBOX_THICKNESS);
const std::string image_path = "out_" + std::to_string(batch_id) + ".bmp";
if (writeOutputBmp(image_path,
originalImagesData[batch_id].get(),
imageHeights[batch_id],
imageWidths[batch_id])) {
slog::info << "Image " + image_path + " created!" << slog::endl;
} else {
throw std::logic_error(std::string("Can't create a file: ") + image_path);
}
}
// -----------------------------------------------------------------------------------------------------
} catch (const std::exception& error) {
slog::err << error.what() << slog::endl;
return 1;
} catch (...) {
slog::err << "Unknown/internal exception happened." << slog::endl;
return 1;
}
slog::info << "Execution successful" << slog::endl;
slog::info << slog::endl
<< "This sample is an API example, for any performance measurements "
"please use the dedicated benchmark_app tool"
<< slog::endl;
return 0;
}

View File

@ -1,85 +0,0 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <gflags/gflags.h>
#include <iostream>
#include <string>
#include <vector>
/* thickness of a line (in pixels) to be used for bounding boxes */
#define BBOX_THICKNESS 2
/// @brief message for help argument
static const char help_message[] = "Print a usage message.";
/// @brief message for model argument
static const char model_message[] = "Required. Path to an .xml file with a trained model.";
/// @brief message for images argument
static const char image_message[] = "Required. Path to an image.";
/// @brief message for assigning cnn calculation to device
static const char target_device_message[] =
"Optional. Specify the target device to infer on (the list of available devices is shown "
"below). "
"Default value is CPU. Use \"-d HETERO:<comma_separated_devices_list>\" format to specify "
"HETERO plugin. "
"Sample will look for a suitable plugin for device specified.";
/// @brief message for plugin custom kernels desc
static const char custom_plugin_cfg_message[] = "Required for GPU, MYRIAD, HDDL custom kernels. "
"Absolute path to the .xml config file with the kernels descriptions.";
/// @brief message for user library argument
static const char custom_ex_library_message[] = "Required for CPU plugin custom layers. "
"Absolute path to a shared library with the kernels implementations.";
/// @brief message for config argument
static constexpr char config_message[] = "Path to the configuration file.";
/// \brief Define flag for showing help message <br>
DEFINE_bool(h, false, help_message);
/// \brief Define parameter for setting the image file <br>
/// It is a required parameter
DEFINE_string(i, "", image_message);
/// \brief Define parameter for setting the model file <br>
/// It is a required parameter
DEFINE_string(m, "", model_message);
/// \brief Define the target device to infer on <br>
/// It is an optional parameter
DEFINE_string(d, "CPU", target_device_message);
/// @brief Define parameter for plugin custom kernels path <br>
/// It is an optional parameter
DEFINE_string(c, "", custom_plugin_cfg_message);
/// @brief Absolute path to CPU extension library with user layers <br>
/// It is an optional parameter
DEFINE_string(l, "", custom_ex_library_message);
/// @brief Define path to plugin config
DEFINE_string(config, "", config_message);
/**
* \brief This function show a help message
*/
static void showUsage() {
std::cout << std::endl;
std::cout << "object_detection_sample_ssd [OPTION]" << std::endl;
std::cout << "Options:" << std::endl;
std::cout << std::endl;
std::cout << " -h " << help_message << std::endl;
std::cout << " -m \"<path>\" " << model_message << std::endl;
std::cout << " -i \"<path>\" " << image_message << std::endl;
std::cout << " -l \"<absolute_path>\" " << custom_ex_library_message << std::endl;
std::cout << " Or" << std::endl;
std::cout << " -c \"<absolute_path>\" " << custom_plugin_cfg_message << std::endl;
std::cout << " -d \"<device>\" " << target_device_message << std::endl;
}

View File

@ -1,130 +0,0 @@
"""
Copyright (C) 2018-2021 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import pytest
import sys
import logging as log
from common.samples_common_test_clas import SamplesCommonTestClass
from common.samples_common_test_clas import get_tests
from common.common_comparations import check_image_if_box
from common.samples_common_test_clas import Environment
log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=sys.stdout)
test_data_ssd_person_and_bicycles_detection_fp32 = \
get_tests(cmd_params={'i': [os.path.join('any', 'person_detection.png')],
'm': [os.path.join('FP32', 'pedestrian-detection-adas-0002.xml'),
os.path.join('FP32', 'person-vehicle-bike-detection-crossroad-1016.xml'),
os.path.join('FP32', 'pedestrian-and-vehicle-detector-adas-0001.xml')],
'batch': [1],
'd': ['CPU'],
'sample_type': ['C++', 'C']},
use_device=['d']
)
test_data_ssd_person_and_bicycles_detection_vehicle_fp32 = \
get_tests(cmd_params={'i': [os.path.join('any', 'car.bmp')],
'm': [os.path.join('FP32', 'vehicle-detection-adas-0002.xml')],
'batch': [1],
'd': ['CPU'],
'sample_type': ['C++', 'C']},
use_device=['d']
)
class TestObjectDetectionSSD(SamplesCommonTestClass):
@classmethod
def setup_class(cls):
cls.sample_name = 'object_detection_sample_ssd'
super().setup_class()
# The test above executes 3 different models:
# person-vehicle-bike-detection-crossroad-1016,
# pedestrian-detection-adas-0002,
# pedestrian-and-vehicle-detector-adas-0001,
# with the same parameters
#
# This test checks that
# 1) the sample draws something on the output image
@pytest.mark.parametrize("param", test_data_ssd_person_and_bicycles_detection_fp32)
def test_object_detection_sample_ssd_person_and_bicycles_detection_fp32(self, param):
_check_simple_output(self, param)
@pytest.mark.parametrize("param", test_data_ssd_person_and_bicycles_detection_vehicle_fp32)
def test_object_detection_sample_ssd_person_and_bicycles_detection_vehicle_fp32(self, param):
_check_simple_output(self, param)
def _check_simple_output(self, param, empty_outputs=False):
"""
Object_detection_sample_ssd has functional and accuracy testing.
For accuracy, several metrics are compared (against a reference file collected on some IE version):
- check that the sample draws something on the output image
"""
# Run the _test function, which returns stdout or 0.
stdout = self._test(param)
if not stdout:
return
stdout = stdout.split('\n')
# This test checks if boxes exist on the output image (i.e. that the sample drew something)
img_path1 = ''
img_path2 = param['i']
for line in stdout:
if "created!" in line:
img_path1 = line.split(' ')[-2]
acc_pass = check_image_if_box(os.path.join(os.getcwd(), img_path1),
os.path.join(Environment.env['test_data'], img_path2))
if not empty_outputs:
assert acc_pass != 0, "Sample didn't draw boxes"
else:
assert acc_pass == 0, "Sample did draw boxes"
log.info('Accuracy passed')
def _check_dog_class_output(self, param):
"""
Object_detection_sample_ssd has functional and accuracy testing.
For accuracy, several metrics are compared (against a reference file collected on some IE version):
- check that the sample draws something on the output image
- check the label of the detected object with 100% equality
"""
# Run the _test function, which returns stdout or 0.
stdout = self._test(param)
if not stdout:
return 0
stdout = stdout.split('\n')
# This test checks if boxes exist on the output image (i.e. that the sample drew something)
img_path1 = ''
img_path2 = param['i']
for line in stdout:
if "created!" in line:
img_path1 = line.split(' ')[-2]
acc_pass = check_image_if_box(os.path.join(os.getcwd(), img_path1),
os.path.join(Environment.env['test_data'], img_path2))
assert acc_pass != 0, "Sample didn't draw boxes"
# Check top1 class
dog_class = '58'
is_ok = 0
for line in stdout:
if 'WILL BE PRINTED' in line:
is_ok += 1
top1 = line.split(' ')[0]
assert dog_class in top1, "Wrong top1 class, current {}, reference {}".format(top1, dog_class)
log.info('Accuracy passed')
break
assert is_ok != 0, "Accuracy check didn't pass; the output format has probably changed"