# Hello Reshape SSD C++ Sample
This sample demonstrates how to execute inference of object detection networks like SSD-VGG using the Synchronous Inference Request API, the input reshape feature, and an implementation of a custom extension library for the CPU device (CustomReLU kernel).
The Hello Reshape SSD C++ sample application demonstrates how to use the following Inference Engine C++ APIs in applications:
| Feature | API | Description |
|---|---|---|
| Network Operations | `InferenceEngine::CNNNetwork::getBatchSize`, `InferenceEngine::CNNNetwork::getFunction` | Manage the network and operate with its batch size |
| Input Reshape | `InferenceEngine::CNNNetwork::getInputShapes`, `InferenceEngine::CNNNetwork::reshape` | Resize the network to match image sizes and the given batch |
| nGraph Functions | `ngraph::Function::get_ops`, `ngraph::Node::get_friendly_name`, `ngraph::Node::get_type_info` | Iterate over the network's nGraph function |
| Custom Extension Kernels | `InferenceEngine::Core::AddExtension` | Load an extension library |
| CustomReLU kernel | `InferenceEngine::ILayerExecImpl` | Implement a custom extension library |
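As a rough illustration of how the reshape and nGraph APIs from the table fit together, here is a minimal sketch (not the sample's exact code); the model path and the target shape are placeholder values:

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core core;
    // Placeholder model path; any IR or ONNX model with a single NCHW input works.
    auto network = core.ReadNetwork("person-detection-retail-0013.xml");

    // Input Reshape: resize the network input to the image size and batch.
    auto input_shapes = network.getInputShapes();    // input name -> SizeVector
    std::string input_name = input_shapes.begin()->first;
    input_shapes[input_name] = {1, 3, 960, 1699};    // {batch, channels, height, width}
    network.reshape(input_shapes);

    // nGraph Functions: iterate over the operations of the underlying function.
    if (auto function = network.getFunction()) {
        for (const auto& op : function->get_ops()) {
            std::cout << op->get_friendly_name() << " : "
                      << op->get_type_info().name << std::endl;
        }
    }
    return 0;
}
```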
The basic Inference Engine API is covered by the Hello Classification C++ sample.
| Options | Values |
|---|---|
| Validated Models | [person-detection-retail-0013](@ref omz_models_model_person_detection_retail_0013) |
| Model Format | Inference Engine Intermediate Representation (*.xml + *.bin), ONNX (*.onnx) |
| Validated images | The sample uses OpenCV* to read the input image (*.bmp, *.png) |
| Supported devices | All |
| Other language realization | C, Python |
## How It Works
Upon start-up, the sample application reads command-line parameters and loads the specified network and image to the Inference Engine plugin. Then the sample creates a synchronous inference request object. When inference is done, the application creates an output image and writes data to the standard output stream.
You can see the explicit description of each sample step in the Integration Steps section of the "Integrate the Inference Engine with Your Application" guide.
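A condensed sketch of those steps, continuing from the snippet above (the extension library name and `image_blob` are hypothetical placeholders; `image_blob` stands for the image wrapped as an `InferenceEngine::Blob`):

```cpp
// Custom Extension Kernels: load an extension library for the CPU plugin.
// "libcustom_extension.so" is a placeholder library name.
auto extension = std::make_shared<InferenceEngine::Extension>("libcustom_extension.so");
core.AddExtension(extension, "CPU");

// Compile the reshaped network for the target device.
auto exec_network = core.LoadNetwork(network, "CPU");

// Create a synchronous inference request, set the input, and run inference.
auto infer_request = exec_network.CreateInferRequest();
infer_request.SetBlob(input_name, image_blob);   // feed the input image
infer_request.Infer();                           // blocks until inference completes

std::string output_name = network.getOutputsInfo().begin()->first;
auto output = infer_request.GetBlob(output_name);
```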
## Building
To build the sample, use the instructions available in the Build the Sample Applications section of the Inference Engine Samples guide.
## Running
To run the sample, you need to specify a model and an image:
- You can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of Converting a Model Using General Conversion Parameters.
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
>
> - The sample accepts models in ONNX format (*.onnx) that do not require preprocessing.
You can use the following command to do inference of an image on CPU using a trained SSD network:

```sh
<path_to_sample>/hello_reshape_ssd <path_to_model> <path_to_image> <device> <batch>
```
### Example
- Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```sh
python <path_to_omz_tools>/downloader.py --name person-detection-retail-0013
```
The `person-detection-retail-0013` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script:
```sh
python <path_to_omz_tools>/converter.py --name <model_name>
```
- Perform inference of `person_detection.png` using the `person-detection-retail-0013` model on a `GPU`, for example:
```sh
<path_to_sample>/hello_reshape_ssd <path_to_model>/person-detection-retail-0013.xml <path_to_image>/person_detection.png GPU 1
```
## Sample Output
The application renders an image with detected objects enclosed in rectangles. It outputs the list of classes of the detected objects along with the respective confidence values and the coordinates of the rectangles to the standard output stream.
```
Resizing network to the image size = [960x1699] with batch = 1
Resulting input shape = [1,3,960,1699]
Resulting output shape = [1,1,200,7]
[0,1] element, prob = 0.722292, bbox = (852.382,187.756)-(983.352,520.733), batch id = 0
The resulting image was saved in the file: hello_reshape_ssd_output.jpg
```
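For reference, a hedged sketch of how the `[1,1,200,7]` detection output could be decoded into lines like the one above, assuming `output`, `image_width`, and `image_height` come from the earlier snippets; the seven values per detection follow the standard SSD DetectionOutput layout, and the confidence threshold is an assumption:

```cpp
// Read the output blob as floats.
auto moutput = InferenceEngine::as<InferenceEngine::MemoryBlob>(output);
auto holder = moutput->rmap();
const float* detections = holder.as<const float*>();

const size_t max_detections = 200;  // from the output shape [1,1,200,7]
for (size_t i = 0; i < max_detections; ++i) {
    // Each detection: [image_id, label, confidence, x_min, y_min, x_max, y_max]
    const float* d = detections + i * 7;
    if (d[0] < 0) break;              // image_id of -1 marks the end of valid detections
    float confidence = d[2];
    if (confidence < 0.5f) continue;  // assumed confidence threshold
    // Coordinates are normalized to [0,1]; scale them to pixel values.
    std::cout << "[" << i << "," << d[1] << "] element, prob = " << confidence
              << ", bbox = (" << d[3] * image_width << "," << d[4] * image_height
              << ")-(" << d[5] * image_width << "," << d[6] * image_height
              << "), batch id = " << d[0] << std::endl;
}
```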
This sample is an API example; for any performance measurements, please use the dedicated `benchmark_app` tool.
## See Also
- Integrate the Inference Engine with Your Application
- Using Inference Engine Samples
- [Model Downloader](@ref omz_tools_downloader)
- Model Optimizer