doc: update README for C samples, add comments (#4780)
* doc: update README for C samples, add comments
* samples: revert extension library settings for CPU only
* add validated image formats to samples README
* add output to c samples README
* add device check for xml config option
parent 2bed9c9277
commit 26adcd1a61
@@ -15,21 +15,25 @@ Inference Engine sample applications include the following:

- **Hello Classification Sample** – Inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API. Input of any size and layout can be set to an infer request and will be pre-processed automatically during inference (the sample supports only images as inputs and supports Unicode paths).
  - [Hello Classification C++ Sample](../../inference-engine/samples/hello_classification/README.md)
  - [Hello Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_classification/README.md)
  - [Hello Classification Python Sample](../../inference-engine/ie_bridges/python/sample/hello_classification/README.md)
- **Hello NV12 Input Classification Sample** – Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-processes it automatically during inference. The sample supports only images as inputs.
  - [Hello NV12 Input Classification C++ Sample](../../inference-engine/samples/hello_nv12_input_classification/README.md)
  - [Hello NV12 Input Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md)
- **Hello Query Device Sample** – Query of available Inference Engine devices and their metrics, configuration values.
  - [Hello Query Device C++ Sample](../../inference-engine/samples/hello_query_device/README.md)
  - [Hello Query Device Python* Sample](../../inference-engine/ie_bridges/python/sample/hello_query_device/README.md)
- **Hello Reshape SSD Sample** – Inference of SSD networks resized by ShapeInfer API according to an input size.
  - [Hello Reshape SSD C++ Sample](../../inference-engine/samples/hello_reshape_ssd/README.md)
  - [Hello Reshape SSD Python Sample](../../inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md)
- **Image Classification Sample Async** – Inference of image classification networks like AlexNet and GoogLeNet using Asynchronous Inference Request API (the sample supports only images as inputs).
  - [Image Classification C++ Sample Async](../../inference-engine/samples/classification_sample_async/README.md)
  - [Image Classification Python* Sample Async](../../inference-engine/ie_bridges/python/sample/classification_sample_async/README.md)
- **Neural Style Transfer Sample** – Style Transfer sample (the sample supports only images as inputs).
  - [Neural Style Transfer C++ Sample](../../inference-engine/samples/style_transfer_sample/README.md)
  - [Neural Style Transfer Python* Sample](../../inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md)
- **nGraph Function Creation Sample** – Construction of the LeNet network using the nGraph function creation API.
  - [nGraph Function Creation C++ Sample](../../inference-engine/samples/ngraph_function_creation_sample/README.md)
  - [nGraph Function Creation Python Sample](../../inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md)
- **Object Detection for SSD Sample** – Inference of object detection networks based on SSD. This sample is a simplified version that supports only images as inputs.
  - [Object Detection for SSD C++ Sample](../../inference-engine/samples/object_detection_sample_ssd/README.md)
  - [Object Detection for SSD C Sample](../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md)
@@ -39,7 +43,7 @@ Inference Engine sample applications include the following:

## Media Files Available for Samples

To run the sample applications, you can use images and videos from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

## Samples that Support Pre-Trained Models
@@ -1,31 +1,104 @@
# Hello Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_classification_README}

Inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and the input auto-resize feature.

The Hello Classification C sample application demonstrates how to use the following Inference Engine C API in applications:

| Feature | API | Description |
|:--- |:--- |:---
| Basic Infer Flow | [ie_core_create], [ie_core_read_network], [ie_core_load_network], [ie_exec_network_create_infer_request], [ie_infer_request_set_blob], [ie_infer_request_get_blob] | Common API to do inference: configure input and output blobs, load the model, create an infer request
| Synchronous Infer | [ie_infer_request_infer] | Do synchronous inference
| Network Operations | [ie_network_get_input_name], [ie_network_get_inputs_number], [ie_network_get_outputs_number], [ie_network_set_input_precision], [ie_network_get_output_name], [ie_network_get_output_precision] | Manage the network
| Blob Operations | [ie_blob_make_memory_from_preallocated], [ie_blob_get_dims], [ie_blob_get_cbuffer] | Work with memory containers that store network inputs and outputs and the weights and biases of the layers
| Input auto-resize | [ie_network_set_input_resize_algorithm], [ie_network_set_input_layout] | Set an image of the original size as input for a network with a different input size. Resize and layout conversions are performed automatically by the corresponding plugin just before inference

| Options | Values |
|:--- |:---
| Validated Models | AlexNet and GoogLeNet (image classification networks)
| Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
| Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/hello_classification/README.md), [Python](../../../python/sample/hello_classification/README.md) |
## How It Works

Upon start-up, the sample application reads command-line parameters and loads the specified network and an image to the Inference Engine plugin.
Then the sample creates a synchronous inference request object. When inference is done, the application outputs data to the standard output stream.

You can see an explicit description of each sample step in the [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide. A condensed sketch of those steps follows.
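The sketch below compresses the flow from the feature table into one function. It is illustrative only, not the full sample: error handling is collapsed, the OpenCV-based image decoding of the real sample is reduced to an already-decoded `image_data` buffer, and the function and parameter names are assumptions made for this example.

```c
#include <c_api/ie_c_api.h>

/* Minimal sketch of the synchronous infer flow using the C API calls listed
 * in the feature table above. Returns 0 on success, -1 on the first failure. */
int classify_image(const char *model_xml, const char *device,
                   unsigned char *image_data, size_t width, size_t height) {
    ie_core_t *core = NULL;
    ie_network_t *network = NULL;
    ie_executable_network_t *exe_network = NULL;
    ie_infer_request_t *request = NULL;
    char *input_name = NULL;
    ie_blob_t *img_blob = NULL;
    ie_config_t config = {NULL, NULL, NULL};

    // Steps 1-2: initialize the core and read the model (IR or ONNX)
    if (ie_core_create("", &core) != OK) return -1;
    if (ie_core_read_network(core, model_xml, NULL, &network) != OK) return -1;

    // Step 3: mark the input as resizable so a blob of the original image size
    // can be set; the plugin resizes and converts the layout during inference
    if (ie_network_get_input_name(network, 0, &input_name) != OK) return -1;
    ie_network_set_input_resize_algorithm(network, input_name, RESIZE_BILINEAR);
    ie_network_set_input_layout(network, input_name, NHWC);
    ie_network_set_input_precision(network, input_name, U8);

    // Steps 4-5: load the network to the device and create an infer request
    if (ie_core_load_network(core, network, device, &config, &exe_network) != OK) return -1;
    if (ie_exec_network_create_infer_request(exe_network, &request) != OK) return -1;

    // Step 6: wrap the already-decoded BGR image into a blob without copying
    dimensions_t dims = {4, {1, 3, height, width}};
    tensor_desc_t desc = {NHWC, dims, U8};
    if (ie_blob_make_memory_from_preallocated(&desc, image_data, width * height * 3, &img_blob) != OK)
        return -1;
    if (ie_infer_request_set_blob(request, input_name, img_blob) != OK) return -1;

    // Step 7: infer synchronously; Step 8 (reading the output blob) is
    // sketched after the Sample Output section below
    if (ie_infer_request_infer(request) != OK) return -1;

    // free everything in reverse order of creation
    ie_blob_free(&img_blob);
    ie_network_name_free(&input_name);
    ie_infer_request_free(&request);
    ie_exec_network_free(&exe_network);
    ie_network_free(&network);
    ie_core_free(&core);
    return 0;
}
```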
## Building

To build the sample, please use the instructions available at the [Build the Sample Applications](../../../../../docs/IE_DG/Samples_Overview.md) section in the Inference Engine Samples guide.

## Running

To run the sample, you need to specify a model and an image:

- You can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (\*.onnx) that do not require preprocessing.

You can do inference of an image using a trained AlexNet network on a GPU with the following command:

```sh
./hello_classification_c <path_to_model>/alexnet_fp32.xml <path_to_image>/cat.png GPU
```
## Sample Output

The application outputs the top-10 inference results.

```sh
Top 10 results:

Image /opt/intel/openvino/deployment_tools/demo/car.png

classid probability
------- -----------
479       0.7562205
511       0.0760381
436       0.0724111
817       0.0462140
656       0.0301231
661       0.0056171
581       0.0031622
468       0.0029917
717       0.0023081
627       0.0016193

This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
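The top-10 list above is produced by reading the output blob back from the infer request (Step 8). A minimal sketch, assuming a classification output shaped `{batch, classes}`; `request` and `output_name` stand in for the sample's own variables, and this mirrors `output_blob_to_classify_res()` in the sample's `main.c`:

```c
// Read the scores out of the output blob after inference completes.
ie_blob_t *output_blob = NULL;
if (ie_infer_request_get_blob(request, output_name, &output_blob) != OK) { /* handle error */ }

dimensions_t output_dim;
ie_blob_buffer_t out_buffer;
if (ie_blob_get_dims(output_blob, &output_dim) != OK) { /* handle error */ }
if (ie_blob_get_cbuffer(output_blob, &out_buffer) != OK) { /* handle error */ }

const float *scores = (const float *)(out_buffer.cbuffer);
size_t class_num = output_dim.dims[1];  // dims = {batch, classes} for this output
// Pair each scores[i] with class id i, sort by probability, print the first 10.
```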
## See Also

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[ie_core_create]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#gaab73c7ee3704c742eaac457636259541
[ie_core_read_network]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#gaa40803295255b3926a3d1b8924f26c29
[ie_network_get_input_name]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga36b0c28dfab6db2bfcc2941fd57fbf6d
[ie_network_set_input_precision]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#gadd99b7cc98b3c33daa2095b8a29f66d7
[ie_network_get_output_name]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga1feabc49576db24d9821a150b2b50a6c
[ie_network_get_output_precision]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#gaeaa7f1fb8f56956fc492cd9207235984
[ie_core_load_network]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#ga318d4b0214b8a3fd33f9e44170befcc5
[ie_exec_network_create_infer_request]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__ExecutableNetwork.html#gae72247391c1429a18c367594a4b7db9f
[ie_blob_make_memory_from_preallocated]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Blob.html#ga7a874d46375e10fa1a7e8e3d7e1c9c9c
[ie_infer_request_set_blob]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#ga891c2d475501bba761148a0c3faca196
[ie_infer_request_infer]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#gac6c6fcb67ccb4d0ec9ad1c63a5bee7b6
[ie_infer_request_get_blob]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#ga6cd04044ea95987260037bfe17ce1a2d
[ie_blob_get_dims]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Blob.html#ga25d93efd7ec1052a8896ac61cc14c30a
[ie_blob_get_cbuffer]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Blob.html#gaf6b4a110b4c5723dcbde135328b3620a
[ie_network_set_input_resize_algorithm]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga46ab3b3a06359f2b77f58bdd6e8a5492
[ie_network_set_input_layout]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga27ea9f92290e0b2cdedbe8a85feb4c01
[ie_network_get_inputs_number]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga6a3349bca66c4ba8b41a434061fccf52
[ie_network_get_outputs_number]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga869b8c309797f1e09f73ddffd1b57509
@@ -2,17 +2,28 @@
// SPDX-License-Identifier: Apache-2.0
//

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <c_api/ie_c_api.h>
#include <opencv_c_wraper.h>

/**
 * @brief Struct to store classification results
 */
struct classify_res {
    size_t class_id;
    float probability;
};

/**
 * @brief Sort results of image classification by probability
 * @param res struct with classification results to sort
 * @param n size of the struct
 * @return none
 */
void classify_res_sort(struct classify_res *res, size_t n) {
    size_t i, j;
    for (i = 0; i < n; ++i) {
@@ -30,6 +41,12 @@ void classify_res_sort(struct classify_res *res, size_t n) {
        }
    }
}

/**
 * @brief Convert output blob to classify struct for processing results
 * @param blob blob with output data
 * @param n size of the blob
 * @return struct classify_res
 */
struct classify_res *output_blob_to_classify_res(ie_blob_t *blob, size_t *n) {
    dimensions_t output_dim;
    IEStatusCode status = ie_blob_get_dims(blob, &output_dim);
@@ -60,6 +77,13 @@ struct classify_res *output_blob_to_classify_res(ie_blob_t *blob, size_t *n) {
    return cls;
}

/**
 * @brief Print results of classification
 * @param cls struct with classification results
 * @param n size of the struct with classification results
 * @param img_path image path string
 * @return none
 */
void print_classify_res(struct classify_res *cls, size_t n, const char *img_path) {
    printf("\nImage %s\n", img_path);
    printf("\nclassid probability\n");
@@ -68,6 +92,7 @@ void print_classify_res(struct classify_res *cls, size_t n, const char *img_path
    for (i = 0; i < n; ++i) {
        printf("%zu       %f\n", cls[i].class_id, cls[i].probability);
    }
    printf("\nThis sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n");
}

int main(int argc, char **argv) {
@@ -86,22 +111,36 @@ int main(int argc, char **argv) {
    ie_infer_request_t *infer_request = NULL;
    char *input_name = NULL, *output_name = NULL;
    ie_blob_t *imgBlob = NULL, *output_blob = NULL;
    size_t network_input_size;
    size_t network_output_size;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 1. Initialize inference engine core -------------------------------------
    IEStatusCode status = ie_core_create("", &core);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // Step 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
    status = ie_core_read_network(core, input_model, NULL, &network);
    if (status != OK)
        goto err;
    // check that the network topology has exactly one input and one output
    status = ie_network_get_inputs_number(network, &network_input_size);
    if (status != OK || network_input_size != 1) {
        printf("Sample supports topologies with 1 input only\n");
        goto err;
    }

    status = ie_network_get_outputs_number(network, &network_output_size);
    if (status != OK || network_output_size != 1) {
        printf("Sample supports topologies with 1 output only\n");
        goto err;
    }
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 3. Configure input & output ---------------------------------------------
    // --------------------------- Prepare input blobs -----------------------------------------------------
    status = ie_network_get_input_name(network, 0, &input_name);
@@ -124,20 +163,20 @@ int main(int argc, char **argv) {

    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 4. Loading model to the device ------------------------------------------
    ie_config_t config = {NULL, NULL, NULL};
    status = ie_core_load_network(core, network, device_name, &config, &exe_network);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 5. Create infer request -------------------------------------------------
    status = ie_exec_network_create_infer_request(exe_network, &infer_request);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 6. Prepare input --------------------------------------------------------
    /* Read input image to a blob and set it to an infer request without resize and layout conversions */
    c_mat_t img;
    image_read(input_image_path, &img);
@@ -158,14 +197,14 @@ int main(int argc, char **argv) {
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 7. Do inference --------------------------------------------------------
    /* Running the request synchronously */
    status = ie_infer_request_infer(infer_request);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 8. Process output ------------------------------------------------------
    status = ie_infer_request_get_blob(infer_request, output_name, &output_blob);
    if (status != OK) {
        image_free(&img);
@@ -3,5 +3,4 @@
#

ie_add_sample(NAME hello_nv12_input_classification_c
              SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/main.c")
@@ -1,51 +1,104 @@
# Hello NV12 Input Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README}

Inference of image classification networks like AlexNet with images in the NV12 color format using Synchronous Inference Request API.

The Hello NV12 Input Classification C Sample demonstrates how to use the NV12 automatic input pre-processing API of the Inference Engine in your applications:

| Feature | API | Description |
|:--- |:--- |:---
| Blob Operations | [ie_blob_make_memory_nv12] | Create an NV12 blob
| Input in NV12 color format | [ie_network_set_color_format] | Change the color format of the input data

Basic Inference Engine API is covered by the [Hello Classification C sample](../hello_classification/README.md).

| Options | Values |
|:--- |:---
| Validated Models | AlexNet (image classification network)
| Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
| Validated images | An uncompressed image in the NV12 color format - \*.yuv
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/hello_nv12_input_classification/README.md) |
## How It Works

Upon start-up, the sample application reads command-line parameters and loads the specified network and an
image in the NV12 color format to an Inference Engine plugin. Then the sample creates a synchronous inference request object. When inference is done, the
application outputs data to the standard output stream.

You can see an explicit description of each sample step in the [Integration Steps](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Integrate_with_customer_application_new_API.html) section of the "Integrate the Inference Engine with Your Application" guide. A sketch of the NV12-specific input setup follows.
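The NV12-specific part is small: tell the network that the input arrives as NV12, then hand the infer request a blob built from the Y and UV planes of the raw file. A minimal sketch, assuming `img_data` holds the raw frame and `input_width`/`input_height` are the logical size; the names are borrowed from the sample's `main.c`, and error handling is omitted:

```c
// Declare the NV12 input; the plugin converts the color format (and resizes,
// with RESIZE_BILINEAR set) automatically just before inference.
ie_network_set_input_precision(network, input_name, U8);
ie_network_set_input_resize_algorithm(network, input_name, RESIZE_BILINEAR);
ie_network_set_color_format(network, input_name, NV12);

// NV12 = full-size Y plane followed by a half-size interleaved UV plane.
dimensions_t y_dims = {4, {1, 1, input_height, input_width}};
dimensions_t uv_dims = {4, {1, 2, input_height / 2, input_width / 2}};
tensor_desc_t y_desc = {NHWC, y_dims, U8};
tensor_desc_t uv_desc = {NHWC, uv_dims, U8};
size_t y_size = input_width * input_height;
size_t uv_size = input_width * (input_height / 2);

ie_blob_t *y_blob = NULL, *uv_blob = NULL, *nv12_blob = NULL;
ie_blob_make_memory_from_preallocated(&y_desc, img_data, y_size, &y_blob);
ie_blob_make_memory_from_preallocated(&uv_desc, img_data + y_size, uv_size, &uv_blob);
ie_blob_make_memory_nv12(y_blob, uv_blob, &nv12_blob);  // compound NV12 blob, no copy
ie_infer_request_set_blob(infer_request, input_name, nv12_blob);
```

Note the split across phases: the three `ie_network_*` calls belong before `ie_core_load_network`, while the blob calls require an already-created infer request.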
## Building

To build the sample, please use the instructions available at the [Build the Sample Applications](../../../../../docs/IE_DG/Samples_Overview.md) section in the Inference Engine Samples guide.

## Running

To run the sample, you need to specify a model and an image:

- You can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

The sample accepts an uncompressed image in the NV12 color format. To run the sample, you need to
convert your BGR/RGB image to NV12. To do this, you can use one of the widely available tools such
as FFmpeg\* or GStreamer\*. The following command shows how to convert an ordinary image into an
uncompressed NV12 image using FFmpeg:

```sh
ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv
```
> **NOTES**:
>
> - Because the sample reads raw image files, you should provide a correct image size along with the
>   image path. The sample expects the logical size of the image, not the buffer size. For example,
>   for a 640x480 BGR/RGB image the corresponding NV12 logical image size is also 640x480, whereas the
>   buffer size is 640x720 (see the size arithmetic sketched below).
> - The sample uses the input autoresize API of the Inference Engine to simplify user-side
>   pre-processing.
> - By default, this sample expects that the network input has BGR channels order. If you trained your
>   model to work with RGB order, you need to reconvert your model using the Model Optimizer tool
>   with the `--reverse_input_channels` argument specified. For more information about the argument,
>   refer to the **When to Reverse Input Channels** section of
>   [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
> - The sample accepts models in ONNX format (\*.onnx) that do not require preprocessing.
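As a quick check of that logical-size/buffer-size relationship: NV12 stores a full-resolution Y plane plus a quarter-resolution interleaved UV plane, so the buffer holds `width * height * 3 / 2` bytes. A tiny illustrative helper (not part of the sample):

```c
// NV12 buffer size: Y plane (width * height bytes) + interleaved UV plane
// (two quarter-resolution channels, width * height / 2 bytes).
size_t nv12_buffer_size(size_t width, size_t height) {
    return width * height + (width / 2) * (height / 2) * 2;  // == width * height * 3 / 2
}
// For a 640x480 logical size: 640 * 480 * 3 / 2 = 460800 bytes,
// i.e. the same footprint as a 640x720 single-plane 8-bit buffer.
```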
You can perform inference on an NV12 image using a trained AlexNet network on a CPU with the following command:

```sh
./hello_nv12_input_classification_c <path_to_model>/alexnet_fp32.xml <path_to_image>/cat.yuv 300x300 CPU
```
## Sample Output

The application outputs the top-10 inference results.

```sh
Top 10 results:

Image ./cat.yuv

classid probability
------- -----------
435       0.091733
876       0.081725
999       0.069305
587       0.043726
666       0.038957
419       0.032892
285       0.030309
700       0.029941
696       0.021628
855       0.020339

This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[ie_network_set_color_format]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga85f3251f1f7b08507c297e73baa58969
[ie_blob_make_memory_nv12]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Blob.html#ga0a2d97b0d40a53c01ead771f82ae7f4a
@@ -2,16 +2,27 @@
// SPDX-License-Identifier: Apache-2.0
//

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <c_api/ie_c_api.h>

/**
 * @brief Struct to store classification results
 */
struct classify_res {
    size_t class_id;
    float probability;
};

/**
 * @brief Sort results of image classification by probability
 * @param res struct with classification results to sort
 * @param n size of the struct
 * @return none
 */
void classify_res_sort(struct classify_res *res, size_t n) {
    size_t i, j;
    for (i = 0; i < n; ++i) {
@@ -29,6 +40,12 @@ void classify_res_sort(struct classify_res *res, size_t n) {
        }
    }
}

/**
 * @brief Convert output blob to classify struct for processing results
 * @param blob blob with output data
 * @param n size of the blob
 * @return struct classify_res
 */
struct classify_res *output_blob_to_classify_res(ie_blob_t *blob, size_t *n) {
    dimensions_t output_dim;
    IEStatusCode status = ie_blob_get_dims(blob, &output_dim);
@@ -59,6 +76,13 @@ struct classify_res *output_blob_to_classify_res(ie_blob_t *blob, size_t *n) {
    return cls;
}

/**
 * @brief Print results of classification
 * @param cls struct with classification results
 * @param n size of the struct with classification results
 * @param img_path image path string
 * @return none
 */
void print_classify_res(struct classify_res *cls, size_t n, const char *img_path) {
    printf("\nImage %s\n", img_path);
    printf("\nclassid probability\n");
@@ -67,8 +91,16 @@ void print_classify_res(struct classify_res *cls, size_t n, const char *img_path
    for (i = 0; i < n; ++i) {
        printf("%zu       %f\n", cls[i].class_id, cls[i].probability);
    }
    printf("\nThis sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n");
}

/**
 * @brief Read image data from a file
 * @param img_path image path string
 * @param img_data pointer to store the image data
 * @param size size of the image in bytes
 * @return total number of elements successfully read; on error it differs from the size param
 */
size_t read_image_from_file(const char *img_path, unsigned char *img_data, size_t size) {
    FILE *fp = fopen(img_path, "rb+");
    size_t read_size = 0;
@@ -84,7 +116,14 @@ size_t read_image_from_file(const char *img_path, unsigned char *img_data, size_
    return read_size;
}

/**
 * @brief Check that the image has a supported width and height
 * @param size_str string with the image size in WIDTHxHEIGHT format
 * @param width pointer to the parsed image width
 * @param height pointer to the parsed image height
 * @return bool status: true (success) or false (fail)
 */
bool is_supported_image_size(const char *size_str, size_t *width, size_t *height) {
    const char *_size = size_str;
    size_t _width = 0, _height = 0;
    while (_size && *_size != 'x' && *_size != '\0') {
@@ -112,10 +151,10 @@ size_t parse_image_size(const char *size_str, size_t *width, size_t *height) {
        if (_width % 2 == 0 && _height % 2 == 0) {
            *width = _width;
            *height = _height;
            return true;
        } else {
            printf("Unsupported image size, width and height must be even numbers\n");
            return false;
        }
    } else {
        goto err;
@@ -123,7 +162,7 @@ size_t parse_image_size(const char *size_str, size_t *width, size_t *height) {
err:
    printf("Incorrect format of image size parameter, expected WIDTHxHEIGHT, "
           "actual: %s\n", size_str);
    return false;
}

int main(int argc, char **argv) {
@@ -134,7 +173,7 @@ int main(int argc, char **argv) {
    }

    size_t input_width = 0, input_height = 0, img_size = 0;
    if (!is_supported_image_size(argv[3], &input_width, &input_height))
        return EXIT_FAILURE;

    const char *input_model = argv[1];
@@ -149,28 +188,30 @@ int main(int argc, char **argv) {
    ie_blob_t *y_blob = NULL, *uv_blob = NULL, *nv12_blob = NULL, *output_blob = NULL;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 1. Initialize inference engine core -------------------------------------
    IEStatusCode status = ie_core_create("", &core);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // Step 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
    status = ie_core_read_network(core, input_model, NULL, &network);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 3. Configure input & output ---------------------------------------------
    // --------------------------- Prepare input blobs -----------------------------------------------------
    status = ie_network_get_input_name(network, 0, &input_name);
    if (status != OK)
        goto err;

    status |= ie_network_set_input_precision(network, input_name, U8);
    // set input resize algorithm to enable input autoresize
    status |= ie_network_set_input_resize_algorithm(network, input_name, RESIZE_BILINEAR);
    // set input color format to NV12 to enable automatic input color format pre-processing
    status |= ie_network_set_color_format(network, input_name, NV12);

@@ -185,20 +226,20 @@ int main(int argc, char **argv) {

    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 4. Loading model to the device ------------------------------------------
    ie_config_t config = {NULL, NULL, NULL};
    status = ie_core_load_network(core, network, device_name, &config, &exe_network);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 5. Create infer request -------------------------------------------------
    status = ie_exec_network_create_infer_request(exe_network, &infer_request);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 6. Prepare input --------------------------------------------------------
    // read image with size converted to NV12 data size: height(NV12) = 3 / 2 * logical height
    img_size = input_width * (input_height * 3 / 2);
    img_data = (unsigned char *)calloc(img_size, sizeof(unsigned char));
@@ -230,14 +271,14 @@ int main(int argc, char **argv) {
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 7. Do inference --------------------------------------------------------
    /* Running the request synchronously */
    status = ie_infer_request_infer(infer_request);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- Step 8. Process output ------------------------------------------------------
    status = ie_infer_request_get_blob(infer_request, output_name, &output_blob);
    if (status != OK)
        goto err;
@@ -1,21 +1,50 @@
# Object Detection C Sample SSD {#openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README}

Inference of object detection networks like SSD-VGG using Asynchronous Inference Request API and the [input reshape feature](../../../../../docs/IE_DG/ShapeInference.md).

The Object Detection C sample SSD application demonstrates how to use the following Inference Engine C API in applications:

| Feature | API | Description |
|:--- |:--- |:---
| Asynchronous Infer | [ie_infer_request_infer_async], [ie_infer_request_wait] | Do asynchronous inference
| Inference Engine Version | [ie_c_api_version] | Get Inference Engine API version
| Available Devices | [ie_core_get_versions] | Get version information of the devices for inference
| Custom Extension Kernels | [ie_core_add_extension], [ie_core_set_config] | Load extension library and config to the device
| Network Operations | [ie_network_get_inputs_number], [ie_network_get_input_dims], [ie_network_get_input_shapes], [ie_network_get_outputs_number], [ie_network_get_output_dims] | Manage the network
| Blob Operations | [ie_blob_get_buffer] | Work with memory containers that store network inputs and outputs and the weights and biases of the layers
| Input Reshape | [ie_network_reshape] | Set the batch size equal to the number of input images

Basic Inference Engine API is covered by the [Hello Classification C sample](../hello_classification/README.md).

> **NOTE**: This sample uses `ie_network_reshape()` to set the batch size. While supported by SSD networks, reshape may not work with arbitrary topologies. See the [Shape Inference Guide](../../../../../docs/IE_DG/ShapeInference.md) for more info.

| Options | Values |
|:--- |:---
| Validated Models | Person detection SSD (object detection network)
| Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
| Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png, \*.jpg)
| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../../samples/object_detection_sample_ssd/README.md), [Python](../../../python/sample/object_detection_sample_ssd/README.md) |
## How It Works

Upon start-up, the sample application reads command-line parameters and loads the specified network and image(s) to the Inference
Engine plugin. Then the sample creates an asynchronous inference request object. When inference is done, the application creates the output image(s) and outputs data to the standard output stream.

You can see an explicit description of each sample step in the [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide. A sketch of the reshape-to-batch and asynchronous infer calls follows.
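As a rough illustration of the two sample-specific pieces named above, the sketch below sets the batch size with `ie_network_reshape` and then runs the request asynchronously. It is a hedged sketch, not the sample itself: `network`, `infer_request`, and `image_num` stand in for the sample's own variables, the input is assumed to be a single NCHW tensor whose first dimension is the batch, and error handling is collapsed into comments.

```c
// Set the batch dimension of the (single) input to the number of images.
input_shapes_t shapes;
if (ie_network_get_input_shapes(network, &shapes) != OK) { /* handle error */ }

shapes.shapes[0].shape.dims[0] = image_num;  // dims[0] is the batch for NCHW inputs
if (ie_network_reshape(network, shapes) != OK) { /* handle error */ }
ie_network_input_shapes_free(&shapes);

// ... load the network and create infer_request as in Hello Classification ...

// Start inference asynchronously, then block until the result is ready;
// -1 means "wait without a timeout".
if (ie_infer_request_infer_async(infer_request) != OK) { /* handle error */ }
if (ie_infer_request_wait(infer_request, -1) != OK) { /* handle error */ }
```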
## Building

To build the sample, please use the instructions available at the [Build the Sample Applications](../../../../../docs/IE_DG/Samples_Overview.md) section in the Inference Engine Samples guide.

## Running

To run the sample, you need to specify a model and an image:

- You can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

Running the application with the `-h` option yields the following usage message:

@@ -28,39 +57,43 @@ object_detection_sample_ssd_c [OPTION]
```sh
object_detection_sample_ssd_c [OPTION]
Options:

    -h                      Print a usage message.
    -i "<path>"             Required. Path to one or more images or folder with images.
    -m "<path>"             Required. Path to an .xml file with a trained model.
    -l "<absolute_path>"    Required for CPU plugin custom layers. Absolute path to a shared library with the kernels implementations.
          Or
    -c "<absolute_path>"    Required for GPU, MYRIAD, HDDL custom kernels. Absolute path to the .xml config file
                            with the kernels descriptions.
    -d "<device>"           Optional. Specify the target device to infer. Default value is CPU.
                            Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. Sample will look for a suitable plugin for the specified device.
    -g                      Path to the configuration file. Default value: "config".
```
Running the application with an empty list of options yields the usage message given above and an error message.

> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (\*.onnx) that do not require preprocessing.

For example, to do inference on a CPU with the OpenVINO™ toolkit person detection SSD models, run one of the following commands:

- with one image and the [person-detection-retail-0013](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) model

```sh
./object_detection_sample_ssd_c -i <path_to_image>/inputImage.bmp -m <path_to_model>/person-detection-retail-0013.xml -d CPU
```

- with several images and the [person-detection-retail-0013](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) model

```sh
./object_detection_sample_ssd_c -i <path_to_image>/inputImage1.bmp <path_to_image>/inputImage2.bmp ... -m <path_to_model>/person-detection-retail-0013.xml -d CPU
```

- with a folder of images and the [person-detection-retail-0002](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0002_description_person_detection_retail_0002.html) model

```sh
./object_detection_sample_ssd_c -i <path_to_folder_with_images> -m <path_to_model>/person-detection-retail-0002.xml -d CPU
```
## Sample Output

@@ -68,7 +101,59 @@
The application outputs several images (`out_0.bmp`, `out_1.bmp`, ...) with detected objects enclosed in rectangles. It outputs the list of
classes of the detected objects along with the respective confidence values and the coordinates of the rectangles to the standard output stream. The detection records in the log below are read out of the output blob as sketched next.
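The `[image_id, label] element, prob = ...` lines come from walking the detection output with [ie_blob_get_buffer]. A hedged sketch, assuming the common SSD output of shape `[1, 1, N, 7]` with records `[image_id, label, confidence, x_min, y_min, x_max, y_max]`; `output_blob` and `max_proposal_count` are stand-ins for the sample's variables:

```c
ie_blob_buffer_t buffer;
if (ie_blob_get_buffer(output_blob, &buffer) != OK) { /* handle error */ }
const float *detections = (const float *)(buffer.buffer);

size_t i;
for (i = 0; i < max_proposal_count; ++i) {
    const float *det = detections + i * 7;  // one 7-float record per detection
    float image_id = det[0];
    if (image_id < 0)                       // image_id of -1 marks the end of valid detections
        break;
    int label = (int)(det[1]);
    float confidence = det[2];
    // det[3..6] are the normalized box corners: x_min, y_min, x_max, y_max
    if (confidence > 0.5f)
        printf("[%lu, %d] element, prob = %f\n", (unsigned long)i, label, confidence);
}
```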
```sh
object_detection_sample_ssd_c -m person-detection-retail-0013.xml -i image_1.png image_2.jpg

[ INFO ] InferenceEngine:
         <version><number>
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 2
[ INFO ]     image_1.png
[ INFO ]     image_2.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
         CPU
         MKLDNNPlugin version ......... <version><number>
         Build ......... <version><number>
[ INFO ] Loading network:
         person-detection-retail-0013.xml
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (1699, 960) to (544, 320)
[ WARNING ] Image is resized from (614, 346) to (544, 320)
[ INFO ] Batch size is 2
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ INFO ] Start inference
[ INFO ] Processing output blobs
[0, 1] element, prob = 0.999090    (370, 201)-(634, 762) batch id : 0 WILL BE PRINTED!
[1, 1] element, prob = 0.997386    (836, 192)-(999, 663) batch id : 0 WILL BE PRINTED!
[2, 1] element, prob = 0.314753    (192, 2)-(265, 172) batch id : 0
...
[ INFO ] Image out_0.bmp created!
[ INFO ] Image out_1.bmp created!
[ INFO ] Execution successful

This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
## See Also

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[ie_infer_request_infer_async]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#gad2351010e292b6faec959a3d5a8fb60e
[ie_infer_request_wait]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#ga0c05e63e63c8d9cdd92900e82b0137c9
[ie_c_api_version]:https://docs.openvinotoolkit.org/latest/ie_c_api/ie__c__api_8h.html#a8fe3efe9cc606dcc7bec203102043e68
[ie_core_get_versions]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#ga2932e188a690393f5d594572ac5d237b
[ie_core_add_extension]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#gadded2444ba81d2d396516b72c2478f8e
[ie_core_set_config]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#gaf09d1e77cc264067e4e22ddf99f21ec1
[ie_network_get_inputs_number]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga6a3349bca66c4ba8b41a434061fccf52
[ie_network_get_input_dims]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#gac621a654b89d413041cbc2288627f6a5
[ie_network_get_input_shapes]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga5409734f25ffbb1379e876217c0bc6f3
[ie_network_get_outputs_number]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga869b8c309797f1e09f73ddffd1b57509
[ie_network_get_output_dims]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga8de7bf2f626f19eba08a2f043fc1b5d2
[ie_network_reshape]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#gac4f690afd0c2221f7db2ff9be4aa0637
[ie_blob_get_buffer]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Blob.html#ga948e0186cea6a393c113d5c399cfcb4c
@ -42,10 +42,18 @@
|
||||
#define S_ISREG(m) (((m) & S_IFMT) == S_IFREG)
|
||||
#define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR)
|
||||
|
||||
/// @brief structure to store directory names
|
||||
typedef struct dirent {
|
||||
char *d_name;
|
||||
}dirent;
|
||||
|
||||
/**
|
||||
* @brief Add directory to directory names struct
|
||||
* @param int argc - count of args
|
||||
* @param char *argv[] - array values of args
|
||||
* @param char *opts - array of options
|
||||
* @return pointer to directory names struct
|
||||
*/
|
||||
static dirent *createDirent(const wchar_t *wsFilePath) {
|
||||
dirent *d = (dirent *)malloc(sizeof(dirent));
|
||||
size_t i;
|
||||
@ -55,6 +63,11 @@ static dirent *createDirent(const wchar_t *wsFilePath) {
|
||||
return d;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief Free directory names struct
|
||||
* @param point to directory names structure
|
||||
* @return none
|
||||
*/
|
||||
static void freeDirent(dirent **d) {
|
||||
free((*d)->d_name);
|
||||
(*d)->d_name = NULL;
|
||||
@ -62,12 +75,19 @@ static void freeDirent(dirent **d) {
|
||||
*d = NULL;
|
||||
}
|
||||
|
||||
/// @brief structure to store directory data (files meta)
|
||||
typedef struct DIR {
|
||||
WIN32_FIND_DATAA FindFileData;
|
||||
HANDLE hFind;
|
||||
dirent *next;
|
||||
}DIR;
|
||||
|
||||
/**
 * @brief Check that a string ends with a given suffix
 * @param src - string to check
 * @param with - suffix to look for at the end of src
 * @return status 1(success) or 0(fail)
 */
static int endsWith(const char *src, const char *with) {
    int wl = (int)(strlen(with));
    int so = (int)(strlen(src)) - wl;
@ -77,6 +97,12 @@
    else
        return 0;
}

/**
 * @brief Check that the file handle of a directory data struct is valid
 * @param dp - pointer to the directory data struct
 * @return status 1(success) or 0(fail)
 */
static int isValid(DIR* dp) {
    if (dp->hFind != INVALID_HANDLE_VALUE && dp->FindFileData.dwReserved0) {
        return 1;
@ -84,6 +110,12 @@ static int isValid(DIR* dp) {
        return 0;
    }
}

/**
 * @brief Create a directory data struct element
 * @param dirPath - directory path string
 * @return pointer to the directory data struct element
 */
static DIR *opendir(const char *dirPath) {
    DIR *dp = (DIR *)malloc(sizeof(DIR));
    dp->next = NULL;
@ -103,6 +135,11 @@ static DIR *opendir(const char *dirPath) {
    return dp;
}

/**
 * @brief Walk through the directory data struct
 * @param dp - pointer to the directory data struct
 * @return pointer to the next directory data struct element
 */
static struct dirent *readdir(DIR *dp) {
    if (dp->next != NULL) freeDirent(&(dp->next));

@ -117,6 +154,11 @@ static struct dirent *readdir(DIR *dp) {
    return dp->next;
}

/**
 * @brief Remove a directory data struct
 * @param dp - pointer to the directory data struct
 * @return none
 */
static void closedir(DIR *dp) {
    if (dp->next) {
        freeDirent(&(dp->next));
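Taken together, these shims give the sample a small POSIX-style directory API on Windows. A minimal usage sketch, assuming the elided opendir/readdir bodies populate `d_name` as on POSIX (the helper `listBmpFiles` is illustrative, not part of the sample):

```c
// List all .bmp files in a folder via the opendir/readdir/closedir shims above.
static void listBmpFiles(const char *folder) {
    DIR *dir = opendir(folder);
    if (dir == NULL) return;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (endsWith(entry->d_name, ".bmp"))
            printf("%s\n", entry->d_name);
    }
    closedir(dir);
}
```

The sample's own readInputFilesArgument walks directories in the same way when `-i` points at a folder of images.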
@ -6,6 +6,7 @@
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#include <c_api/ie_c_api.h>
#include "object_detection_sample_ssd.h"
#include <opencv_c_wraper.h>
@ -21,8 +22,8 @@
static const char *img_msg = NULL;
static const char *input_model = NULL;
static const char *device_name = "CPU";
static const char *custom_cldnn_msg = NULL;
static const char *custom_cpu_library_msg = NULL;
static const char *custom_plugin_cfg_msg = NULL;
static const char *custom_ex_library_msg = NULL;
static const char *config_msg = NULL;
static int file_num = 0;
static char **file_paths = NULL;
@ -30,6 +31,12 @@ static char **file_paths = NULL;
const char *info = "[ INFO ] ";
const char *warn = "[ WARNING ] ";

/**
 * @brief Parse and check command line arguments
 * @param int argc - count of args
 * @param char *argv[] - array values of args
 * @return int - status 1(success) or -1(fail)
 */
int ParseAndCheckCommandLine(int argc, char *argv[]) {
    int opt = 0;
    int help = 0;
@ -53,12 +60,12 @@ int ParseAndCheckCommandLine(int argc, char *argv[]) {
            device_name = optarg;
            break;
        case 'c':
            custom_cldnn_msg = optarg;
            custom_plugin_cfg_msg = optarg;
            break;
        case 'l':
            custom_cpu_library_msg = optarg;
            custom_ex_library_msg = optarg;
            break;
        case 'f':
        case 'g':
            config_msg = optarg;
            break;
        default:
@ -69,11 +76,11 @@ int ParseAndCheckCommandLine(int argc, char *argv[]) {
    if (help)
        return -1;
    if (input_model == NULL) {
        printf("Model is required but not set. Please set -m option. \n");
        printf("Model is required but not set. Please set -m option.\n");
        return -1;
    }
    if (img_msg == NULL) {
        printf("Input is required but not set.Please set - i option.\n");
        printf("Input is required but not set. Please set -i option.\n");
        return -1;
    }

@ -138,15 +145,6 @@ void readInputFilesArgument(const char *arg) {
        }
        file_paths[file_num++] = file_path;
    }

    if (file_num) {
        printf("%sFiles were added: %d\n", info, file_num);
        for (i = 0; i < file_num; ++i) {
            printf("%s %s\n", info, file_paths[i]);
        }
    } else {
        printf("%sFiles were added: %d. Too many to display each of them.\n", info, file_num);
    }
}

/**
@ -168,10 +166,19 @@ void parseInputFilesArguments(int argc, char **argv) {
        }
        readInputFilesArgument(argv[i]);
    }

    if (file_num) {
        printf("%sFiles were added: %d\n", info, file_num);
        for (i = 0; i < file_num; ++i) {
            printf("%s %s\n", info, file_paths[i]);
        }
    } else {
        printf("%sFiles were added: %d. Too many to display each of them.\n", info, file_num);
    }
}

/**
 * @brief Convert the contents of configuration file to the ie_config_t type.
 * @brief Convert the contents of a configuration file to the ie_config_t struct.
 * @param config_file File path.
 * @param comment Separator symbol.
 * @return A pointer to the ie_config_t instance.
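The parseConfig body is elided in this view, but its use in main() below shows the intent: the `-g` file is parsed with '#' as the comment marker and the resulting list is handed to the network-loading call. A hedged sketch of that flow, assuming a hypothetical helper name `loadWithConfig` and leaving cleanup to the sample's own (unshown) routine:

```c
// Parse a -g configuration file and load the network with it.
static IEStatusCode loadWithConfig(ie_core_t *core, ie_network_t *network,
                                   const char *device, const char *config_path,
                                   ie_executable_network_t **exe) {
    ie_config_t *config = parseConfig(config_path, '#');  // '#' starts a comment line (assumed)
    IEStatusCode status = ie_core_load_network(core, network, device, config, exe);
    // NOTE: the sample frees the parsed config list with its own helper,
    // which is not visible in this diff.
    return status;
}
```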
@ -274,11 +281,14 @@ void int2str(char *str, int num) {

int main(int argc, char **argv) {
    /** This sample covers a certain topology and cannot be generalized to any object detection network **/
    // ------------------------------ Get Inference Engine API version ---------------------------------
    ie_version_t version = ie_c_api_version();
    printf("%sInferenceEngine: \n", info);
    printf("%s\n", version.api_version);
    ie_version_free(&version);

    // ------------------------------ Parsing and validation of input args ---------------------------------

    char **argv_temp = (char **)calloc(argc, sizeof(char *));
    if (!argv_temp) {
        return EXIT_FAILURE;
@ -296,14 +306,13 @@ int main(int argc, char **argv) {
    ie_infer_request_t *infer_request = NULL;
    ie_blob_t *imageInput = NULL, *output_blob = NULL;

    // --------------------------- 1. Parsing and validation of input args ---------------------------------
    if (ParseAndCheckCommandLine(argc, argv) < 0) {
        free(argv_temp);
        return EXIT_FAILURE;
    }
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 2. Read input -----------------------------------------------------------
    // --------------------------- Read input -----------------------------------------------------------
    /** file_paths stores paths to the images to be processed **/
    parseInputFilesArguments(argc, argv_temp);
    if (!file_num) {
@ -313,12 +322,14 @@
    }
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 3. Load inference engine ------------------------------------------------
    // --------------------------- Step 1. Initialize inference engine core -------------------------------------

    printf("%sLoading Inference Engine\n", info);
    IEStatusCode status = ie_core_create("", &core);
    if (status != OK)
        goto err;

    // ------------------------------ Get Available Devices ------------------------------------------------------
    ie_core_versions_t ver;
    printf("%sDevice info: \n", info);
    status = ie_core_get_versions(core, device_name, &ver);
@ -331,25 +342,25 @@
    }
    ie_core_versions_free(&ver);

    if (custom_cpu_library_msg) {
        // CPU(MKLDNN) extensions are loaded as a shared library and passed as a pointer to base extension
        status = ie_core_add_extension(core, custom_cpu_library_msg, "CPU");
    if (custom_ex_library_msg) {
        // Custom CPU extension is loaded as a shared library and passed as a pointer to base extension
        status = ie_core_add_extension(core, custom_ex_library_msg, "CPU");
        if (status != OK)
            goto err;
        printf("%sCPU Extension loaded: %s\n", info, custom_cpu_library_msg);
        printf("%sCustom extension loaded: %s\n", info, custom_ex_library_msg);
    }

    if (custom_cldnn_msg) {
        // clDNN Extensions are loaded from an .xml description and OpenCL kernel files
        ie_config_t cfg = {"CONFIG_FILE", custom_cldnn_msg, NULL};
        status = ie_core_set_config(core, &cfg, "GPU");
    if (custom_plugin_cfg_msg && (strcmp(device_name, "GPU") == 0 || strcmp(device_name, "MYRIAD") == 0 || strcmp(device_name, "HDDL") == 0)) {
        // Config for device plugin custom extension is loaded from an .xml description
        ie_config_t cfg = {"CONFIG_FILE", custom_plugin_cfg_msg, NULL};
        status = ie_core_set_config(core, &cfg, device_name);
        if (status != OK)
            goto err;
        printf("%sGPU Extension loaded: %s\n", info, custom_cldnn_msg);
        printf("%sConfig for device plugin custom extension loaded: %s\n", info, custom_plugin_cfg_msg);
    }
    // -----------------------------------------------------------------------------------------------------

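One note on the device check introduced above: in C, `==` between `char *` values compares pointer addresses rather than characters, so the condition is spelled with strcmp. An illustrative helper (not part of the sample) makes the intent explicit:

```c
#include <string.h>

// Compare a device name by content; == on char pointers would only
// compare addresses and never match user-supplied -d values.
static int isDevice(const char *device_name, const char *expected) {
    return strcmp(device_name, expected) == 0;
}
```

With it, the check reads `isDevice(device_name, "GPU") || isDevice(device_name, "MYRIAD") || isDevice(device_name, "HDDL")`.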
    // 4. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
    // Step 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
    printf("%sLoading network:\n", info);
    printf("\t%s\n", input_model);
    status = ie_core_read_network(core, input_model, NULL, &network);
@ -357,7 +368,8 @@
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 5. Prepare input blobs --------------------------------------------------
    // --------------------------- Step 3. Configure input & output ---------------------------------------------
    // --------------------------- Prepare input blobs -----------------------------------------------------
    printf("%sPreparing input blobs\n", info);

    /** SSD network has one input and one output **/
@ -494,9 +506,8 @@
    size_t batchSize = shapes2.shapes[0].shape.dims[0];
    ie_network_input_shapes_free(&shapes2);
    printf("%sBatch size is %zu\n", info, batchSize);
    // -----------------------------------------------------------------------------------------------------

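The hunk above elides the actual input configuration. For orientation, a sketch of what "prepare input blobs" typically does with the C API: query the first input's name and set its precision to U8 so raw image bytes can be fed directly. The helper `prepareFirstInput` is illustrative, not the sample's own code:

```c
#include <c_api/ie_c_api.h>

// Query the first input's name and set its precision to U8.
static IEStatusCode prepareFirstInput(ie_network_t *network, char **input_name) {
    IEStatusCode status = ie_network_get_input_name(network, 0, input_name);
    if (status != OK)
        return status;
    return ie_network_set_input_precision(network, *input_name, U8);
}
```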
    // --------------------------- 6. Prepare output blobs -------------------------------------------------
    // --------------------------- Prepare output blobs ----------------------------------------------------
    printf("%sPreparing output blobs\n", info);

    size_t output_num = 0;
@ -534,7 +545,7 @@
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 7. Loading model to the device ------------------------------------------
    // --------------------------- Step 4. Loading model to the device ------------------------------------------
    printf("%sLoading model to the device\n", info);
    if (config_msg) {
        ie_config_t *config = parseConfig(config_msg, '#');
@ -552,15 +563,14 @@

    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 8. Create infer request -------------------------------------------------
    // --------------------------- Step 5. Create infer request -------------------------------------------------
    printf("%sCreate infer request\n", info);
    status = ie_exec_network_create_infer_request(exe_network, &infer_request);
    if (status != OK)
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 9. Prepare input --------------------------------------------------------

    // --------------------------- Step 6. Prepare input --------------------------------------------------------

    /** Creating input blob **/
    status = ie_infer_request_get_blob(infer_request, imageInputName, &imageInput);
@ -624,7 +634,7 @@
    }
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 10. Do inference ---------------------------------------------------------
    // --------------------------- Step 7. Do inference --------------------------------------------------------
    printf("%sStart inference\n", info);
    status = ie_infer_request_infer_async(infer_request);
    status |= ie_infer_request_wait(infer_request, -1);
@ -632,7 +642,7 @@
        goto err;
    // -----------------------------------------------------------------------------------------------------

    // --------------------------- 11. Process output -------------------------------------------------------
    // --------------------------- Step 8. Process output ------------------------------------------------------
    printf("%sProcessing output blobs\n", info);

    status = ie_infer_request_get_blob(infer_request, output_name, &output_blob);
@ -706,6 +716,7 @@
    // -----------------------------------------------------------------------------------------------------

    printf("%sExecution successful\n", info);
    printf("\nThis sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n");

    for (i = 0; i < image_num; ++i) {
        free(classes[i]);
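The detection parsing itself sits inside the elided hunks above. As a hedged sketch of that step, assuming the standard SSD output layout of 7 floats per detection and the `maxProposalCount`/`objectSize` values that the full sample derives from the output blob dims:

```c
// Each detection row: [image_id, label, confidence, xmin, ymin, xmax, ymax].
// A negative image_id marks the end of valid detections.
ie_blob_buffer_t buffer;
if (ie_blob_get_buffer(output_blob, &buffer) == OK) {
    const float *detections = (const float *)(buffer.buffer);
    int p;
    for (p = 0; p < maxProposalCount; p++) {
        float image_id = detections[p * objectSize + 0];
        float confidence = detections[p * objectSize + 2];
        if (image_id < 0)
            break;                       // no more valid detections
        if (confidence > 0.5f) {
            // coordinates at indices 3..6 are normalized to [0, 1]
        }
    }
}
```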
@ -13,19 +13,19 @@ static const char *help_message = "Print a usage message.";
static const char* model_message = "Required. Path to an .xml file with a trained model.";

/// @brief message for images argument
static const char *image_message = "Required. Path to one or more .bmp images.";
static const char *image_message = "Required. Path to one or more images or folder with images.";

/// @brief message for assigning cnn calculation to device
static const char *target_device_message = "Optional. Specify the target device to infer on (the list of available devices is shown below). " \
static const char *target_device_message = "Optional. Specify the target device to infer on. " \
"Default value is CPU. Use \"-d HETERO:<comma-separated_devices_list>\" format to specify HETERO plugin. " \
"Sample will look for a suitable plugin for device specified";
"Sample will look for a suitable plugin for device specified.";

/// @brief message for clDNN custom kernels desc
static const char *custom_cldnn_message = "Required for GPU custom kernels. "\
"Absolute path to the .xml file with the kernels descriptions.";
/// @brief message for plugin custom kernels desc
static const char *custom_plugin_config_message = "Required for GPU, MYRIAD, HDDL custom kernels. "\
"Absolute path to the .xml config file with the kernels descriptions.";

/// @brief message for user library argument
static const char *custom_cpu_library_message = "Required for CPU custom layers. " \
/// @brief message for user extension library argument
static const char *custom_ex_library_message = "Required for CPU plugin custom layers. " \
"Absolute path to a shared library with the kernels implementations.";

/// @brief message for config argument
@ -34,14 +34,14 @@ static const char *config_message = "Path to the configuration file. Default val
 * \brief This function shows a help message
 */
static void showUsage() {
    printf("\nobject_detection_sample_ssd [OPTION]\n");
    printf("\nobject_detection_sample_ssd_c [OPTION]\n");
    printf("Options:\n\n");
    printf(" -h %s\n", help_message);
    printf(" -m \"<path>\" %s\n", model_message);
    printf(" -i \"<path>\" %s\n", image_message);
    printf(" -l \"<absolute_path>\" %s\n", custom_cpu_library_message);
    printf(" -l \"<absolute_path>\" %s\n", custom_ex_library_message);
    printf(" Or\n");
    printf(" -c \"<absolute_path>\" %s\n", custom_cldnn_message);
    printf(" -c \"<absolute_path>\" %s\n", custom_plugin_config_message);
    printf(" -d \"<device>\" %s\n", target_device_message);
    printf(" -g %s\n", config_message);
}

@ -58,6 +58,13 @@ char *optarg;
    fputc(c, stderr);\
    fputs("\'\n", stderr);}

/**
 * @brief Check command line arguments with available options
 * @param int argc - count of args
 * @param char *argv[] - array values of args
 * @param char *opts - array of options
 * @return option character or -1(fail)
 */
static int getopt(int argc, char **argv, char *opts) {
    static int sp = 1;
    register int c = 0;