[C API] Enable hello_nv12_input_classification samples for C APIs of OV API 2.0 (#12031)

* Define new ppp API for nv12

* Add new ppp API function

* Add new ppp API unit test

* Add hello nv12 input classification ov

* Fix the clang-format issue

* Modify the function called is_supported_image_size

* Update code as suggested

* Add hello_nv12_input_classification e2e test

* clang-format openvinotoolkit

* Fix the doc error in CI

Co-authored-by: River Li <river.li@intel.com>
RICKIE777 2022-07-07 11:36:55 +08:00 committed by GitHub
parent e8bd70f273
commit 70d967ffb6
7 changed files with 675 additions and 1 deletion


@@ -0,0 +1,6 @@
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
ie_add_sample(NAME hello_nv12_input_classification_ov_c
SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/main.c")


@@ -0,0 +1,90 @@
# Hello NV12 Input Classification C Sample for OpenVINO 2.0 C-API
This sample demonstrates how to execute inference of image classification networks, such as AlexNet, with images in the NV12 color format using the Synchronous Inference Request API.
## How It Works
Upon start-up, the sample application reads command-line parameters, then loads the specified network and an image in the NV12 color format into the Inference Engine plugin. The sample then creates a synchronous inference request object. When inference is done, the application writes the results to the standard output stream.
A detailed description of each sample step is available in the [Integration Steps](../../../docs/OV_Runtime_UG/integrate_with_your_application.md) section of the "Integrate OpenVINO™ Runtime with Your Application" guide.
## Building
To build the sample, use the instructions available in the [Build the Sample Applications](../../../docs/OV_Runtime_UG/Samples_Overview.md) section of the Inference Engine Samples guide.
## Running
To run the sample, you need to specify a model and an image:
- You can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
The sample accepts an uncompressed image in the NV12 color format. To run the sample, you need to
convert your BGR/RGB image to NV12. To do this, you can use one of the widely available tools such
as FFmpeg\* or GStreamer\*. The following command shows how to convert an ordinary image into an
uncompressed NV12 image using FFmpeg:
```sh
ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv
```
> **NOTES**:
>
> - Because the sample reads raw image files, you should provide a correct image size along with the
> image path. The sample expects the logical size of the image, not the buffer size. For example,
> for a 640x480 BGR/RGB image the corresponding NV12 logical image size is also 640x480, whereas the
> buffer size is 640x720.
> - By default, this sample expects that network input has BGR channels order. If you trained your
> model to work with RGB order, you need to reconvert your model using the Model Optimizer tool
> with the `--reverse_input_channels` argument. For more information about the argument,
> refer to **When to Reverse Input Channels** section of
> [Embedding Preprocessing Computation](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model.md).
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```sh
python <path_to_omz_tools>/downloader.py --name alexnet
```
2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
```sh
python <path_to_omz_tools>/converter.py --name alexnet
```
3. Perform inference of NV12 image using `alexnet` model on a `CPU`, for example:
```sh
<path_to_sample>/hello_nv12_input_classification_ov_c <path_to_model>/alexnet.xml <path_to_image>/cat.yuv 300x300 CPU
```
## Sample Output
The application outputs top-10 inference results.
```
Top 10 results:
Image <path_to_image>/cat.yuv
classid probability
------- -----------
876 0.125426
435 0.120252
285 0.068099
282 0.056738
281 0.032151
36 0.027748
94 0.027691
999 0.026507
335 0.021384
186 0.017978
This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```


@@ -0,0 +1,335 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "c_api/ov_c_api.h"
/**
* @brief Struct to store infer results
*/
struct infer_result {
size_t class_id;
float probability;
};
/**
 * @brief qsort comparator: orders infer results by descending probability,
 * breaking ties by ascending class id
 * @param a Pointer to the first infer_result
 * @param b Pointer to the second infer_result
 * @return Negative, zero, or positive ordering value for qsort
 */
int compare(const void* a, const void* b) {
const struct infer_result* sa = (const struct infer_result*)a;
const struct infer_result* sb = (const struct infer_result*)b;
if (sa->probability < sb->probability) {
return 1;
} else if ((sa->probability == sb->probability) && (sa->class_id > sb->class_id)) {
return 1;
} else if (sa->probability > sb->probability) {
return -1;
}
return 0;
}
void infer_result_sort(struct infer_result* results, size_t result_size) {
qsort(results, result_size, sizeof(struct infer_result), compare);
}
/**
 * @brief Convert an output tensor to an array of infer results for processing
 * @param tensor Output tensor to convert
 * @param result_size Receives the number of results
 * @return Newly allocated array of infer_result (caller must free), or NULL on failure
 */
struct infer_result* tensor_to_infer_result(ov_tensor_t* tensor, size_t* result_size) {
ov_status_e status = ov_tensor_get_size(tensor, result_size);
if (status != OK)
return NULL;
struct infer_result* results = (struct infer_result*)malloc(sizeof(struct infer_result) * (*result_size));
if (!results)
return NULL;
void* data = NULL;
status = ov_tensor_get_data(tensor, &data);
if (status != OK) {
free(results);
return NULL;
}
float* float_data = (float*)(data);
size_t i;
for (i = 0; i < *result_size; ++i) {
results[i].class_id = i;
results[i].probability = float_data[i];
}
return results;
}
/**
 * @brief Print infer results
 * @param results Array of infer results
 * @param result_size Number of results to print
 * @param img_path Image path
 * @return none
 */
void print_infer_result(struct infer_result* results, size_t result_size, const char* img_path) {
printf("\nImage %s\n", img_path);
printf("\nclassid probability\n");
printf("------- -----------\n");
size_t i;
for (i = 0; i < result_size; ++i) {
printf("%zu %f\n", results[i].class_id, results[i].probability);
}
}
void print_model_input_output_info(ov_model_t* model) {
char* friendly_name = NULL;
ov_model_get_friendly_name(model, &friendly_name);
printf("[INFO] model name: %s \n", friendly_name);
ov_free(friendly_name);
}
/**
 * @brief Check that the image has a supported width and height
 * @param size_str Image size string in WIDTHxHEIGHT format
 * @param width Receives the parsed image width
 * @param height Receives the parsed image height
 * @return true on success, false on failure
 */
bool is_supported_image_size(const char* size_str, size_t* width, size_t* height) {
char* p_end = NULL;
size_t _width = 0, _height = 0;
_width = strtoul(size_str, &p_end, 10);
_height = strtoul(p_end + 1, NULL, 10);
if (_width > 0 && _height > 0) {
if (_width % 2 == 0 && _height % 2 == 0) {
*width = _width;
*height = _height;
return true;
} else {
printf("Unsupported image size, width and height must be even numbers \n");
return false;
}
} else {
printf("Incorrect format of image size parameter, expected WIDTHxHEIGHT, "
"actual: %s\n",
size_str);
return false;
}
}
size_t read_image_from_file(const char* img_path, unsigned char* img_data, size_t size) {
FILE* fp = fopen(img_path, "rb");
size_t read_size = 0;
if (fp) {
fseek(fp, 0, SEEK_END);
if (ftell(fp) >= (long)size) {
fseek(fp, 0, SEEK_SET);
read_size = fread(img_data, 1, size, fp);
}
fclose(fp);
}
return read_size;
}
#define CHECK_STATUS(return_status) \
if (return_status != OK) { \
fprintf(stderr, "[ERROR] return status %d, line %d\n", return_status, __LINE__); \
goto err; \
}
int main(int argc, char** argv) {
// -------- Check input parameters --------
if (argc != 5) {
printf("Usage : ./hello_nv12_input_classification_ov_c <path_to_model> <path_to_image> "
"<WIDTHxHEIGHT> <device_name>\n");
return EXIT_FAILURE;
}
size_t input_width = 0, input_height = 0, img_size = 0;
if (!is_supported_image_size(argv[3], &input_width, &input_height)) {
fprintf(stderr, "ERROR is_supported_image_size, line %d\n", __LINE__);
return EXIT_FAILURE;
}
unsigned char* img_data = NULL;
ov_core_t* core = NULL;
ov_model_t* model = NULL;
ov_tensor_t* tensor = NULL;
ov_preprocess_t* preprocess = NULL;
ov_preprocess_input_info_t* input_info = NULL;
ov_model_t* new_model = NULL;
ov_preprocess_input_tensor_info_t* input_tensor_info = NULL;
ov_preprocess_input_process_steps_t* input_process = NULL;
ov_preprocess_input_model_info_t* p_input_model = NULL;
ov_preprocess_output_info_t* output_info = NULL;
ov_preprocess_output_tensor_info_t* output_tensor_info = NULL;
ov_compiled_model_t* compiled_model = NULL;
ov_infer_request_t* infer_request = NULL;
ov_tensor_t* output_tensor = NULL;
struct infer_result* results = NULL;
char* input_tensor_name;
char* output_tensor_name;
ov_output_node_list_t input_nodes;
ov_output_node_list_t output_nodes;
// -------- Get OpenVINO runtime version --------
ov_version_t version;
CHECK_STATUS(ov_get_version(&version));
printf("---- OpenVINO INFO----\n");
printf("description : %s \n", version.description);
printf("build number: %s \n", version.buildNumber);
ov_version_free(&version);
// -------- Parsing and validation of input arguments --------
const char* input_model = argv[1];
const char* input_image_path = argv[2];
const char* device_name = argv[4];
// -------- Step 1. Initialize OpenVINO Runtime Core --------
CHECK_STATUS(ov_core_create("", &core));
// -------- Step 2. Read a model --------
printf("[INFO] Loading model files: %s\n", input_model);
CHECK_STATUS(ov_core_read_model(core, input_model, NULL, &model));
print_model_input_output_info(model);
CHECK_STATUS(ov_model_get_outputs(model, &output_nodes));
if (output_nodes.num != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 output only, line %d\n", __LINE__);
goto err;
}
CHECK_STATUS(ov_model_get_inputs(model, &input_nodes));
if (input_nodes.num != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 input only, line %d\n", __LINE__);
goto err;
}
CHECK_STATUS(ov_node_get_tensor_name(&input_nodes, 0, &input_tensor_name));
CHECK_STATUS(ov_node_get_tensor_name(&output_nodes, 0, &output_tensor_name));
// -------- Step 3. Configure preprocessing --------
CHECK_STATUS(ov_preprocess_create(model, &preprocess));
// 1) Select input with 'input_tensor_name' tensor name
CHECK_STATUS(ov_preprocess_get_input_info_by_name(preprocess, input_tensor_name, &input_info));
// 2) Set input type
// - as 'u8' precision
// - set color format to NV12 (single plane)
// - static spatial dimensions for resize preprocessing operation
CHECK_STATUS(ov_preprocess_input_get_tensor_info(input_info, &input_tensor_info));
CHECK_STATUS(ov_preprocess_input_tensor_info_set_element_type(input_tensor_info, U8));
CHECK_STATUS(ov_preprocess_input_tensor_info_set_color_format(input_tensor_info, NV12_SINGLE_PLANE));
CHECK_STATUS(
ov_preprocess_input_tensor_info_set_spatial_static_shape(input_tensor_info, input_height, input_width));
// 3) Pre-processing steps:
// a) Convert to 'float'. This is to have color conversion more accurate
// b) Convert to BGR: Assumes that model accepts images in BGR format. For RGB, change it manually
// c) Resize image from tensor's dimensions to model ones
CHECK_STATUS(ov_preprocess_input_get_preprocess_steps(input_info, &input_process));
CHECK_STATUS(ov_preprocess_input_convert_element_type(input_process, F32));
CHECK_STATUS(ov_preprocess_input_convert_color(input_process, BGR));
CHECK_STATUS(ov_preprocess_input_resize(input_process, RESIZE_LINEAR));
// 4) Set model data layout (Assuming model accepts images in NCHW layout)
CHECK_STATUS(ov_preprocess_input_get_model_info(input_info, &p_input_model));
ov_layout_t model_layout = {'N', 'C', 'H', 'W'};
CHECK_STATUS(ov_preprocess_input_model_set_layout(p_input_model, model_layout));
// 5) Apply preprocessing to an input with 'input_tensor_name' name of loaded model
CHECK_STATUS(ov_preprocess_build(preprocess, &new_model));
// -------- Step 4. Loading a model to the device --------
ov_property_t property;
CHECK_STATUS(ov_core_compile_model(core, new_model, device_name, &compiled_model, &property));
// -------- Step 5. Create an infer request --------
CHECK_STATUS(ov_compiled_model_create_infer_request(compiled_model, &infer_request));
// -------- Step 6. Prepare input data --------
img_size = input_width * (input_height * 3 / 2);
img_data = (unsigned char*)calloc(img_size, sizeof(unsigned char));
if (NULL == img_data) {
fprintf(stderr, "[ERROR] calloc returned NULL, line %d\n", __LINE__);
goto err;
}
if (img_size != read_image_from_file(input_image_path, img_data, img_size)) {
fprintf(stderr, "[ERROR] Image dimensions do not match the NV12 file size, line %d\n", __LINE__);
goto err;
}
ov_element_type_e input_type = U8;
size_t batch = 1;
ov_shape_t input_shape = {4, {batch, input_height * 3 / 2, input_width, 1}};
CHECK_STATUS(ov_tensor_create_from_host_ptr(input_type, input_shape, img_data, &tensor));
// -------- Step 6. Set input tensor --------
// Set the input tensor by tensor name to the InferRequest
CHECK_STATUS(ov_infer_request_set_tensor(infer_request, input_tensor_name, tensor));
// -------- Step 7. Do inference --------
// Running the request synchronously
CHECK_STATUS(ov_infer_request_infer(infer_request));
// -------- Step 8. Process output --------
CHECK_STATUS(ov_infer_request_get_out_tensor(infer_request, 0, &output_tensor));
// Print classification results
size_t results_num;
results = tensor_to_infer_result(output_tensor, &results_num);
if (!results) {
goto err;
}
infer_result_sort(results, results_num);
size_t top = 10;
if (top > results_num) {
top = results_num;
}
printf("\nTop %zu results:\n", top);
print_infer_result(results, top, input_image_path);
// -------- free allocated resources --------
err:
// 'results' is non-NULL only when the whole pipeline succeeded,
// so it also determines the process exit code
free(img_data);
ov_output_node_list_free(&output_nodes);
ov_output_node_list_free(&input_nodes);
if (output_tensor)
ov_tensor_free(output_tensor);
if (infer_request)
ov_infer_request_free(infer_request);
if (compiled_model)
ov_compiled_model_free(compiled_model);
if (output_tensor_info)
ov_preprocess_output_tensor_info_free(output_tensor_info);
if (output_info)
ov_preprocess_output_info_free(output_info);
if (p_input_model)
ov_preprocess_input_model_info_free(p_input_model);
if (input_process)
ov_preprocess_input_process_steps_free(input_process);
if (input_tensor_info)
ov_preprocess_input_tensor_info_free(input_tensor_info);
if (input_info)
ov_preprocess_input_info_free(input_info);
if (preprocess)
ov_preprocess_free(preprocess);
if (new_model)
ov_model_free(new_model);
if (tensor)
ov_tensor_free(tensor);
if (model)
ov_model_free(model);
if (core)
ov_core_free(core);
if (results) {
free(results);
return EXIT_SUCCESS;
}
return EXIT_FAILURE;
}


@@ -269,6 +269,21 @@ typedef enum {
U64 //!< u64 element type
} ov_element_type_e;
/**
 * @enum ov_color_format_e
 * @brief Color formats supported by preprocessing
 */
typedef enum {
UNDEFINE = 0, //!< Undefined color format
NV12_SINGLE_PLANE, //!< Image in NV12 format as a single tensor
NV12_TWO_PLANES, //!< Image in NV12 format represented as separate tensors for Y and UV planes
I420_SINGLE_PLANE, //!< Image in I420 (YUV) format as a single tensor
I420_THREE_PLANES, //!< Image in I420 format represented as separate tensors for Y, U and V planes
RGB, //!< Image in RGB interleaved format (3 channels)
BGR, //!< Image in BGR interleaved format (3 channels)
RGBX, //!< Image in RGBX interleaved format (4 channels)
BGRX //!< Image in BGRX interleaved format (4 channels)
} ov_color_format_e;
/**
* @struct ov_layout_t
*/
@@ -791,6 +806,39 @@ OPENVINO_C_API(ov_status_e) ov_preprocess_input_resize(ov_preprocess_input_proce
OPENVINO_C_API(ov_status_e) ov_preprocess_input_tensor_info_set_element_type(ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const ov_element_type_e element_type);
/**
* @brief Set ov_preprocess_input_tensor_info_t color format.
* @param preprocess_input_tensor_info A pointer to the ov_preprocess_input_tensor_info_t.
* @param colorFormat The color format of the input
*/
OPENVINO_C_API(ov_status_e) ov_preprocess_input_tensor_info_set_color_format(ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const ov_color_format_e colorFormat);
/**
* @brief Set ov_preprocess_input_tensor_info_t spatial_static_shape.
* @param preprocess_input_tensor_info A pointer to the ov_preprocess_input_tensor_info_t.
* @param input_height The height of input
* @param input_width The width of input
*/
OPENVINO_C_API(ov_status_e) ov_preprocess_input_tensor_info_set_spatial_static_shape(ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const size_t input_height, const size_t input_width);
/**
* @brief Convert ov_preprocess_input_process_steps_t element type.
* @param preprocess_input_process_steps A pointer to the ov_preprocess_input_process_steps_t.
* @param element_type The desired element type of the input after conversion.
*/
OPENVINO_C_API(ov_status_e) ov_preprocess_input_convert_element_type(ov_preprocess_input_process_steps_t* preprocess_input_process_steps,
const ov_element_type_e element_type);
/**
* @brief Convert ov_preprocess_input_process_steps_t color.
* @param preprocess_input_process_steps A pointer to the ov_preprocess_input_process_steps_t.
* @param colorFormat The source color format to convert from.
*/
OPENVINO_C_API(ov_status_e) ov_preprocess_input_convert_color(ov_preprocess_input_process_steps_t* preprocess_input_process_steps,
const ov_color_format_e colorFormat);
/**
* @brief Helper function to reuse element type and shape from user's created tensor.
* @param preprocess_input_tensor_info A pointer to the ov_preprocess_input_tensor_info_t.


@@ -154,6 +154,17 @@ std::map<ov_preprocess_resize_algorithm_e, ov::preprocess::ResizeAlgorithm> resi
{ov_preprocess_resize_algorithm_e::RESIZE_LINEAR, ov::preprocess::ResizeAlgorithm::RESIZE_LINEAR},
{ov_preprocess_resize_algorithm_e::RESIZE_NEAREST, ov::preprocess::ResizeAlgorithm::RESIZE_NEAREST}};
std::map<ov_color_format_e, ov::preprocess::ColorFormat> color_format_map = {
{ov_color_format_e::UNDEFINE, ov::preprocess::ColorFormat::UNDEFINED},
{ov_color_format_e::NV12_SINGLE_PLANE, ov::preprocess::ColorFormat::NV12_SINGLE_PLANE},
{ov_color_format_e::NV12_TWO_PLANES, ov::preprocess::ColorFormat::NV12_TWO_PLANES},
{ov_color_format_e::I420_SINGLE_PLANE, ov::preprocess::ColorFormat::I420_SINGLE_PLANE},
{ov_color_format_e::I420_THREE_PLANES, ov::preprocess::ColorFormat::I420_THREE_PLANES},
{ov_color_format_e::RGB, ov::preprocess::ColorFormat::RGB},
{ov_color_format_e::BGR, ov::preprocess::ColorFormat::BGR},
{ov_color_format_e::RGBX, ov::preprocess::ColorFormat::RGBX},
{ov_color_format_e::BGRX, ov::preprocess::ColorFormat::BGRX}};
std::map<ov_element_type_e, ov::element::Type> element_type_map = {
{ov_element_type_e::UNDEFINED, ov::element::undefined},
{ov_element_type_e::DYNAMIC, ov::element::dynamic},
@@ -186,6 +197,8 @@ ov_element_type_e find_ov_element_type_e(ov::element::Type type) {
#define GET_OV_ELEMENT_TYPE(a) element_type_map[a]
#define GET_CAPI_ELEMENT_TYPE(a) find_ov_element_type_e(a)
#define GET_OV_COLOR_FORMAT(a) (color_format_map.find(a) == color_format_map.end()?ov::preprocess::ColorFormat::UNDEFINED:color_format_map[a])
#define CATCH_OV_EXCEPTION(StatusCode, ExceptionType) catch (const InferenceEngine::ExceptionType&) {return ov_status_e::StatusCode;}
#define CATCH_OV_EXCEPTIONS \
@@ -940,6 +953,54 @@ ov_status_e ov_preprocess_input_tensor_info_set_layout(ov_preprocess_input_tenso
return ov_status_e::OK;
}
ov_status_e ov_preprocess_input_tensor_info_set_color_format(ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const ov_color_format_e colorFormat) {
if (!preprocess_input_tensor_info) {
return ov_status_e::GENERAL_ERROR;
}
try {
preprocess_input_tensor_info->object->set_color_format(GET_OV_COLOR_FORMAT(colorFormat));
} CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_input_tensor_info_set_spatial_static_shape(ov_preprocess_input_tensor_info_t* preprocess_input_tensor_info,
const size_t input_height, const size_t input_width) {
if (!preprocess_input_tensor_info) {
return ov_status_e::GENERAL_ERROR;
}
try {
preprocess_input_tensor_info->object->set_spatial_static_shape(input_height, input_width);
} CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_input_convert_element_type(ov_preprocess_input_process_steps_t* preprocess_input_process_steps,
const ov_element_type_e element_type) {
if (!preprocess_input_process_steps) {
return ov_status_e::GENERAL_ERROR;
}
try {
preprocess_input_process_steps->object->convert_element_type(GET_OV_ELEMENT_TYPE(element_type));
} CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_input_convert_color(ov_preprocess_input_process_steps_t* preprocess_input_process_steps,
const ov_color_format_e colorFormat) {
if (!preprocess_input_process_steps) {
return ov_status_e::GENERAL_ERROR;
}
try {
preprocess_input_process_steps->object->convert_color(GET_OV_COLOR_FORMAT(colorFormat));
} CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_get_output_info(const ov_preprocess_t* preprocess,
ov_preprocess_output_info_t **preprocess_output_info) {
if (!preprocess || !preprocess_output_info) {


@@ -595,6 +595,140 @@ TEST(ov_preprocess, ov_preprocess_input_tensor_info_set_layout) {
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_input_tensor_info_set_color_format) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create("", &core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_input_info_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_get_input_info_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_input_tensor_info_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_input_get_tensor_info(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
OV_ASSERT_OK(ov_preprocess_input_tensor_info_set_color_format(input_tensor_info, ov_color_format_e::NV12_SINGLE_PLANE));
ov_preprocess_input_tensor_info_free(input_tensor_info);
ov_preprocess_input_info_free(input_info);
ov_preprocess_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_input_tensor_info_set_spatial_static_shape) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create("", &core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_input_info_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_get_input_info_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_input_tensor_info_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_input_get_tensor_info(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
size_t input_height = 500;
size_t input_width = 500;
OV_ASSERT_OK(ov_preprocess_input_tensor_info_set_spatial_static_shape(input_tensor_info, input_height, input_width));
ov_preprocess_input_tensor_info_free(input_tensor_info);
ov_preprocess_input_info_free(input_info);
ov_preprocess_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_input_convert_element_type) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create("", &core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_input_info_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_get_input_info_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_input_process_steps_t* input_process = nullptr;
OV_ASSERT_OK(ov_preprocess_input_get_preprocess_steps(input_info, &input_process));
ASSERT_NE(nullptr, input_process);
ov_preprocess_input_tensor_info_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_input_get_tensor_info(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
OV_ASSERT_OK(ov_preprocess_input_tensor_info_set_element_type(input_tensor_info, ov_element_type_e::U8));
OV_ASSERT_OK(ov_preprocess_input_convert_element_type(input_process, ov_element_type_e::F32));
ov_preprocess_input_tensor_info_free(input_tensor_info);
ov_preprocess_input_process_steps_free(input_process);
ov_preprocess_input_info_free(input_info);
ov_preprocess_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_input_convert_color) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create("", &core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_input_info_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_get_input_info_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_input_process_steps_t* input_process = nullptr;
OV_ASSERT_OK(ov_preprocess_input_get_preprocess_steps(input_info, &input_process));
ASSERT_NE(nullptr, input_process);
ov_preprocess_input_tensor_info_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_input_get_tensor_info(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
OV_ASSERT_OK(ov_preprocess_input_tensor_info_set_color_format(input_tensor_info, ov_color_format_e::NV12_SINGLE_PLANE));
OV_ASSERT_OK(ov_preprocess_input_convert_color(input_process, ov_color_format_e::BGR));
ov_preprocess_input_tensor_info_free(input_tensor_info);
ov_preprocess_input_process_steps_free(input_process);
ov_preprocess_input_info_free(input_info);
ov_preprocess_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_get_output_info) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create("", &core));


@@ -23,7 +23,7 @@ log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=s
test_data_fp32 = get_tests(cmd_params={'i': [os.path.join('224x224', 'dog6.yuv')],
'm': [os.path.join('squeezenet1.1', 'FP32', 'squeezenet1.1.xml')],
'size': ['224x224'],
'sample_type': ['C++', 'C'],
'sample_type': ['C++', 'C', 'C2.0'],
'd': ['CPU']},
use_device=['d']
)