[C API] Refine OV 2.0 C APIs for better expansibility and compatibility (#12187)

* [C API 2.0] Redefine partial shape and property wrapper

1. Use a dimension object to initialize partial_shape rather than a string
2. Use void* to unify property values rather than a union (see the sketch below)
3. Rename some C API names to strictly align with the C++ methods

Change-Id: I64b5c521461264dba2d23543808584632fbd6d4b
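As an illustrative sketch of the first two points: the dimension-based calls are labeled hypothetical (their exact signatures are not shown in this diff), while the compile call appears verbatim in the updated sample further down, where `core`, `new_model`, `device_name`, and `compiled_model` are set up.

```c
/* Hypothetical sketch only: the ov_dimensions type and creation signature
 * are assumptions based on the names introduced in this PR. */
ov_dimensions_t* dims = NULL;
ov_dimensions_create_dynamic(&dims);  /* assumed signature */
/* ... append static and dynamic dimensions to build a partial shape ... */

/* Properties are now passed as an opaque pointer instead of a typed union;
 * NULL selects the defaults (verbatim usage from the updated sample). */
ov_property_t* property = NULL;
ov_core_compile_model(core, new_model, device_name, &compiled_model, property);
```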

* [C API 2.0] Memory checks and implement all reshape interfaces

1. Memory-safe create and free (see the sketch below)
2. Implement all reshape interfaces to align with the C++ interface
3. Rename some APIs to align with the C++ interface

Change-Id: Ib5e4192bdbd8a11cdd7e30b1dc84881ba3f2d505
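A condensed sketch of the create/free pairing this refers to, using only calls that appear in the updated samples in this commit (the model path is a placeholder):

```c
#include <stdio.h>
#include "openvino/c/openvino.h"

int main(void) {
    ov_core_t* core = NULL;
    ov_model_t* model = NULL;
    if (ov_core_create(&core) != OK)
        goto err;
    if (ov_core_read_model(core, "model.xml", NULL, &model) != OK)
        goto err;
    printf("model loaded\n");
err:
    /* Free only what was successfully created, in reverse order. */
    if (model)
        ov_model_free(model);
    if (core)
        ov_core_free(core);
    return 0;
}
```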

* Rename prepostprocess to strictly align with the C++ names

Change-Id: I7a4d0a6e835b2d6ed01cd218ac81b1b621f600bf

* [C API 2.0] Redefine the ov_node and ov_model interfaces

1. Redefine the ov_node and ov_model interfaces
2. Rename some APIs to align with the C++ interface
3. Remove some redundant code
4. Align CMakeLists.txt with the OpenVINO 2.0 convention

Change-Id: I4d5e92157e7891319c9754da8e70b9c6150ae2e3

* Redefine ov_layout to support more than one character (see the sketch below)

Change-Id: I39e5389246cf3edcc2f4734d13157457773d89b8
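The resulting string-based layout flow, taken from the updated hello_classification sample in this commit (`input_tensor_info` is assumed to come from the surrounding preprocessing setup):

```c
/* A layout is now created from a descriptor string rather than a fixed
 * array of single characters such as {'N', 'H', 'W', 'C'}. */
ov_layout_t* input_layout = NULL;
const char* input_layout_desc = "NHWC";
if (ov_layout_create(&input_layout, input_layout_desc) == OK) {
    ov_preprocess_inputtensorinfo_set_layout(input_tensor_info, input_layout);
    ov_layout_free(input_layout);
}
```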

* Add interface to get partial_shape from node

Change-Id: I8cef77db581b43d2f0a9ac48cfdc09a86e39b694

* Use unique_ptr to prevent memory leaks in case of exceptions

Change-Id: I150b375108a3eded400bdde087ab5c858958c25f

* Put the legacy C API and the 2.0 C API into one library

Change-Id: I067a55a00e78b80cdede5ae7adad316ee98cabd1

* Only keep OV 2.0 C samples and move legacy C samples to a legacy directory

1. Move legacy C samples to the tools/legacy/c directory
2. Keep OV 2.0 C samples in the samples/c directory

Change-Id: I05880d17ee7cb7eafc6853ebb5394f3969258592

* Fix format and log issues

Change-Id: I05d909b3d7046d41b807e35808a993bb09672e68

* Restore documentation updates

Change-Id: I82dd92081c0aa1a2d7dca7f114cf6a35131d6f92

* Change config map data to be const

Change-Id: I9043859e8308c01d80794dc8280ae236947f3bbb

* Update API documentation

Change-Id: I35bc149bad0de17424d95f48c3027030b708e147

* Enable clang-format

Change-Id: I335639c05fb5fb38e682dbb72bfaf78380c0adaf

* Fix clang-format issues after enabling clang-format for ie_c_api.c

Change-Id: Idcb4dda9d66e47a169eb79a9c4fe7a7d4df838db

* Split the header file and C file into multiple files

Change-Id: I7c3398966809ef70d7fcb799f2d612a33b471e31

* Fix clang-format issue

Change-Id: Ibd18b45537c8f3bcbb5b995c90ae28999161d54d

* Add a single ov_dimension_create method

Change-Id: Icd06b50e4f4df8f7897c7c4327edb67178162544

* Remove all legacy C samples completely

Change-Id: I098360a0d9002340e8769074181f7997b43bce8f

* Update ov_property_value to replace only the ptr

Change-Id: I9f5a11b4cf07e759c1998e78e2624f0a1266d9b0

* Split more header files and add a static dimension API

Change-Id: I14e4fb8585fc629480c06b86bd8219e75a9682f7

* Rename ov_dimensions_create to ov_dimensions_create_dynamic

Change-Id: I50c02749cea96f12bcea702b53a89c65b289550e

* Rename status and get_out_tensor (ov_infer_request_get_out_tensor -> ov_infer_request_get_output_tensor)

Change-Id: I762c1d0c5a069454506fe3c04283c63ddbfacf31

* Split ov_c_api_test.cpp

* Split OV 2.0 C API tests

* Move variables into SetUp()

* Merge legacy and 2.0 C API test

* Merge InferenceEngineCAPITests into openvino_capi_test

1. Put InferenceEngineCAPITests into openvino_capi_test
2. Resolve some format issues

Change-Id: I47bbba6bd70a871ee063becbd80eb57919fa9fb0

* Legacy API tests skip clang-format

Change-Id: Id54ecdba827cf98c99b92295c0a0772123098b63

* Fix clang-format issue

Change-Id: I7ed510d8178971fe04a895e812c261db99d8b9f2

* Restore InferenceEngineCAPITests

Change-Id: I4d641ffb1de9ce4d20ebecf35fc036fa7bd73e55

* Rename openvino_capi_test to ov_capi_test

Change-Id: I6b6fe0cdb89aab7210abb17f32dbfdcdce72ba25

* Unify the list size name and refine ov_core_version_t

Change-Id: I137fc6d990c7b07f597ee94fa3b98d07ae843cb6

* Align the header file path to be openvino/c/openvino.h

Change-Id: I1a4552e1d558098af704942fe45488b0d6d53a90

* Fix path issue

Change-Id: I84d425d25e3b08c1516cbcc842fb9cb75574bf17

* Move ov_color_format and remove the OpenCV dependency

Change-Id: I486145f9e92e8bbf2e937d3572334aa9f0e68841

* Resolve some memory-allocation error-handling issues and an issue reading a model with empty weights

Change-Id: Icd8e3b6de9741147993fa215a0c7cfd7debd5500

* Add GPU test cases

Change-Id: I13324ef019b5b1af79259ca932a36a0cec792c27

* Fix clang issue

Change-Id: I9bb4c47de301d142b5e2a77a39f667689ad9fe38

* Resolve CI test failure

Change-Id: Ia327d5edab19d8dd44ac369670f190d5c57aca79

* Redefine ov_shape and add a default ov_core_create (see the sketch below)

Change-Id: I3e47d607f8aad65cb99cdddacaecf7bf34b1361b
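Both changes are visible in the updated samples in this commit; condensed (the concrete dims are placeholders):

```c
/* Core creation no longer takes a plugins-XML path argument. */
ov_core_t* core = NULL;
ov_core_create(&core);

/* ov_shape_t is now explicitly initialized and deinitialized by the caller. */
ov_shape_t shape;
ov_shape_init(&shape, 4);
shape.dims[0] = 1;
shape.dims[1] = 224;
shape.dims[2] = 224;
shape.dims[3] = 3;
/* ... create a tensor from the shape ... */
ov_shape_deinit(&shape);
ov_core_free(core);
```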

* Remove some unnecessary node APIs

Removed the unnecessary node APIs (their list-based replacements are sketched after this list):
     ov_node_get_any_name(ov_output_const_node_t* node, char** tensor_name)
     ov_node_get_element_type(ov_output_const_node_t* node, ov_element_type_e* tensor_type)

Change-Id: I80a3243676800263a9e56afa3cfffce7b4bd2ae7
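The list-based accessors that replace them, as used verbatim in the updated NV12 sample in this commit (`model` is assumed to be an already-read ov_model_t*):

```c
ov_output_node_list_t input_nodes = {.size = 0, .output_nodes = NULL};
char* input_tensor_name = NULL;
if (ov_model_inputs(model, &input_nodes) == OK &&
    ov_node_list_get_any_name_by_index(&input_nodes, 0, &input_tensor_name) == OK) {
    printf("input tensor name: %s\n", input_tensor_name);
}
ov_free(input_tensor_name);
ov_output_node_list_free(&input_nodes);
```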

* Rename the reshape API

ov_model_reshape should be the common case, allowing any model to be reshaped regardless of its number of inputs (see the hypothetical sketch below).

Change-Id: I26bafeeb8a3dda7cd5164cda15fdb338db8668cb
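A hypothetical sketch of what such a generic call could look like; the actual ov_model_reshape signature is not shown in this diff, so everything below is an assumption:

```c
/* Hypothetical sketch only: the real ov_model_reshape signature is not
 * shown in this commit. The point of the rename is that a single call
 * covers models with any number of inputs, keyed by tensor name. */
const char* names[2] = {"data", "seq_len"};   /* hypothetical input names */
/* ov_partial_shape_t shapes[2];                 one new shape per input  */
/* ... fill the shapes ... */
/* ov_model_reshape(model, names, shapes, 2);    assumed call shape       */
```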

* Rename the ov_node API

Change-Id: I03114ecb6de5c46b6d02c909b6f6fb6c8bfd5cba

* Remove the subfolder from the source code

Change-Id: Ib033ae7712cc0460d6fc21a0f89818381ae503c0

* Apply absolute paths for all header files

Change-Id: I8024c897d424b407025e21460ed4b62829b853d2

* Fix CI issue: ov_capi_test failed to find libgna

Change-Id: I166e79a818498c6721fe956f43873f36d9ae1e07

* Resolve a build issue to align with PR12214

Change-Id: I9e6094db213b431ee1b46e0d64199131db33bb36

Co-authored-by: ruiqi <ruiqi.yang@intel.com>
River Li 2022-08-14 23:51:34 +08:00 committed by GitHub
parent a887306465
commit cbbac125f8
70 changed files with 7512 additions and 6521 deletions


@@ -400,7 +400,7 @@ jobs:
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
$(RUN_PREFIX) $(INSTALL_TEST_DIR)/OpenVinoCAPITests --gtest_filter=-*ov_core_read_model_from_memory* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OpenVinoCAPITests.xml
$(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_capi_test --gtest_filter=-*ov_core_read_model_from_memory* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_capi_test.xml
displayName: 'OV CAPITests'
continueOnError: false


@@ -364,7 +364,7 @@ jobs:
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
$(INSTALL_TEST_DIR)/OpenVinoCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OpenVinoCAPITests.xml
$(INSTALL_TEST_DIR)/ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_capi_test.xml
displayName: 'OV CAPITests'
continueOnError: false


@@ -218,7 +218,7 @@ jobs:
- script: |
export DATA_PATH=$(MODELS_PATH)
export MODELS_PATH=$(MODELS_PATH)
. $(SETUPVARS) && $(INSTALL_TEST_DIR)/OpenVinoCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OpenVinoCAPITests.xml
. $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_capi_test.xml
displayName: 'IE CAPITests'
continueOnError: false
enabled: false


@@ -295,7 +295,7 @@ jobs:
- script: |
set DATA_PATH=$(MODELS_PATH)
set MODELS_PATH=$(MODELS_PATH)
call $(SETUPVARS) && $(INSTALL_TEST_DIR)\OpenVinoCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-OpenVinoCAPITests.xml
call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_capi_test.xml
displayName: 'OV CAPITests'
continueOnError: false


@@ -2,294 +2,270 @@
// SPDX-License-Identifier: Apache-2.0
//
#include <c_api/ie_c_api.h>
#include <opencv_c_wrapper.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "openvino/c/openvino.h"
/**
* @brief Struct to store classification results
* @brief Struct to store infer results
*/
struct classify_res {
struct infer_result {
size_t class_id;
float probability;
};
/**
* @brief Sort result of image classification by probability
* @param struct with classification results to sort
* @param size of the struct
* @brief Sort result by probability
* @param struct with infer results to sort
* @param result_size of the struct
* @return none
*/
void classify_res_sort(struct classify_res* res, size_t n) {
size_t i, j;
for (i = 0; i < n; ++i) {
for (j = i + 1; j < n; ++j) {
if (res[i].probability < res[j].probability) {
struct classify_res temp = res[i];
res[i] = res[j];
res[j] = temp;
} else if (res[i].probability == res[j].probability && res[i].class_id > res[j].class_id) {
struct classify_res temp = res[i];
res[i] = res[j];
res[j] = temp;
}
}
int compare(const void* a, const void* b) {
const struct infer_result* sa = (const struct infer_result*)a;
const struct infer_result* sb = (const struct infer_result*)b;
if (sa->probability < sb->probability) {
return 1;
} else if ((sa->probability == sb->probability) && (sa->class_id > sb->class_id)) {
return 1;
} else if (sa->probability > sb->probability) {
return -1;
}
return 0;
}
void infer_result_sort(struct infer_result* results, size_t result_size) {
qsort(results, result_size, sizeof(struct infer_result), compare);
}
/**
* @brief Convert output blob to classify struct for processing results
* @param blob of output data
* @param size of the blob
* @return struct classify_res
* @brief Convert output tensor to infer result struct for processing results
* @param tensor of output tensor
* @param result_size of the infer result
* @return struct infer_result
*/
struct classify_res* output_blob_to_classify_res(ie_blob_t* blob, size_t* n) {
dimensions_t output_dim;
IEStatusCode status = ie_blob_get_dims(blob, &output_dim);
struct infer_result* tensor_to_infer_result(ov_tensor_t* tensor, size_t* result_size) {
ov_shape_t output_shape = {0};
ov_status_e status = ov_tensor_get_shape(tensor, &output_shape);
if (status != OK)
return NULL;
*n = output_dim.dims[1];
*result_size = output_shape.dims[1];
struct classify_res* cls = (struct classify_res*)malloc(sizeof(struct classify_res) * (*n));
if (!cls) {
struct infer_result* results = (struct infer_result*)malloc(sizeof(struct infer_result) * (*result_size));
if (!results)
return NULL;
}
ie_blob_buffer_t blob_cbuffer;
status = ie_blob_get_cbuffer(blob, &blob_cbuffer);
void* data = NULL;
status = ov_tensor_data(tensor, &data);
if (status != OK) {
free(cls);
free(results);
return NULL;
}
float* blob_data = (float*)(blob_cbuffer.cbuffer);
float* float_data = (float*)(data);
size_t i;
for (i = 0; i < *n; ++i) {
cls[i].class_id = i;
cls[i].probability = blob_data[i];
for (i = 0; i < *result_size; ++i) {
results[i].class_id = i;
results[i].probability = float_data[i];
}
return cls;
return results;
}
/**
* @brief Print results of classification
* @param struct of the classification results
* @param size of the struct of classification results
* @param string image path
* @brief Print results of infer
* @param results of the infer results
* @param result_size of the struct of classification results
* @param img_path image path
* @return none
*/
void print_classify_res(struct classify_res* cls, size_t n, const char* img_path) {
void print_infer_result(struct infer_result* results, size_t result_size, const char* img_path) {
printf("\nImage %s\n", img_path);
printf("\nclassid probability\n");
printf("------- -----------\n");
size_t i;
for (i = 0; i < n; ++i) {
printf("%zu %f\n", cls[i].class_id, cls[i].probability);
for (i = 0; i < result_size; ++i) {
printf("%zu %f\n", results[i].class_id, results[i].probability);
}
printf("\nThis sample is an API example,"
" for any performance measurements please use the dedicated benchmark_"
"app tool\n");
}
void print_model_input_output_info(ov_model_t* model) {
char* friendly_name = NULL;
ov_model_get_friendly_name(model, &friendly_name);
printf("[INFO] model name: %s \n", friendly_name);
ov_free(friendly_name);
}
#define CHECK_STATUS(return_status) \
if (return_status != OK) { \
fprintf(stderr, "[ERROR] return status %d, line %d\n", return_status, __LINE__); \
goto err; \
}
int main(int argc, char** argv) {
// ------------------------------ Parsing and validation of input args
// ---------------------------------
// -------- Check input parameters --------
if (argc != 4) {
printf("Usage : ./hello_classification <path_to_model> <path_to_image> "
printf("Usage : ./hello_classification_c <path_to_model> <path_to_image> "
"<device_name>\n");
return EXIT_FAILURE;
}
ov_core_t* core = NULL;
ov_model_t* model = NULL;
ov_tensor_t* tensor = NULL;
ov_preprocess_prepostprocessor_t* preprocess = NULL;
ov_preprocess_inputinfo_t* input_info = NULL;
ov_model_t* new_model = NULL;
ov_preprocess_inputtensorinfo_t* input_tensor_info = NULL;
ov_preprocess_preprocesssteps_t* input_process = NULL;
ov_preprocess_inputmodelinfo_t* p_input_model = NULL;
ov_preprocess_outputinfo_t* output_info = NULL;
ov_preprocess_outputtensorinfo_t* output_tensor_info = NULL;
ov_compiled_model_t* compiled_model = NULL;
ov_infer_request_t* infer_request = NULL;
ov_tensor_t* output_tensor = NULL;
struct infer_result* results = NULL;
ov_layout_t* input_layout = NULL;
ov_layout_t* model_layout = NULL;
ov_shape_t input_shape;
// -------- Get OpenVINO runtime version --------
ov_version_t version;
CHECK_STATUS(ov_get_openvino_version(&version));
printf("---- OpenVINO INFO----\n");
printf("Description : %s \n", version.description);
printf("Build number: %s \n", version.buildNumber);
ov_version_free(&version);
// -------- Parsing and validation of input arguments --------
const char* input_model = argv[1];
const char* input_image_path = argv[2];
const char* device_name = argv[3];
ie_core_t* core = NULL;
ie_network_t* network = NULL;
ie_executable_network_t* exe_network = NULL;
ie_infer_request_t* infer_request = NULL;
char *input_name = NULL, *output_name = NULL;
ie_blob_t *imgBlob = NULL, *output_blob = NULL;
size_t network_input_size;
size_t network_output_size;
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 1. Initialize inference engine core
// -------------------------------------
// -------- Step 1. Initialize OpenVINO Runtime Core --------
CHECK_STATUS(ov_core_create(&core));
IEStatusCode status = ie_core_create("", &core);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_create status %d, line %d\n", status, __LINE__);
// -------- Step 2. Read a model --------
printf("[INFO] Loading model files: %s\n", input_model);
CHECK_STATUS(ov_core_read_model(core, input_model, NULL, &model));
print_model_input_output_info(model);
ov_output_node_list_t output_nodes;
CHECK_STATUS(ov_model_outputs(model, &output_nodes));
if (output_nodes.size != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 output only %d\n", __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// Step 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin
// files) or ONNX (.onnx file) format
status = ie_core_read_network(core, input_model, NULL, &network);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_read_network status %d, line %d\n", status, __LINE__);
goto err;
}
// check the network topology
status = ie_network_get_inputs_number(network, &network_input_size);
if (status != OK || network_input_size != 1) {
printf("Sample supports topologies with 1 input only\n");
ov_output_node_list_t input_nodes;
CHECK_STATUS(ov_model_inputs(model, &input_nodes));
if (input_nodes.size != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 input only %d\n", __LINE__);
goto err;
}
status = ie_network_get_outputs_number(network, &network_output_size);
if (status != OK || network_output_size != 1) {
fprintf(stderr, "Sample supports topologies with 1 output only\n");
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 3. Configure input & output
// ---------------------------------------------
// --------------------------- Prepare input blobs
// -----------------------------------------------------
status = ie_network_get_input_name(network, 0, &input_name);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_get_input_name status %d, line %d\n", status, __LINE__);
goto err;
}
/* Mark input as resizable by setting of a resize algorithm.
* In this case we will be able to set an input blob of any shape to an infer
* request. Resize and layout conversions are executed automatically during
* inference */
status |= ie_network_set_input_resize_algorithm(network, input_name, RESIZE_BILINEAR);
status |= ie_network_set_input_layout(network, input_name, NHWC);
status |= ie_network_set_input_precision(network, input_name, U8);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_set_input_* status %d, line %d\n", status, __LINE__);
goto err;
}
// --------------------------- Prepare output blobs
// ----------------------------------------------------
status |= ie_network_get_output_name(network, 0, &output_name);
status |= ie_network_set_output_precision(network, output_name, FP32);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_get_output_* status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 4. Loading model to the device
// ------------------------------------------
ie_config_t config = {NULL, NULL, NULL};
status = ie_core_load_network(core, network, device_name, &config, &exe_network);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_load_network status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 5. Create infer request
// -------------------------------------------------
status = ie_exec_network_create_infer_request(exe_network, &infer_request);
if (status != OK) {
fprintf(stderr, "ERROR ie_exec_network_create_infer_request status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 6. Prepare input
// --------------------------------------------------------
/* Read input image to a blob and set it to an infer request without resize
* and layout conversions. */
// -------- Step 3. Set up input
c_mat_t img;
image_read(input_image_path, &img);
ov_element_type_e input_type = U8;
ov_shape_init(&input_shape, 4);
input_shape.dims[0] = 1;
input_shape.dims[1] = (size_t)img.mat_height;
input_shape.dims[2] = (size_t)img.mat_width;
input_shape.dims[3] = 3;
CHECK_STATUS(ov_tensor_create_from_host_ptr(input_type, input_shape, img.mat_data, &tensor));
dimensions_t dimens = {4, {1, (size_t)img.mat_channels, (size_t)img.mat_height, (size_t)img.mat_width}};
tensor_desc_t tensorDesc = {NHWC, dimens, U8};
size_t size = img.mat_data_size;
// just wrap IplImage data to ie_blob_t pointer without allocating of new
// memory
status = ie_blob_make_memory_from_preallocated(&tensorDesc, img.mat_data, size, &imgBlob);
if (status != OK) {
fprintf(stderr, "ERROR ie_blob_make_memory_from_preallocated status %d, line %d\n", status, __LINE__);
image_free(&img);
goto err;
}
// infer_request accepts input blob of any size
// -------- Step 4. Configure preprocessing --------
CHECK_STATUS(ov_preprocess_prepostprocessor_create(model, &preprocess));
CHECK_STATUS(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
status = ie_infer_request_set_blob(infer_request, input_name, imgBlob);
if (status != OK) {
fprintf(stderr, "ERROR ie_infer_request_set_blob status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
CHECK_STATUS(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
CHECK_STATUS(ov_preprocess_inputtensorinfo_set_from(input_tensor_info, tensor));
// --------------------------- Step 7. Do inference
// --------------------------------------------------------
/* Running the request synchronously */
status = ie_infer_request_infer(infer_request);
if (status != OK) {
fprintf(stderr, "ERROR ie_infer_request_infer status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
const char* input_layout_desc = "NHWC";
CHECK_STATUS(ov_layout_create(&input_layout, input_layout_desc));
CHECK_STATUS(ov_preprocess_inputtensorinfo_set_layout(input_tensor_info, input_layout));
// --------------------------- Step 8. Process output
// ------------------------------------------------------
status = ie_infer_request_get_blob(infer_request, output_name, &output_blob);
if (status != OK) {
fprintf(stderr, "ERROR ie_infer_request_get_blob status %d, line %d\n", status, __LINE__);
image_free(&img);
goto err;
}
size_t class_num;
struct classify_res* cls = output_blob_to_classify_res(output_blob, &class_num);
CHECK_STATUS(ov_preprocess_inputinfo_preprocess(input_info, &input_process));
CHECK_STATUS(ov_preprocess_preprocesssteps_resize(input_process, RESIZE_LINEAR));
CHECK_STATUS(ov_preprocess_inputinfo_model(input_info, &p_input_model));
classify_res_sort(cls, class_num);
const char* model_layout_desc = "NCHW";
CHECK_STATUS(ov_layout_create(&model_layout, model_layout_desc));
CHECK_STATUS(ov_preprocess_inputmodelinfo_set_layout(p_input_model, model_layout));
CHECK_STATUS(ov_preprocess_prepostprocessor_output_by_index(preprocess, 0, &output_info));
CHECK_STATUS(ov_preprocess_outputinfo_tensor(output_info, &output_tensor_info));
CHECK_STATUS(ov_preprocess_output_set_element_type(output_tensor_info, F32));
CHECK_STATUS(ov_preprocess_prepostprocessor_build(preprocess, &new_model));
// -------- Step 5. Loading a model to the device --------
ov_property_t* property = NULL;
CHECK_STATUS(ov_core_compile_model(core, new_model, device_name, &compiled_model, property));
// -------- Step 6. Create an infer request --------
CHECK_STATUS(ov_compiled_model_create_infer_request(compiled_model, &infer_request));
// -------- Step 7. Prepare input --------
CHECK_STATUS(ov_infer_request_set_input_tensor(infer_request, 0, tensor));
// -------- Step 8. Do inference synchronously --------
CHECK_STATUS(ov_infer_request_infer(infer_request));
// -------- Step 9. Process output
CHECK_STATUS(ov_infer_request_get_output_tensor(infer_request, 0, &output_tensor));
// Print classification results
size_t results_num;
results = tensor_to_infer_result(output_tensor, &results_num);
infer_result_sort(results, results_num);
size_t top = 10;
if (top > class_num) {
top = class_num;
if (top > results_num) {
top = results_num;
}
printf("\nTop %zu results:\n", top);
print_classify_res(cls, top, input_image_path);
print_infer_result(results, top, input_image_path);
// -----------------------------------------------------------------------------------------------------
free(cls);
ie_blob_free(&output_blob);
ie_blob_free(&imgBlob);
image_free(&img);
ie_infer_request_free(&infer_request);
ie_exec_network_free(&exe_network);
ie_network_name_free(&input_name);
ie_network_name_free(&output_name);
ie_network_free(&network);
ie_core_free(&core);
return EXIT_SUCCESS;
// -------- free allocated resources --------
err:
if (core)
ie_core_free(&core);
if (network)
ie_network_free(&network);
if (input_name)
ie_network_name_free(&input_name);
if (output_name)
ie_network_name_free(&output_name);
if (exe_network)
ie_exec_network_free(&exe_network);
free(results);
image_free(&img);
ov_shape_deinit(&input_shape);
ov_output_node_list_free(&output_nodes);
ov_output_node_list_free(&input_nodes);
if (output_tensor)
ov_tensor_free(output_tensor);
if (infer_request)
ie_infer_request_free(&infer_request);
if (imgBlob)
ie_blob_free(&imgBlob);
if (output_blob)
ie_blob_free(&output_blob);
return EXIT_FAILURE;
ov_infer_request_free(infer_request);
if (compiled_model)
ov_compiled_model_free(compiled_model);
if (input_layout)
ov_layout_free(input_layout);
if (model_layout)
ov_layout_free(model_layout);
if (output_tensor_info)
ov_preprocess_outputtensorinfo_free(output_tensor_info);
if (output_info)
ov_preprocess_outputinfo_free(output_info);
if (p_input_model)
ov_preprocess_inputmodelinfo_free(p_input_model);
if (input_process)
ov_preprocess_preprocesssteps_free(input_process);
if (input_tensor_info)
ov_preprocess_inputtensorinfo_free(input_tensor_info);
if (input_info)
ov_preprocess_inputinfo_free(input_info);
if (preprocess)
ov_preprocess_prepostprocessor_free(preprocess);
if (new_model)
ov_model_free(new_model);
if (tensor)
ov_tensor_free(tensor);
if (model)
ov_model_free(model);
if (core)
ov_core_free(core);
return EXIT_SUCCESS;
}


@@ -1,7 +0,0 @@
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
ie_add_sample(NAME hello_classification_ov_c
SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/main.c"
DEPENDENCIES opencv_c_wrapper)


@@ -1,70 +0,0 @@
# Hello Classification C Sample for OpenVINO 2.0 C-API
This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and input auto-resize feature.
## How It Works
Upon start-up, the sample application reads command-line parameters and loads the specified network and an image into the OpenVINO plugin.
Then, the sample creates a synchronous inference request object. When inference is done, the application outputs data to the standard output stream.
## Building
To build the sample, please use instructions available at [Build the Sample Applications](../../../docs/OV_Runtime_UG/Samples_Overview.md) section in Inference Engine Samples guide.
## Running
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, OpenVINO™ Toolkit Samples and Demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Embedding Preprocessing Computation](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (\*.onnx) that do not require preprocessing.
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```
python <path_to_omz_tools>/downloader.py --name alexnet
```
2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
```
python <path_to_omz_tools>/converter.py --name alexnet
```
3. Perform inference of `car.bmp` using `alexnet` model on a `GPU`, for example:
```
<path_to_sample>/hello_classification_c <path_to_model>/alexnet.xml <path_to_image>/car.bmp GPU
```
## Sample Output
The application outputs top-10 inference results.
```
Top 10 results:
Image /opt/intel/openvino/samples/scripts/car.png
classid probability
------- -----------
656 0.666479
654 0.112940
581 0.068487
874 0.033385
436 0.026132
817 0.016731
675 0.010980
511 0.010592
569 0.008178
717 0.006336
This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```


@@ -1,256 +0,0 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <opencv_c_wrapper.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "c_api/ov_c_api.h"
/**
* @brief Struct to store infer results
*/
struct infer_result {
size_t class_id;
float probability;
};
/**
* @brief Sort result by probability
* @param struct with infer results to sort
* @param result_size of the struct
* @return none
*/
int compare(const void* a, const void* b) {
const struct infer_result* sa = (const struct infer_result*)a;
const struct infer_result* sb = (const struct infer_result*)b;
if (sa->probability < sb->probability) {
return 1;
} else if ((sa->probability == sb->probability) && (sa->class_id > sb->class_id)) {
return 1;
} else if (sa->probability > sb->probability) {
return -1;
}
return 0;
}
void infer_result_sort(struct infer_result* results, size_t result_size) {
qsort(results, result_size, sizeof(struct infer_result), compare);
}
/**
* @brief Convert output tensor to infer result struct for processing results
* @param tensor of output tensor
* @param result_size of the infer result
* @return struct infer_result
*/
struct infer_result* tensor_to_infer_result(ov_tensor_t* tensor, size_t* result_size) {
ov_shape_t output_shape = {0};
ov_status_e status = ov_tensor_get_shape(tensor, &output_shape);
if (status != OK)
return NULL;
*result_size = output_shape.dims[1];
struct infer_result* results = (struct infer_result*)malloc(sizeof(struct infer_result) * (*result_size));
if (!results)
return NULL;
void* data = NULL;
status = ov_tensor_get_data(tensor, &data);
if (status != OK) {
free(results);
return NULL;
}
float* float_data = (float*)(data);
size_t i;
for (i = 0; i < *result_size; ++i) {
results[i].class_id = i;
results[i].probability = float_data[i];
}
return results;
}
/**
* @brief Print results of infer
* @param results of the infer results
* @param result_size of the struct of classification results
* @param img_path image path
* @return none
*/
void print_infer_result(struct infer_result* results, size_t result_size, const char* img_path) {
printf("\nImage %s\n", img_path);
printf("\nclassid probability\n");
printf("------- -----------\n");
size_t i;
for (i = 0; i < result_size; ++i) {
printf("%zu %f\n", results[i].class_id, results[i].probability);
}
}
void print_model_input_output_info(ov_model_t* model) {
char* friendly_name = NULL;
ov_model_get_friendly_name(model, &friendly_name);
printf("[INFO] model name: %s \n", friendly_name);
ov_free(friendly_name);
}
#define CHECK_STATUS(return_status) \
if (return_status != OK) { \
fprintf(stderr, "[ERROR] return status %d, line %d\n", return_status, __LINE__); \
goto err; \
}
int main(int argc, char** argv) {
// -------- Check input parameters --------
if (argc != 4) {
printf("Usage : ./hello_classification_ov_c <path_to_model> <path_to_image> "
"<device_name>\n");
return EXIT_FAILURE;
}
ov_core_t* core = NULL;
ov_model_t* model = NULL;
ov_tensor_t* tensor = NULL;
ov_preprocess_t* preprocess = NULL;
ov_preprocess_input_info_t* input_info = NULL;
ov_model_t* new_model = NULL;
ov_preprocess_input_tensor_info_t* input_tensor_info = NULL;
ov_preprocess_input_process_steps_t* input_process = NULL;
ov_preprocess_input_model_info_t* p_input_model = NULL;
ov_preprocess_output_info_t* output_info = NULL;
ov_preprocess_output_tensor_info_t* output_tensor_info = NULL;
ov_compiled_model_t* compiled_model = NULL;
ov_infer_request_t* infer_request = NULL;
ov_tensor_t* output_tensor = NULL;
struct infer_result* results = NULL;
// -------- Get OpenVINO runtime version --------
ov_version_t version;
CHECK_STATUS(ov_get_version(&version));
printf("---- OpenVINO INFO----\n");
printf("Description : %s \n", version.description);
printf("Build number: %s \n", version.buildNumber);
ov_version_free(&version);
// -------- Parsing and validation of input arguments --------
const char* input_model = argv[1];
const char* input_image_path = argv[2];
const char* device_name = argv[3];
// -------- Step 1. Initialize OpenVINO Runtime Core --------
CHECK_STATUS(ov_core_create("", &core));
// -------- Step 2. Read a model --------
printf("[INFO] Loading model files: %s\n", input_model);
CHECK_STATUS(ov_core_read_model(core, input_model, NULL, &model));
print_model_input_output_info(model);
ov_output_node_list_t output_nodes;
CHECK_STATUS(ov_model_get_outputs(model, &output_nodes));
if (output_nodes.num != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 output only %d\n", __LINE__);
goto err;
}
ov_output_node_list_t input_nodes;
CHECK_STATUS(ov_model_get_inputs(model, &input_nodes));
if (input_nodes.num != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 input only %d\n", __LINE__);
goto err;
}
// -------- Step 3. Set up input
c_mat_t img;
image_read(input_image_path, &img);
ov_element_type_e input_type = U8;
ov_shape_t input_shape = {4, {1, (size_t)img.mat_height, (size_t)img.mat_width, 3}};
CHECK_STATUS(ov_tensor_create_from_host_ptr(input_type, input_shape, img.mat_data, &tensor));
// -------- Step 4. Configure preprocessing --------
CHECK_STATUS(ov_preprocess_create(model, &preprocess));
CHECK_STATUS(ov_preprocess_get_input_info_by_index(preprocess, 0, &input_info));
CHECK_STATUS(ov_preprocess_input_get_tensor_info(input_info, &input_tensor_info));
CHECK_STATUS(ov_preprocess_input_tensor_info_set_tensor(input_tensor_info, tensor));
ov_layout_t tensor_layout = {'N', 'H', 'W', 'C'};
CHECK_STATUS(ov_preprocess_input_tensor_info_set_layout(input_tensor_info, tensor_layout));
CHECK_STATUS(ov_preprocess_input_get_preprocess_steps(input_info, &input_process));
CHECK_STATUS(ov_preprocess_input_resize(input_process, RESIZE_LINEAR));
CHECK_STATUS(ov_preprocess_input_get_model_info(input_info, &p_input_model));
ov_layout_t model_layout = {'N', 'C', 'H', 'W'};
CHECK_STATUS(ov_preprocess_input_model_set_layout(p_input_model, model_layout));
CHECK_STATUS(ov_preprocess_get_output_info_by_index(preprocess, 0, &output_info));
CHECK_STATUS(ov_preprocess_output_get_tensor_info(output_info, &output_tensor_info));
CHECK_STATUS(ov_preprocess_output_set_element_type(output_tensor_info, F32));
CHECK_STATUS(ov_preprocess_build(preprocess, &new_model));
// -------- Step 5. Loading a model to the device --------
ov_property_t property;
CHECK_STATUS(ov_core_compile_model(core, new_model, device_name, &compiled_model, &property));
// -------- Step 6. Create an infer request --------
CHECK_STATUS(ov_compiled_model_create_infer_request(compiled_model, &infer_request));
// -------- Step 7. Prepare input --------
CHECK_STATUS(ov_infer_request_set_input_tensor(infer_request, 0, tensor));
// -------- Step 8. Do inference synchronously --------
CHECK_STATUS(ov_infer_request_infer(infer_request));
// -------- Step 9. Process output
CHECK_STATUS(ov_infer_request_get_out_tensor(infer_request, 0, &output_tensor));
// Print classification results
size_t results_num;
results = tensor_to_infer_result(output_tensor, &results_num);
infer_result_sort(results, results_num);
size_t top = 10;
if (top > results_num) {
top = results_num;
}
printf("\nTop %zu results:\n", top);
print_infer_result(results, top, input_image_path);
// -------- free allocated resources --------
err:
free(results);
image_free(&img);
ov_output_node_list_free(&output_nodes);
ov_output_node_list_free(&input_nodes);
if (output_tensor)
ov_tensor_free(output_tensor);
if (infer_request)
ov_infer_request_free(infer_request);
if (compiled_model)
ov_compiled_model_free(compiled_model);
if (output_tensor_info)
ov_preprocess_output_tensor_info_free(output_tensor_info);
if (output_info)
ov_preprocess_output_info_free(output_info);
if (p_input_model)
ov_preprocess_input_model_info_free(p_input_model);
if (input_process)
ov_preprocess_input_process_steps_free(input_process);
if (input_tensor_info)
ov_preprocess_input_tensor_info_free(input_tensor_info);
if (input_info)
ov_preprocess_input_info_free(input_info);
if (preprocess)
ov_preprocess_free(preprocess);
if (new_model)
ov_model_free(new_model);
if (tensor)
ov_tensor_free(tensor);
if (model)
ov_model_free(model);
if (core)
ov_core_free(core);
return EXIT_SUCCESS;
}


@@ -2,120 +2,95 @@
// SPDX-License-Identifier: Apache-2.0
//
#include <c_api/ie_c_api.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "openvino/c/openvino.h"
/**
* @brief Struct to store classification results
* @brief Struct to store infer results
*/
struct classify_res {
struct infer_result {
size_t class_id;
float probability;
};
/**
* @brief Sort result of image classification by probability
* @param struct with classification results to sort
* @param size of the struct
* @brief Sort result by probability
* @param struct with infer results to sort
* @param result_size of the struct
* @return none
*/
void classify_res_sort(struct classify_res* res, size_t n) {
size_t i, j;
for (i = 0; i < n; ++i) {
for (j = i + 1; j < n; ++j) {
if (res[i].probability < res[j].probability) {
struct classify_res temp = res[i];
res[i] = res[j];
res[j] = temp;
} else if (res[i].probability == res[j].probability && res[i].class_id > res[j].class_id) {
struct classify_res temp = res[i];
res[i] = res[j];
res[j] = temp;
}
}
int compare(const void* a, const void* b) {
const struct infer_result* sa = (const struct infer_result*)a;
const struct infer_result* sb = (const struct infer_result*)b;
if (sa->probability < sb->probability) {
return 1;
} else if ((sa->probability == sb->probability) && (sa->class_id > sb->class_id)) {
return 1;
} else if (sa->probability > sb->probability) {
return -1;
}
return 0;
}
void infer_result_sort(struct infer_result* results, size_t result_size) {
qsort(results, result_size, sizeof(struct infer_result), compare);
}
/**
* @brief Convert output blob to classify struct for processing results
* @param blob of output data
* @param size of the blob
* @return struct classify_res
* @brief Convert output tensor to infer result struct for processing results
* @param tensor of output tensor
* @param result_size of the infer result
* @return struct infer_result
*/
struct classify_res* output_blob_to_classify_res(ie_blob_t* blob, size_t* n) {
dimensions_t output_dim;
IEStatusCode status = ie_blob_get_dims(blob, &output_dim);
struct infer_result* tensor_to_infer_result(ov_tensor_t* tensor, size_t* result_size) {
ov_status_e status = ov_tensor_get_size(tensor, result_size);
if (status != OK)
return NULL;
*n = output_dim.dims[1];
struct classify_res* cls = (struct classify_res*)malloc(sizeof(struct classify_res) * (*n));
if (!cls) {
struct infer_result* results = (struct infer_result*)malloc(sizeof(struct infer_result) * (*result_size));
if (!results)
return NULL;
}
ie_blob_buffer_t blob_cbuffer;
status = ie_blob_get_cbuffer(blob, &blob_cbuffer);
void* data = NULL;
status = ov_tensor_data(tensor, &data);
if (status != OK) {
free(cls);
free(results);
return NULL;
}
float* blob_data = (float*)(blob_cbuffer.cbuffer);
size_t i;
for (i = 0; i < *n; ++i) {
cls[i].class_id = i;
cls[i].probability = blob_data[i];
float* float_data = (float*)(data);
for (size_t i = 0; i < *result_size; ++i) {
results[i].class_id = i;
results[i].probability = float_data[i];
}
return cls;
return results;
}
/**
* @brief Print results of classification
* @param struct of the classification results
* @param size of the struct of classification results
* @param string image path
* @brief Print results of infer
* @param results of the infer results
* @param result_size of the struct of classification results
* @param img_path image path
* @return none
*/
void print_classify_res(struct classify_res* cls, size_t n, const char* img_path) {
void print_infer_result(struct infer_result* results, size_t result_size, const char* img_path) {
printf("\nImage %s\n", img_path);
printf("\nclassid probability\n");
printf("------- -----------\n");
size_t i;
for (i = 0; i < n; ++i) {
printf("%zu %f\n", cls[i].class_id, cls[i].probability);
for (size_t i = 0; i < result_size; ++i) {
printf("%zu %f\n", results[i].class_id, results[i].probability);
}
printf("\nThis sample is an API example,"
" for any performance measurements please use the dedicated benchmark_"
"app tool\n");
}
/**
* @brief Read image data
* @param string image path
* @param pointer to store image data
* @param size bytes of image
* @return total number of elements successfully read, in case of error it
* doesn't equal to size param
*/
size_t read_image_from_file(const char* img_path, unsigned char* img_data, size_t size) {
FILE* fp = fopen(img_path, "rb");
size_t read_size = 0;
if (fp) {
fseek(fp, 0, SEEK_END);
if (ftell(fp) >= size) {
fseek(fp, 0, SEEK_SET);
read_size = fread(img_data, 1, size, fp);
}
fclose(fp);
}
return read_size;
void print_model_input_output_info(ov_model_t* model) {
char* friendly_name = NULL;
ov_model_get_friendly_name(model, &friendly_name);
printf("[INFO] model name: %s \n", friendly_name);
ov_free(friendly_name);
}
/**
@@ -125,6 +100,7 @@ size_t read_image_from_file(const char* img_path, unsigned char* img_data, size_
* @param pointer to image height
* @return bool status True(success) or False(fail)
*/
bool is_supported_image_size(const char* size_str, size_t* width, size_t* height) {
const char* _size = size_str;
size_t _width = 0, _height = 0;
@@ -168,12 +144,33 @@ err:
return false;
}
size_t read_image_from_file(const char* img_path, unsigned char* img_data, size_t size) {
FILE* fp = fopen(img_path, "rb");
size_t read_size = 0;
if (fp) {
fseek(fp, 0, SEEK_END);
if (ftell(fp) >= size) {
fseek(fp, 0, SEEK_SET);
read_size = fread(img_data, 1, size, fp);
}
fclose(fp);
}
return read_size;
}
#define CHECK_STATUS(return_status) \
if (return_status != OK) { \
fprintf(stderr, "[ERROR] return status %d, line %d\n", return_status, __LINE__); \
goto err; \
}
int main(int argc, char** argv) {
// ------------------------------ Parsing and validation of input args
// ---------------------------------
// -------- Check input parameters --------
if (argc != 5) {
printf("Usage : ./hello_classification <path_to_model> <path_to_image> "
"<image_size> <device_name>\n");
printf("Usage : ./hello_nv12_input_classification_c <path_to_model> <path_to_image> "
"<WIDTHxHEIGHT> <device_name>\n");
return EXIT_FAILURE;
}
@@ -182,203 +179,185 @@ int main(int argc, char** argv) {
fprintf(stderr, "ERROR is_supported_image_size, line %d\n", __LINE__);
return EXIT_FAILURE;
}
unsigned char* img_data = NULL;
ov_core_t* core = NULL;
ov_model_t* model = NULL;
ov_tensor_t* tensor = NULL;
ov_preprocess_prepostprocessor_t* preprocess = NULL;
ov_preprocess_inputinfo_t* input_info = NULL;
ov_model_t* new_model = NULL;
ov_preprocess_inputtensorinfo_t* input_tensor_info = NULL;
ov_preprocess_preprocesssteps_t* input_process = NULL;
ov_preprocess_inputmodelinfo_t* p_input_model = NULL;
ov_compiled_model_t* compiled_model = NULL;
ov_infer_request_t* infer_request = NULL;
ov_tensor_t* output_tensor = NULL;
struct infer_result* results = NULL;
char* input_tensor_name = NULL;
char* output_tensor_name = NULL;
ov_output_node_list_t input_nodes = {.size = 0, .output_nodes = NULL};
ov_output_node_list_t output_nodes = {.size = 0, .output_nodes = NULL};
ov_layout_t* model_layout = NULL;
ov_shape_t input_shape;
// -------- Get OpenVINO runtime version --------
ov_version_t version = {.description = NULL, .buildNumber = NULL};
CHECK_STATUS(ov_get_openvino_version(&version));
printf("---- OpenVINO INFO----\n");
printf("description : %s \n", version.description);
printf("build number: %s \n", version.buildNumber);
ov_version_free(&version);
// -------- Parsing and validation of input arguments --------
const char* input_model = argv[1];
const char* input_image_path = argv[2];
const char* device_name = argv[4];
unsigned char* img_data = NULL;
ie_core_t* core = NULL;
ie_network_t* network = NULL;
ie_executable_network_t* exe_network = NULL;
ie_infer_request_t* infer_request = NULL;
char *input_name = NULL, *output_name = NULL;
ie_blob_t *y_blob = NULL, *uv_blob = NULL, *nv12_blob = NULL, *output_blob = NULL;
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 1. Initialize inference engine core
// -------------------------------------
IEStatusCode status = ie_core_create("", &core);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_create status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// -------- Step 1. Initialize OpenVINO Runtime Core --------
CHECK_STATUS(ov_core_create(&core));
// Step 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin
// files) or ONNX (.onnx file) format
status = ie_core_read_network(core, input_model, NULL, &network);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_read_network status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// -------- Step 2. Read a model --------
printf("[INFO] Loading model files: %s\n", input_model);
CHECK_STATUS(ov_core_read_model(core, input_model, NULL, &model));
print_model_input_output_info(model);
// --------------------------- Step 3. Configure input & output
// ---------------------------------------------
// --------------------------- Prepare input blobs
// -----------------------------------------------------
status = ie_network_get_input_name(network, 0, &input_name);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_get_input_name status %d, line %d\n", status, __LINE__);
CHECK_STATUS(ov_model_outputs(model, &output_nodes));
if (output_nodes.size != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 output only %d\n", __LINE__);
goto err;
}
/* Mark input as resizable by setting of a resize algorithm.
* In this case we will be able to set an input blob of any shape to an infer
* request. Resize and layout conversions are executed automatically during
* inference */
status |= ie_network_set_input_resize_algorithm(network, input_name, RESIZE_BILINEAR);
status |= ie_network_set_input_layout(network, input_name, NCHW);
status |= ie_network_set_input_precision(network, input_name, U8);
// set input color format to NV12 to enable automatic input color format
// pre-processing
status |= ie_network_set_color_format(network, input_name, NV12);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_set_input_* status %d, line %d\n", status, __LINE__);
CHECK_STATUS(ov_model_inputs(model, &input_nodes));
if (input_nodes.size != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 input only %d\n", __LINE__);
goto err;
}
// --------------------------- Prepare output blobs
// ----------------------------------------------------
status |= ie_network_get_output_name(network, 0, &output_name);
status |= ie_network_set_output_precision(network, output_name, FP32);
if (status != OK) {
fprintf(stderr, "ERROR ie_network_set_output_* status %d, line %d\n", status, __LINE__);
goto err;
}
CHECK_STATUS(ov_node_list_get_any_name_by_index(&input_nodes, 0, &input_tensor_name));
CHECK_STATUS(ov_node_list_get_any_name_by_index(&output_nodes, 0, &output_tensor_name));
// -----------------------------------------------------------------------------------------------------
// -------- Step 3. Configure preprocessing --------
CHECK_STATUS(ov_preprocess_prepostprocessor_create(model, &preprocess));
// --------------------------- Step 4. Loading model to the device
// ------------------------------------------
ie_config_t config = {NULL, NULL, NULL};
status = ie_core_load_network(core, network, device_name, &config, &exe_network);
if (status != OK) {
fprintf(stderr, "ERROR ie_core_load_network status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// 1) Select input with 'input_tensor_name' tensor name
CHECK_STATUS(ov_preprocess_prepostprocessor_input_by_name(preprocess, input_tensor_name, &input_info));
// --------------------------- Step 5. Create infer request
// -------------------------------------------------
status = ie_exec_network_create_infer_request(exe_network, &infer_request);
if (status != OK) {
fprintf(stderr, "ERROR ie_exec_network_create_infer_request status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// 2) Set input type
// - as 'u8' precision
// - set color format to NV12 (single plane)
// - static spatial dimensions for resize preprocessing operation
CHECK_STATUS(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
CHECK_STATUS(ov_preprocess_inputtensorinfo_set_element_type(input_tensor_info, U8));
CHECK_STATUS(ov_preprocess_inputtensorinfo_set_color_format(input_tensor_info, NV12_SINGLE_PLANE));
CHECK_STATUS(ov_preprocess_inputtensorinfo_set_spatial_static_shape(input_tensor_info, input_height, input_width));
// --------------------------- Step 6. Prepare input
// -------------------------------------------------------- read image with
// size converted to NV12 data size: height(NV12) = 3 / 2 * logical height
// 3) Pre-processing steps:
// a) Convert to 'float'. This is to have color conversion more accurate
// b) Convert to BGR: Assumes that model accepts images in BGR format. For RGB, change it manually
// c) Resize image from tensor's dimensions to model ones
CHECK_STATUS(ov_preprocess_inputinfo_preprocess(input_info, &input_process));
CHECK_STATUS(ov_preprocess_preprocesssteps_convert_element_type(input_process, F32));
CHECK_STATUS(ov_preprocess_preprocesssteps_convert_color(input_process, BGR));
CHECK_STATUS(ov_preprocess_preprocesssteps_resize(input_process, RESIZE_LINEAR));
// 4) Set model data layout (Assuming model accepts images in NCHW layout)
CHECK_STATUS(ov_preprocess_inputinfo_model(input_info, &p_input_model));
const char* model_layout_desc = "NCHW";
CHECK_STATUS(ov_layout_create(&model_layout, model_layout_desc));
CHECK_STATUS(ov_preprocess_inputmodelinfo_set_layout(p_input_model, model_layout));
// 5) Apply preprocessing to an input with 'input_tensor_name' name of loaded model
CHECK_STATUS(ov_preprocess_prepostprocessor_build(preprocess, &new_model));
// -------- Step 4. Loading a model to the device --------
CHECK_STATUS(ov_core_compile_model(core, new_model, device_name, &compiled_model, NULL));
// -------- Step 5. Create an infer request --------
CHECK_STATUS(ov_compiled_model_create_infer_request(compiled_model, &infer_request));
// -------- Step 6. Prepare input data --------
img_size = input_width * (input_height * 3 / 2);
if (!img_size) {
fprintf(stderr, "[ERROR] Invalid Image size, line %d\n", __LINE__);
goto err;
}
img_data = (unsigned char*)calloc(img_size, sizeof(unsigned char));
if (NULL == img_data) {
fprintf(stderr, "ERROR calloc returned NULL, line %d\n", __LINE__);
if (!img_data) {
fprintf(stderr, "[ERROR] calloc returned NULL, line %d\n", __LINE__);
goto err;
}
if (img_size != read_image_from_file(input_image_path, img_data, img_size)) {
fprintf(stderr, "ERROR ie_exec_network_create_infer_request `img_size` missmatch, line %d\n", __LINE__);
fprintf(stderr, "[ERROR] Image dimensions not match with NV12 file size, line %d\n", __LINE__);
goto err;
}
ov_element_type_e input_type = U8;
size_t batch = 1;
ov_shape_init(&input_shape, 4);
input_shape.dims[0] = batch;
input_shape.dims[1] = input_height * 3 / 2;
input_shape.dims[2] = input_width;
input_shape.dims[3] = 1;
CHECK_STATUS(ov_tensor_create_from_host_ptr(input_type, input_shape, img_data, &tensor));
// --------------------------- Create a blob to hold the NV12 input data
// ------------------------------- Create tensor descriptors for Y and UV
// blobs
dimensions_t y_dimens = {4, {1, 1, input_height, input_width}};
dimensions_t uv_dimens = {4, {1, 2, input_height / 2, input_width / 2}};
tensor_desc_t y_tensor = {NHWC, y_dimens, U8};
tensor_desc_t uv_tensor = {NHWC, uv_dimens, U8};
size_t y_plane_size = input_height * input_width;
size_t uv_plane_size = input_width * (input_height / 2);
// -------- Step 6. Set input tensor --------
// Set the input tensor by tensor name to the InferRequest
CHECK_STATUS(ov_infer_request_set_tensor(infer_request, input_tensor_name, tensor));
// Create blob for Y plane from raw data
status |= ie_blob_make_memory_from_preallocated(&y_tensor, img_data, y_plane_size, &y_blob);
// Create blob for UV plane from raw data
status |= ie_blob_make_memory_from_preallocated(&uv_tensor, img_data + y_plane_size, uv_plane_size, &uv_blob);
// Create NV12Blob from Y and UV blobs
status |= ie_blob_make_memory_nv12(y_blob, uv_blob, &nv12_blob);
if (status != OK) {
fprintf(stderr, "ERROR ie_blob_make_memory_* status %d, line %d\n", status, __LINE__);
goto err;
}
status = ie_infer_request_set_blob(infer_request, input_name, nv12_blob);
if (status != OK) {
fprintf(stderr, "ERROR ie_infer_request_set_blob status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 7. Do inference
// --------------------------------------------------------
/* Running the request synchronously */
status = ie_infer_request_infer(infer_request);
if (status != OK) {
fprintf(stderr, "ERROR ie_infer_request_infer status %d, line %d\n", status, __LINE__);
goto err;
}
// -----------------------------------------------------------------------------------------------------
// --------------------------- Step 8. Process output
// ------------------------------------------------------
status = ie_infer_request_get_blob(infer_request, output_name, &output_blob);
if (status != OK) {
fprintf(stderr, "ERROR ie_infer_request_get_blob status %d, line %d\n", status, __LINE__);
goto err;
}
size_t class_num;
struct classify_res* cls = output_blob_to_classify_res(output_blob, &class_num);
classify_res_sort(cls, class_num);
// -------- Step 7. Do inference --------
// Running the request synchronously
CHECK_STATUS(ov_infer_request_infer(infer_request));
// -------- Step 8. Process output --------
CHECK_STATUS(ov_infer_request_get_output_tensor(infer_request, 0, &output_tensor));
// Print classification results
size_t results_num = 0;
results = tensor_to_infer_result(output_tensor, &results_num);
if (!results) {
goto err;
}
infer_result_sort(results, results_num);
size_t top = 10;
if (top > class_num) {
top = class_num;
if (top > results_num) {
top = results_num;
}
printf("\nTop %zu results:\n", top);
print_classify_res(cls, top, input_image_path);
// -----------------------------------------------------------------------------------------------------
print_infer_result(results, top, input_image_path);
free(cls);
ie_blob_free(&output_blob);
ie_blob_free(&nv12_blob);
ie_blob_free(&uv_blob);
ie_blob_free(&y_blob);
ie_infer_request_free(&infer_request);
ie_exec_network_free(&exe_network);
ie_network_name_free(&input_name);
ie_network_name_free(&output_name);
ie_network_free(&network);
ie_core_free(&core);
free(img_data);
return EXIT_SUCCESS;
// -------- free allocated resources --------
err:
if (core)
ie_core_free(&core);
if (network)
ie_network_free(&network);
if (input_name)
ie_network_name_free(&input_name);
if (output_name)
ie_network_name_free(&output_name);
if (exe_network)
ie_exec_network_free(&exe_network);
free(results);
free(img_data);
ov_shape_deinit(&input_shape);
ov_free(input_tensor_name);
ov_free(output_tensor_name);
ov_output_node_list_free(&output_nodes);
ov_output_node_list_free(&input_nodes);
if (output_tensor)
ov_tensor_free(output_tensor);
if (infer_request)
ie_infer_request_free(&infer_request);
if (nv12_blob)
ie_blob_free(&nv12_blob);
if (uv_blob)
ie_blob_free(&uv_blob);
if (y_blob)
ie_blob_free(&y_blob);
if (output_blob)
ie_blob_free(&output_blob);
if (img_data)
free(img_data);
return EXIT_FAILURE;
ov_infer_request_free(infer_request);
if (compiled_model)
ov_compiled_model_free(compiled_model);
if (p_input_model)
ov_preprocess_inputmodelinfo_free(p_input_model);
if (input_process)
ov_preprocess_preprocesssteps_free(input_process);
if (model_layout)
ov_layout_free(model_layout);
if (input_tensor_info)
ov_preprocess_inputtensorinfo_free(input_tensor_info);
if (input_info)
ov_preprocess_inputinfo_free(input_info);
if (preprocess)
ov_preprocess_prepostprocessor_free(preprocess);
if (new_model)
ov_model_free(new_model);
if (tensor)
ov_tensor_free(tensor);
if (model)
ov_model_free(model);
if (core)
ov_core_free(core);
return EXIT_SUCCESS;
}


@@ -1,6 +0,0 @@
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
ie_add_sample(NAME hello_nv12_input_classification_ov_c
SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/main.c")


@ -1,90 +0,0 @@
# Hello NV12 Input Classification C Sample for OpenVINO 2.0 C-API
This sample demonstrates how to execute inference on image classification networks, such as AlexNet, with images in the NV12 color format, using the Synchronous Inference Request API.
## How It Works
Upon start-up, the sample application reads command-line parameters and loads the specified network and an
image in the NV12 color format to an Inference Engine plugin. Then, the sample creates a synchronous inference request object. When inference is done, the
application outputs data to the standard output stream.
You can find a detailed description of
each sample step in the [Integration Steps](../../../docs/OV_Runtime_UG/integrate_with_your_application.md) section of the "Integrate OpenVINO™ Runtime with Your Application" guide.
## Building
To build the sample, please use the instructions available in the [Build the Sample Applications](../../../docs/OV_Runtime_UG/Samples_Overview.md) section of the Inference Engine Samples guide.
## Running
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
The sample accepts an uncompressed image in the NV12 color format. To run the sample, you need to
convert your BGR/RGB image to NV12. To do this, you can use one of the widely available tools such
as FFmpeg\* or GStreamer\*. The following command shows how to convert an ordinary image into an
uncompressed NV12 image using FFmpeg:
```sh
ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv
```
> **NOTES**:
>
> - Because the sample reads raw image files, you should provide a correct image size along with the
> image path. The sample expects the logical size of the image, not the buffer size. For example,
> for a 640x480 BGR/RGB image the corresponding NV12 logical image size is also 640x480, whereas the
> buffer size is 640x720 (see the buffer-size sketch after these notes).
> - By default, this sample expects that network input has BGR channels order. If you trained your
> model to work with RGB order, you need to reconvert your model using the Model Optimizer tool
> with `--reverse_input_channels` argument specified. For more information about the argument,
> refer to **When to Reverse Input Channels** section of
> [Embedding Preprocessing Computation](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model.md).
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
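
As a quick cross-check of the sizes in the note above, the NV12 buffer size can be computed with a tiny helper (a sketch; `nv12_buffer_size` is a hypothetical name, not part of the sample):

```c
#include <stddef.h>

/* NV12 stores a full-resolution Y plane plus a half-height interleaved UV plane,
 * so the buffer holds width * height * 3 / 2 bytes.
 * For a 640x480 logical image this gives 640 * 720 = 460800 bytes. */
size_t nv12_buffer_size(size_t width, size_t height) {
    return width * height * 3 / 2;
}
```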
### Example
1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
```sh
python <path_to_omz_tools>/downloader.py --name alexnet
```
2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
```sh
python <path_to_omz_tools>/converter.py --name alexnet
```
3. Perform inference of an NV12 image using the `alexnet` model on a `CPU`, for example:
```sh
<path_to_sample>/hello_nv12_input_classification_ov_c <path_to_model>/alexnet.xml <path_to_image>/cat.yuv 300x300 CPU
```
## Sample Output
The application outputs top-10 inference results.
```
Top 10 results:
Image <path_to_image>/cat.yuv
classid probability
------- -----------
876 0.125426
435 0.120252
285 0.068099
282 0.056738
281 0.032151
36 0.027748
94 0.027691
999 0.026507
335 0.021384
186 0.017978
This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```


@ -1,353 +0,0 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "c_api/ov_c_api.h"
/**
* @brief Struct to store infer results
*/
struct infer_result {
size_t class_id;
float probability;
};
/**
* @brief Sort result by probability
* @param struct with infer results to sort
* @param result_size of the struct
* @return none
*/
int compare(const void* a, const void* b) {
const struct infer_result* sa = (const struct infer_result*)a;
const struct infer_result* sb = (const struct infer_result*)b;
if (sa->probability < sb->probability) {
return 1;
} else if ((sa->probability == sb->probability) && (sa->class_id > sb->class_id)) {
return 1;
} else if (sa->probability > sb->probability) {
return -1;
}
return 0;
}
void infer_result_sort(struct infer_result* results, size_t result_size) {
qsort(results, result_size, sizeof(struct infer_result), compare);
}
/**
* @brief Convert output tensor to infer result struct for processing results
* @param tensor of output tensor
* @param result_size of the infer result
* @return struct infer_result
*/
struct infer_result* tensor_to_infer_result(ov_tensor_t* tensor, size_t* result_size) {
ov_status_e status = ov_tensor_get_size(tensor, result_size);
if (status != OK)
return NULL;
struct infer_result* results = (struct infer_result*)malloc(sizeof(struct infer_result) * (*result_size));
if (!results)
return NULL;
void* data = NULL;
status = ov_tensor_get_data(tensor, &data);
if (status != OK) {
free(results);
return NULL;
}
float* float_data = (float*)(data);
for (size_t i = 0; i < *result_size; ++i) {
results[i].class_id = i;
results[i].probability = float_data[i];
}
return results;
}
/**
* @brief Print results of infer
* @param results of the infer results
* @param result_size of the struct of classification results
* @param img_path image path
* @return none
*/
void print_infer_result(struct infer_result* results, size_t result_size, const char* img_path) {
printf("\nImage %s\n", img_path);
printf("\nclassid probability\n");
printf("------- -----------\n");
for (size_t i = 0; i < result_size; ++i) {
printf("%zu %f\n", results[i].class_id, results[i].probability);
}
}
void print_model_input_output_info(ov_model_t* model) {
char* friendly_name = NULL;
ov_model_get_friendly_name(model, &friendly_name);
printf("[INFO] model name: %s \n", friendly_name);
ov_free(friendly_name);
}
/**
* @brief Check image has supported width and height
* @param string image size in WIDTHxHEIGHT format
* @param pointer to image width
* @param pointer to image height
* @return bool status True(success) or False(fail)
*/
bool is_supported_image_size(const char* size_str, size_t* width, size_t* height) {
const char* _size = size_str;
size_t _width = 0, _height = 0;
while (_size && *_size != 'x' && *_size != '\0') {
if ((*_size <= '9') && (*_size >= '0')) {
_width = (_width * 10) + (*_size - '0');
_size++;
} else {
goto err;
}
}
if (_size)
_size++;
while (_size && *_size != '\0') {
if ((*_size <= '9') && (*_size >= '0')) {
_height = (_height * 10) + (*_size - '0');
_size++;
} else {
goto err;
}
}
if (_width > 0 && _height > 0) {
if (_width % 2 == 0 && _height % 2 == 0) {
*width = _width;
*height = _height;
return true;
} else {
printf("Unsupported image size, width and height must be even numbers \n");
return false;
}
} else {
goto err;
}
err:
printf("Incorrect format of image size parameter, expected WIDTHxHEIGHT, "
"actual: %s\n",
size_str);
return false;
}
size_t read_image_from_file(const char* img_path, unsigned char* img_data, size_t size) {
FILE* fp = fopen(img_path, "rb");
size_t read_size = 0;
if (fp) {
fseek(fp, 0, SEEK_END);
if (ftell(fp) >= size) {
fseek(fp, 0, SEEK_SET);
read_size = fread(img_data, 1, size, fp);
}
fclose(fp);
}
return read_size;
}
#define CHECK_STATUS(return_status) \
if (return_status != OK) { \
fprintf(stderr, "[ERROR] return status %d, line %d\n", return_status, __LINE__); \
goto err; \
}
int main(int argc, char** argv) {
// -------- Check input parameters --------
if (argc != 5) {
printf("Usage : ./hello_classification_ov_c <path_to_model> <path_to_image> "
"<WIDTHxHEIGHT> <device_name>\n");
return EXIT_FAILURE;
}
size_t input_width = 0, input_height = 0, img_size = 0;
if (!is_supported_image_size(argv[3], &input_width, &input_height)) {
fprintf(stderr, "ERROR is_supported_image_size, line %d\n", __LINE__);
return EXIT_FAILURE;
}
unsigned char* img_data = NULL;
ov_core_t* core = NULL;
ov_model_t* model = NULL;
ov_tensor_t* tensor = NULL;
ov_preprocess_t* preprocess = NULL;
ov_preprocess_input_info_t* input_info = NULL;
ov_model_t* new_model = NULL;
ov_preprocess_input_tensor_info_t* input_tensor_info = NULL;
ov_preprocess_input_process_steps_t* input_process = NULL;
ov_preprocess_input_model_info_t* p_input_model = NULL;
ov_compiled_model_t* compiled_model = NULL;
ov_infer_request_t* infer_request = NULL;
ov_tensor_t* output_tensor = NULL;
struct infer_result* results = NULL;
char* input_tensor_name = NULL;
char* output_tensor_name = NULL;
ov_output_node_list_t input_nodes = {.num = 0, .output_nodes = NULL};
ov_output_node_list_t output_nodes = {.num = 0, .output_nodes = NULL};
// -------- Get OpenVINO runtime version --------
ov_version_t version = {.description = NULL, .buildNumber = NULL};
CHECK_STATUS(ov_get_version(&version));
printf("---- OpenVINO INFO----\n");
printf("description : %s \n", version.description);
printf("build number: %s \n", version.buildNumber);
ov_version_free(&version);
// -------- Parsing and validation of input arguments --------
const char* input_model = argv[1];
const char* input_image_path = argv[2];
const char* device_name = argv[4];
// -------- Step 1. Initialize OpenVINO Runtime Core --------
CHECK_STATUS(ov_core_create("", &core));
// -------- Step 2. Read a model --------
printf("[INFO] Loading model files: %s\n", input_model);
CHECK_STATUS(ov_core_read_model(core, input_model, NULL, &model));
print_model_input_output_info(model);
CHECK_STATUS(ov_model_get_outputs(model, &output_nodes));
if (output_nodes.num != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 output only %d\n", __LINE__);
goto err;
}
CHECK_STATUS(ov_model_get_inputs(model, &input_nodes));
if (input_nodes.num != 1) {
fprintf(stderr, "[ERROR] Sample supports models with 1 input only %d\n", __LINE__);
goto err;
}
CHECK_STATUS(ov_node_get_tensor_name(&input_nodes, 0, &input_tensor_name));
CHECK_STATUS(ov_node_get_tensor_name(&output_nodes, 0, &output_tensor_name));
// -------- Step 3. Configure preprocessing --------
CHECK_STATUS(ov_preprocess_create(model, &preprocess));
// 1) Select input with 'input_tensor_name' tensor name
CHECK_STATUS(ov_preprocess_get_input_info_by_name(preprocess, input_tensor_name, &input_info));
// 2) Set input type
// - as 'u8' precision
// - set color format to NV12 (single plane)
// - static spatial dimensions for resize preprocessing operation
CHECK_STATUS(ov_preprocess_input_get_tensor_info(input_info, &input_tensor_info));
CHECK_STATUS(ov_preprocess_input_tensor_info_set_element_type(input_tensor_info, U8));
CHECK_STATUS(ov_preprocess_input_tensor_info_set_color_format(input_tensor_info, NV12_SINGLE_PLANE));
CHECK_STATUS(
ov_preprocess_input_tensor_info_set_spatial_static_shape(input_tensor_info, input_height, input_width));
// 3) Pre-processing steps:
// a) Convert to 'float'. This makes the color conversion more accurate
// b) Convert to BGR: Assumes that model accepts images in BGR format. For RGB, change it manually
// c) Resize image from tensor's dimensions to model ones
CHECK_STATUS(ov_preprocess_input_get_preprocess_steps(input_info, &input_process));
CHECK_STATUS(ov_preprocess_input_convert_element_type(input_process, F32));
CHECK_STATUS(ov_preprocess_input_convert_color(input_process, BGR));
CHECK_STATUS(ov_preprocess_input_resize(input_process, RESIZE_LINEAR));
// 4) Set model data layout (Assuming model accepts images in NCHW layout)
CHECK_STATUS(ov_preprocess_input_get_model_info(input_info, &p_input_model));
ov_layout_t model_layout = {'N', 'C', 'H', 'W'};
CHECK_STATUS(ov_preprocess_input_model_set_layout(p_input_model, model_layout));
// 5) Apply preprocessing to an input with 'input_tensor_name' name of loaded model
CHECK_STATUS(ov_preprocess_build(preprocess, &new_model));
// -------- Step 4. Loading a model to the device --------
CHECK_STATUS(ov_core_compile_model(core, new_model, device_name, &compiled_model, NULL));
// -------- Step 5. Create an infer request --------
CHECK_STATUS(ov_compiled_model_create_infer_request(compiled_model, &infer_request));
// -------- Step 6. Prepare input data --------
img_size = input_width * (input_height * 3 / 2);
if (!img_size) {
fprintf(stderr, "[ERROR] Invalid Image size, line %d\n", __LINE__);
goto err;
}
img_data = (unsigned char*)calloc(img_size, sizeof(unsigned char));
if (!img_data) {
fprintf(stderr, "[ERROR] calloc returned NULL, line %d\n", __LINE__);
goto err;
}
if (img_size != read_image_from_file(input_image_path, img_data, img_size)) {
fprintf(stderr, "[ERROR] Image dimensions not match with NV12 file size, line %d\n", __LINE__);
goto err;
}
ov_element_type_e input_type = U8;
size_t batch = 1;
ov_shape_t input_shape = {.rank = 4, .dims = {batch, input_height * 3 / 2, input_width, 1}};
CHECK_STATUS(ov_tensor_create_from_host_ptr(input_type, input_shape, img_data, &tensor));
// -------- Step 6. Set input tensor --------
// Set the input tensor by tensor name to the InferRequest
CHECK_STATUS(ov_infer_request_set_tensor(infer_request, input_tensor_name, tensor));
// -------- Step 7. Do inference --------
// Running the request synchronously
CHECK_STATUS(ov_infer_request_infer(infer_request));
// -------- Step 8. Process output --------
CHECK_STATUS(ov_infer_request_get_out_tensor(infer_request, 0, &output_tensor));
// Print classification results
size_t results_num = 0;
results = tensor_to_infer_result(output_tensor, &results_num);
if (!results) {
goto err;
}
infer_result_sort(results, results_num);
size_t top = 10;
if (top > results_num) {
top = results_num;
}
printf("\nTop %zu results:\n", top);
print_infer_result(results, top, input_image_path);
// -------- free allocated resources --------
err:
free(results);
free(img_data);
ov_free(input_tensor_name);
ov_free(output_tensor_name);
ov_output_node_list_free(&output_nodes);
ov_output_node_list_free(&input_nodes);
if (output_tensor)
ov_tensor_free(output_tensor);
if (infer_request)
ov_infer_request_free(infer_request);
if (compiled_model)
ov_compiled_model_free(compiled_model);
if (p_input_model)
ov_preprocess_input_model_info_free(p_input_model);
if (input_process)
ov_preprocess_input_process_steps_free(input_process);
if (input_tensor_info)
ov_preprocess_input_tensor_info_free(input_tensor_info);
if (input_info)
ov_preprocess_input_info_free(input_info);
if (preprocess)
ov_preprocess_free(preprocess);
if (new_model)
ov_model_free(new_model);
if (tensor)
ov_tensor_free(tensor);
if (model)
ov_model_free(model);
if (core)
ov_core_free(core);
return EXIT_SUCCESS;
}


@ -230,7 +230,7 @@ macro(ie_add_sample)
find_package(OpenVINO REQUIRED COMPONENTS Runtime)
if(c_sample)
set(ov_link_libraries openvino::runtime::ov openvino::runtime::c)
set(ov_link_libraries openvino::runtime::c)
else()
set(ov_link_libraries openvino::runtime)
endif()


@ -2,7 +2,6 @@
# SPDX-License-Identifier: Apache-2.0
#
add_subdirectory(c)
add_subdirectory(c/ov)
if(ENABLE_PYTHON)
add_subdirectory(python)


@ -2,7 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
#
project(InferenceEngine_C_API)
project(OpenVINO_C_API)
add_subdirectory(src)

File diff suppressed because it is too large


@ -0,0 +1,30 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @file openvino.h
* The C API of the OpenVINO 2.0 bridge enables native applications to use the
* OpenVINO 2.0 library and all of its plugins without the C++ API. The API
* covers a significant part of the C++ API and includes the ability to read a
* model from disk, modify input and output information to match its runtime
* representation (such as data types or memory layout), load an in-memory
* model onto different devices (including heterogeneous and multi-device
* modes), manage the memory where inputs and outputs are allocated, and
* manage the inference flow.
**/
#pragma once
#include "openvino/c/ov_compiled_model.h"
#include "openvino/c/ov_core.h"
#include "openvino/c/ov_dimension.h"
#include "openvino/c/ov_infer_request.h"
#include "openvino/c/ov_layout.h"
#include "openvino/c/ov_model.h"
#include "openvino/c/ov_node.h"
#include "openvino/c/ov_partial_shape.h"
#include "openvino/c/ov_prepostprocess.h"
#include "openvino/c/ov_property.h"
#include "openvino/c/ov_rank.h"
#include "openvino/c/ov_shape.h"
#include "openvino/c/ov_tensor.h"


@ -0,0 +1,111 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a common header file for the C API
*
* @file ov_common.h
*/
#pragma once
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#ifdef __cplusplus
# define OPENVINO_C_API_EXTERN extern "C"
#else
# define OPENVINO_C_API_EXTERN
#endif
#if defined(OPENVINO_STATIC_LIBRARY) || defined(__GNUC__) && (__GNUC__ < 4)
# define OPENVINO_C_API(...) OPENVINO_C_API_EXTERN __VA_ARGS__
# define OV_NODISCARD
#else
# if defined(_WIN32)
# define OPENVINO_C_API_CALLBACK __cdecl
# ifdef openvino_c_EXPORTS
# define OPENVINO_C_API(...) OPENVINO_C_API_EXTERN __declspec(dllexport) __VA_ARGS__ __cdecl
# else
# define OPENVINO_C_API(...) OPENVINO_C_API_EXTERN __declspec(dllimport) __VA_ARGS__ __cdecl
# endif
# define OV_NODISCARD
# else
# define OPENVINO_C_API(...) OPENVINO_C_API_EXTERN __attribute__((visibility("default"))) __VA_ARGS__
# define OV_NODISCARD __attribute__((warn_unused_result))
# endif
#endif
#ifndef OPENVINO_C_API_CALLBACK
# define OPENVINO_C_API_CALLBACK
#endif
/**
* @enum ov_status_e
* @brief This enum contains codes for all possible return values of the interface functions
*/
typedef enum {
OK = 0,
/*
* @brief Error codes mapped from the C++ API exceptions
*/
GENERAL_ERROR = -1,
NOT_IMPLEMENTED = -2,
NETWORK_NOT_LOADED = -3,
PARAMETER_MISMATCH = -4,
NOT_FOUND = -5,
OUT_OF_BOUNDS = -6,
/*
* @brief An exception not derived from std::exception was thrown
*/
UNEXPECTED = -7,
REQUEST_BUSY = -8,
RESULT_NOT_READY = -9,
NOT_ALLOCATED = -10,
INFER_NOT_STARTED = -11,
NETWORK_NOT_READ = -12,
INFER_CANCELLED = -13,
/*
* @brief An error occurred in the C wrapper itself
*/
INVALID_C_PARAM = -14,
UNKNOWN_C_ERROR = -15,
} ov_status_e;
/**
* @enum ov_element_type_e
* @brief This enum contains codes for element type.
*/
typedef enum {
UNDEFINED = 0U, //!< Undefined element type
DYNAMIC, //!< Dynamic element type
BOOLEAN, //!< boolean element type
BF16, //!< bf16 element type
F16, //!< f16 element type
F32, //!< f32 element type
F64, //!< f64 element type
I4, //!< i4 element type
I8, //!< i8 element type
I16, //!< i16 element type
I32, //!< i32 element type
I64, //!< i64 element type
U1, //!< binary element type
U4, //!< u4 element type
U8, //!< u8 element type
U16, //!< u16 element type
U32, //!< u32 element type
U64, //!< u64 element type
} ov_element_type_e;
/**
* @brief Convert a status code into a readable error description.
* @param status A status code.
* @return A string describing the error.
*/
OPENVINO_C_API(const char*) ov_get_error_info(ov_status_e status);
/**
* @brief Free a string allocated by the C API.
* @param content The pointer to the string to free.
*/
OPENVINO_C_API(void) ov_free(const char* content);
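
To make the status contract above concrete, here is a minimal error-handling sketch (a hypothetical caller; the include path follows the header list in openvino.h above, and ov_core_create/ov_core_free are declared in ov_core.h below):

```c
#include <stdio.h>
#include <stdlib.h>

#include "openvino/c/openvino.h"

int main(void) {
    ov_core_t* core = NULL;
    ov_status_e status = ov_core_create(&core);
    if (status != OK) {
        // ov_get_error_info maps the status code to a readable description.
        fprintf(stderr, "ov_core_create failed: %s\n", ov_get_error_info(status));
        return EXIT_FAILURE;
    }
    ov_core_free(core);
    return EXIT_SUCCESS;
}
```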


@ -0,0 +1,111 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a C header file for the ov_compiled_model API, which is a C wrapper for ov::CompiledModel class.
* A compiled model is compiled by a specific device by applying multiple optimization
* transformations, then mapping to compute kernels.
* @file ov_compiled_model.h
*/
#pragma once
#include "openvino/c/ov_common.h"
#include "openvino/c/ov_infer_request.h"
#include "openvino/c/ov_model.h"
#include "openvino/c/ov_node.h"
#include "openvino/c/ov_property.h"
typedef struct ov_compiled_model ov_compiled_model_t;
// Compiled Model
/**
* @defgroup compiled_model compiled_model
* @ingroup openvino_c
* Set of functions representing a Compiled Model.
* @{
*/
/**
* @brief Gets runtime model information from a device.
* @ingroup compiled_model
* @param compiled_model A pointer to the ov_compiled_model_t.
* @param model A pointer to the ov_model_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_get_runtime_model(const ov_compiled_model_t* compiled_model, ov_model_t** model);
/**
* @brief Gets all inputs of a compiled model.
* @ingroup compiled_model
* @param compiled_model A pointer to the ov_compiled_model_t.
* @param input_nodes A pointer to the ov_input_nodes.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_inputs(const ov_compiled_model_t* compiled_model, ov_output_node_list_t* input_nodes);
/**
* @brief Get all outputs of a compiled model.
* @ingroup compiled_model
* @param compiled_model A pointer to the ov_compiled_model_t.
* @param output_nodes A pointer to the ov_output_node_list_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_outputs(const ov_compiled_model_t* compiled_model, ov_output_node_list_t* output_nodes);
/**
* @brief Creates an inference request object used to infer the compiled model.
* @ingroup compiled_model
* @param compiled_model A pointer to the ov_compiled_model_t.
* @param infer_request A pointer to the ov_infer_request_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_create_infer_request(const ov_compiled_model_t* compiled_model, ov_infer_request_t** infer_request);
/**
* @brief Sets properties for the current compiled model.
* @ingroup compiled_model
* @param compiled_model A pointer to the ov_compiled_model_t.
* @param property A pointer to the ov_property_t to set.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_set_property(const ov_compiled_model_t* compiled_model, const ov_property_t* property);
/**
* @brief Gets a property of the current compiled model.
* @ingroup compiled_model
* @param compiled_model A pointer to the ov_compiled_model_t.
* @param key The property key.
* @param value A pointer to the property value.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_get_property(const ov_compiled_model_t* compiled_model,
const ov_property_key_e key,
ov_property_value_t* value);
/**
* @brief Exports the current compiled model to an output stream `std::ostream`.
* The exported model can also be imported via the ov::Core::import_model method.
* @ingroup compiled_model
* @param compiled_model A pointer to the ov_compiled_model_t.
* @param export_model_path Path to the file.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_compiled_model_export_model(const ov_compiled_model_t* compiled_model, const char* export_model_path);
/**
* @brief Release the memory allocated by ov_compiled_model_t.
* @ingroup compiled_model
* @param compiled_model A pointer to the ov_compiled_model_t to free memory.
*/
OPENVINO_C_API(void) ov_compiled_model_free(ov_compiled_model_t* compiled_model);
/** @} */ // end of compiled_model
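
The intended lifecycle can be sketched as follows (error handling elided; `core` and `model` are assumed to exist, and "CPU" is just an example device):

```c
ov_compiled_model_t* compiled_model = NULL;
ov_infer_request_t* infer_request = NULL;

// Compile the model for a device, then derive an infer request from it.
if (ov_core_compile_model(core, model, "CPU", &compiled_model, NULL) == OK &&
    ov_compiled_model_create_infer_request(compiled_model, &infer_request) == OK) {
    // ... set tensors and run inference ...
}

// Release in reverse order of creation.
if (infer_request)
    ov_infer_request_free(infer_request);
if (compiled_model)
    ov_compiled_model_free(compiled_model);
```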


@ -0,0 +1,268 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for the ov_core C API, which is a C wrapper for ov::Core class.
* This class represents an OpenVINO runtime Core entity.
* @file ov_core.h
*/
#pragma once
#include "openvino/c/ov_common.h"
#include "openvino/c/ov_compiled_model.h"
#include "openvino/c/ov_model.h"
#include "openvino/c/ov_node.h"
#include "openvino/c/ov_property.h"
#include "openvino/c/ov_tensor.h"
typedef struct ov_core ov_core_t;
/**
* @struct ov_version
* @brief Represents OpenVINO version information
*/
typedef struct ov_version {
const char* buildNumber; //!< A string representing OpenVINO version
const char* description; //!< A string representing OpenVINO description
} ov_version_t;
/**
* @struct ov_core_version
* @brief Represents version information that describes device and ov runtime library
*/
typedef struct {
const char* device_name; //!< A device name
ov_version_t version; //!< Version
} ov_core_version_t;
/**
* @struct ov_core_version_list
* @brief Represents version information that describes all devices and ov runtime library
*/
typedef struct {
ov_core_version_t* versions; //!< An array of device versions
size_t size; //!< A number of versions in the array
} ov_core_version_list_t;
/**
* @struct ov_available_devices_t
* @brief Represent all available devices.
*/
typedef struct {
char** devices;
size_t size;
} ov_available_devices_t;
/**
* @brief Get version of OpenVINO.
* @param version A pointer to the ov_version_t to be filled in.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_get_openvino_version(ov_version_t* version);
/**
* @brief Release the memory allocated by ov_version_t.
* @param version A pointer to the ov_version_t to free memory.
*/
OPENVINO_C_API(void) ov_version_free(ov_version_t* version);
// OV Core
/**
* @defgroup Core Core
* @ingroup openvino_c
* Set of functions dedicated to working with registered plugins and loading
* a model to the registered devices.
* @{
*/
/**
* @brief Constructs OpenVINO Core instance by default.
* See RegisterPlugins for more details.
* @ingroup Core
* @param core A pointer to the newly created ov_core_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_core_create(ov_core_t** core);
/**
* @brief Constructs OpenVINO Core instance using XML configuration file with devices description.
* See RegisterPlugins for more details.
* @ingroup Core
* @param xml_config_file A path to .xml file with devices to load from. If XML configuration file is not specified,
* then default plugin.xml file will be used.
* @param core A pointer to the newly created ov_core_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_core_create_with_config(const char* xml_config_file, ov_core_t** core);
/**
* @brief Release the memory allocated by ov_core_t.
* @ingroup Core
* @param core A pointer to the ov_core_t to free memory.
*/
OPENVINO_C_API(void) ov_core_free(ov_core_t* core);
/**
* @brief Reads models from IR/ONNX/PDPD formats.
* @ingroup Core
* @param core A pointer to the ov_core_t instance.
* @param model_path Path to a model.
* @param bin_path Path to a data file.
* For IR format (*.bin):
* * if path is empty, will try to read a bin file with the same name as xml and
* * if the bin file with the same name is not found, will load IR without weights.
* For ONNX format (*.onnx):
* * the bin_path parameter is not used.
* For PDPD format (*.pdmodel)
* * the bin_path parameter is not used.
* @param model A pointer to the newly created model.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_read_model(const ov_core_t* core, const char* model_path, const char* bin_path, ov_model_t** model);
/**
* @brief Reads models from IR/ONNX/PDPD formats.
* @ingroup Core
* @param core A pointer to the ov_core_t instance.
* @param model_str String with a model in IR/ONNX/PDPD format.
* @param weights Shared pointer to a constant tensor with weights.
* @param model A pointer to the newly created model.
* Reading ONNX/PDPD models does not support loading weights from the @p weights tensors.
* @note Created model object shares the weights with the @p weights object.
* Thus, do not create @p weights on temporary data that can be freed later, since the model
* constant data will point to an invalid memory.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_read_model_from_memory(const ov_core_t* core,
const char* model_str,
const ov_tensor_t* weights,
ov_model_t** model);
/**
* @brief Creates a compiled model from a source model object.
* Users can create as many compiled models as they need and use
* them simultaneously (up to the limitation of the hardware resources).
* @ingroup Core
* @param core A pointer to the ov_core_t instance.
* @param model Model object acquired from Core::read_model.
* @param device_name Name of a device to load a model to.
* @param compiled_model A pointer to the newly created compiled_model.
* @param property Optional pack of pairs: (property name, property value) relevant only for this load operation.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_compile_model(const ov_core_t* core,
const ov_model_t* model,
const char* device_name,
ov_compiled_model_t** compiled_model,
const ov_property_t* property);
/**
* @brief Reads a model and creates a compiled model from the IR/ONNX/PDPD file.
* This can be more efficient than using the ov_core_read_model_from_XXX + ov_core_compile_model flow,
* especially for cases when caching is enabled and a cached model is available.
* @ingroup Core
* @param core A pointer to the ov_core_t instance.
* @param model_path Path to a model.
* @param device_name Name of a device to load a model to.
* @param compiled_model A pointer to the newly created compiled_model.
* @param property Optional pack of pairs: (property name, property value) relevant only for this load operation.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_compile_model_from_file(const ov_core_t* core,
const char* model_path,
const char* device_name,
ov_compiled_model_t** compiled_model,
const ov_property_t* property);
/**
* @brief Sets properties for a device, acceptable keys can be found in ov_property_key_e.
* @ingroup Core
* @param core A pointer to the ov_core_t instance.
* @param device_name Name of a device.
* @param property A pointer to the ov_property_t to set.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_set_property(const ov_core_t* core, const char* device_name, const ov_property_t* property);
/**
* @brief Gets properties related to device behaviour.
* The method extracts information that can be set via the set_property method.
* @ingroup Core
* @param core A pointer to the ov_core_t instance.
* @param device_name Name of a device to get a property value.
* @param property_name Property name.
* @param property_value A pointer to property value.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_get_property(const ov_core_t* core,
const char* device_name,
const ov_property_key_e property_name,
ov_property_value_t* property_value);
/**
* @brief Returns devices available for inference.
* @ingroup Core
* @param core A pointer to the ov_core_t instance.
* @param devices A pointer to the ov_available_devices_t instance.
* Core objects go over all registered plugins and ask about available devices.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_core_get_available_devices(const ov_core_t* core, ov_available_devices_t* devices);
/**
* @brief Releases memory occupied by ov_available_devices_t.
* @ingroup Core
* @param devices A pointer to the ov_available_devices_t instance.
*/
OPENVINO_C_API(void) ov_available_devices_free(ov_available_devices_t* devices);
/**
* @brief Imports a compiled model from the previously exported one.
* @ingroup Core
* @param core A pointer to the ov_core_t instance.
* @param content A pointer to content of the exported model.
* @param content_size Number of bytes in the exported network.
* @param device_name Name of a device to import a compiled model for.
* @param compiled_model A pointer to the newly created compiled_model.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_import_model(const ov_core_t* core,
const char* content,
const size_t content_size,
const char* device_name,
ov_compiled_model_t** compiled_model);
/**
* @brief Returns device plugins version information.
* Device name can be complex and identify multiple devices at once, like `HETERO:CPU,GPU`;
* in this case, the returned list contains multiple entries, one per device.
* @ingroup Core
* @param core A pointer to the ov_core_t instance.
* @param device_name Device name to identify a plugin.
* @param versions A pointer to versions corresponding to device_name.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_core_get_versions_by_device_name(const ov_core_t* core, const char* device_name, ov_core_version_list_t* versions);
/**
* @brief Releases memory occupied by ov_core_version_list_t.
* @ingroup Core
* @param versions A pointer to the ov_core_version_list_t to free memory.
*/
OPENVINO_C_API(void) ov_core_versions_free(ov_core_version_list_t* versions);
/** @} */ // end of Core
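
For example, enumerating the available devices could look like this sketch (assumes a valid `core`):

```c
ov_available_devices_t devices = {NULL, 0};
if (ov_core_get_available_devices(core, &devices) == OK) {
    for (size_t i = 0; i < devices.size; ++i) {
        printf("available device: %s\n", devices.devices[i]);
    }
    ov_available_devices_free(&devices);
}
```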


@ -0,0 +1,96 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for ov_dimension C API, which is a C wrapper for ov::Dimension class.
*
* @file ov_dimension.h
*/
#pragma once
#include "openvino/c/ov_common.h"
typedef struct ov_dimension ov_dimension_t;
typedef struct ov_dimensions ov_dimensions_t;
// Dimension
/**
* @defgroup dimension dimension
* @ingroup openvino_c
* Set of functions representing a Dimension.
* @{
*/
/**
* @brief Create a static dimension object
* @ingroup dimension
* @param dimension A pointer to the newly created ov_dimension_t.
* @param dimension_value The dimension value for this object.
* @return ov_status_e a status code.
*/
OPENVINO_C_API(ov_status_e) ov_dimension_create(ov_dimension_t** dimension, int64_t dimension_value);
/**
* @brief Create a dynamic dimension object
* @ingroup dimension
* @param dimension A pointer to the newly created ov_dimension_t.
* @param min_dimension The lower, inclusive limit for the dimension; for a static dimension, set the
* same value (>= 0) as max_dimension.
* @param max_dimension The upper, inclusive limit for the dimension; for a static dimension, set the
* same value (>= 0) as min_dimension.
* @return ov_status_e a status code.
*/
OPENVINO_C_API(ov_status_e)
ov_dimension_create_dynamic(ov_dimension_t** dimension, int64_t min_dimension, int64_t max_dimension);
/**
* @brief Release dimension object.
* @ingroup dimension
* @param dimension The dimension object to release.
*/
OPENVINO_C_API(void) ov_dimension_free(ov_dimension_t* dimension);
/**
* @brief Create a dimension vector object without any items in it
* @ingroup dimension
* @param dimensions A pointer to the newly created ov_dimensions_t.
* @return ov_status_e a status code.
*/
OPENVINO_C_API(ov_status_e) ov_dimensions_create(ov_dimensions_t** dimensions);
/**
* @brief Release a dimension vector object
* @ingroup dimension
* @param dimensions The dimension vector to release.
*/
OPENVINO_C_API(void) ov_dimensions_free(ov_dimensions_t* dimensions);
/**
* @brief Add a static dimension into dimensions
* @ingroup dimension
* @param dimension The dimension vector to add the value to.
* @param value The value for the dimension; it must be non-negative (>= 0).
*
* Static dimension: min_dimension == max_dimension >= 0
* Dynamic dimension:
* min_dimension == -1 ? 0 : min_dimension
* max_dimension == -1 ? Interval::s_max : max_dimension
*
*/
OPENVINO_C_API(ov_status_e) ov_dimensions_add(ov_dimensions_t* dimension, int64_t value);
/**
* @brief Add a dynamic dimension with bounded range into dimensions
* @ingroup dimension
* @param dimension The dimension vector to add the dynamic dimension to.
* @param min_dimension The lower, inclusive limit for the dimension; use -1 for no lower bound.
* @param max_dimension The upper, inclusive limit for the dimension; use -1 for no upper bound.
*
* Dynamic dimension:
* min_dimension == -1 ? 0 : min_dimension
* max_dimension == -1 ? Interval::s_max : max_dimension
*
*/
OPENVINO_C_API(ov_status_e)
ov_dimensions_add_dynamic(ov_dimensions_t* dimension, int64_t min_dimension, int64_t max_dimension);
/** @} */ // end of Dimension
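
Putting the two add functions together, building the dimension list for a shape such as {1, 3, ?, 224..448} might look like this sketch (status checks elided):

```c
ov_dimensions_t* dims = NULL;
ov_dimensions_create(&dims);
ov_dimensions_add(dims, 1);                // static: batch = 1
ov_dimensions_add(dims, 3);                // static: channels = 3
ov_dimensions_add_dynamic(dims, -1, -1);   // fully dynamic axis
ov_dimensions_add_dynamic(dims, 224, 448); // bounded dynamic axis: 224..448
// ... pass dims to ov_partial_shape_create ...
ov_dimensions_free(dims);
```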


@ -0,0 +1,167 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for the ov_infer_request C API, which is a C wrapper for ov::InferRequest class
* This is a class of infer request that can be run in asynchronous or synchronous manners.
* @file ov_infer_request.h
*/
#pragma once
#include "openvino/c/ov_common.h"
#include "openvino/c/ov_tensor.h"
typedef struct ov_infer_request ov_infer_request_t;
/**
* @struct ov_callback_t
* @brief Completion callback definition about the function and args
*/
typedef struct {
void(OPENVINO_C_API_CALLBACK* callback_func)(void* args);
void* args;
} ov_callback_t;
/**
* @struct ov_ProfilingInfo_t
* @brief Store profiling info data
*/
typedef struct {
enum Status { //!< Defines the general status of a node.
NOT_RUN, //!< A node is not executed.
OPTIMIZED_OUT, //!< A node is optimized out during graph optimization phase.
EXECUTED //!< A node is executed.
} status;
int64_t real_time; //!< The absolute time, in microseconds, that the node ran (in total).
int64_t cpu_time; //!< The net host CPU time that the node ran.
const char* node_name; //!< Name of a node.
const char* exec_type; //!< Execution type of a unit.
const char* node_type; //!< Node type.
} ov_profiling_info_t;
/**
* @struct ov_profiling_info_list_t
* @brief A list of profiling info data
*/
typedef struct {
ov_profiling_info_t* profiling_infos;
size_t size;
} ov_profiling_info_list_t;
// infer_request
/**
* @defgroup infer_request infer_request
* @ingroup openvino_c
* Set of functions representing an infer_request.
* @{
*/
/**
* @brief Sets an input/output tensor to infer on.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
* @param tensor_name Name of the input or output tensor.
* @param tensor Reference to the tensor.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_set_tensor(ov_infer_request_t* infer_request, const char* tensor_name, const ov_tensor_t* tensor);
/**
* @brief Sets an input tensor to infer on.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
* @param idx Index of the input tensor. If @p idx is greater than the number of model inputs, an error status is returned.
* @param tensor Reference to the tensor.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_set_input_tensor(ov_infer_request_t* infer_request, size_t idx, const ov_tensor_t* tensor);
/**
* @brief Gets an input/output tensor to infer on.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
* @param tensor_name Name of the input or output tensor.
* @param tensor Reference to the tensor.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_get_tensor(const ov_infer_request_t* infer_request, const char* tensor_name, ov_tensor_t** tensor);
/**
* @brief Gets an output tensor from the infer request.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
* @param idx Index of the tensor to get.
* @param tensor Reference to the tensor.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_get_output_tensor(const ov_infer_request_t* infer_request, size_t idx, ov_tensor_t** tensor);
/**
* @brief Infers specified input(s) in synchronous mode.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
*/
OPENVINO_C_API(ov_status_e) ov_infer_request_infer(ov_infer_request_t* infer_request);
/**
* @brief Cancels inference request.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
*/
OPENVINO_C_API(ov_status_e) ov_infer_request_cancel(ov_infer_request_t* infer_request);
/**
* @brief Starts inference of specified input(s) in asynchronous mode.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
*/
OPENVINO_C_API(ov_status_e) ov_infer_request_start_async(ov_infer_request_t* infer_request);
/**
* @brief Waits for the result to become available. Blocks until the result is ready.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
*/
OPENVINO_C_API(ov_status_e) ov_infer_request_wait(ov_infer_request_t* infer_request);
/**
* @brief Sets a callback function that is called when the asynchronous request completes.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
* @param callback A function to be called.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_set_callback(ov_infer_request_t* infer_request, const ov_callback_t* callback);
/**
* @brief Release the memory allocated by ov_infer_request_t.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t to free memory.
*/
OPENVINO_C_API(void) ov_infer_request_free(ov_infer_request_t* infer_request);
/**
* @brief Queries performance measures per layer to identify the most time consuming operation.
* @ingroup infer_request
* @param infer_request A pointer to the ov_infer_request_t.
* @param profiling_infos Vector of profiling information for operations in a model.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_infer_request_get_profiling_info(ov_infer_request_t* infer_request, ov_profiling_info_list_t* profiling_infos);
/**
* @brief Release the memory allocated by ov_profiling_info_list_t.
* @ingroup infer_request
* @param profiling_infos A pointer to the ov_profiling_info_list_t to free memory.
*/
OPENVINO_C_API(void) ov_profiling_info_list_free(ov_profiling_info_list_t* profiling_infos);
/** @} */ // end of infer_request
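
A sketch of the asynchronous flow with a completion callback (`on_completion`, `run_async`, and the `done` flag are hypothetical names, not part of the API):

```c
// Hypothetical completion handler, invoked when the request finishes.
static void OPENVINO_C_API_CALLBACK on_completion(void* args) {
    int* done = (int*)args;
    *done = 1;
}

static ov_status_e run_async(ov_infer_request_t* infer_request) {
    int done = 0;
    ov_callback_t callback = {on_completion, &done};
    ov_infer_request_set_callback(infer_request, &callback);
    ov_infer_request_start_async(infer_request);
    return ov_infer_request_wait(infer_request);  // blocks until the result is ready
}
```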


@ -0,0 +1,47 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for ov_layout C API
*
* @file ov_layout.h
*/
#pragma once
#include "openvino/c/ov_common.h"
typedef struct ov_layout ov_layout_t;
// Layout
/**
* @defgroup layout layout
* @ingroup openvino_c
* Set of functions representing a Layout.
* @{
*/
/**
* @brief Create a layout object.
* @ingroup layout
* @param layout A pointer to the newly created ov_layout_t.
* @param layout_desc A string description of the layout, for example "NCHW".
* @return ov_status_e a status code; OK if successful.
*/
OPENVINO_C_API(ov_status_e) ov_layout_create(ov_layout_t** layout, const char* layout_desc);
/**
* @brief Free layout object.
* @ingroup layout
* @param layout The layout object to release.
*/
OPENVINO_C_API(void) ov_layout_free(ov_layout_t* layout);
/**
* @brief Convert layout object to a readable string.
* @ingroup layout
* @param layout The layout object to convert.
* @return string that describes the layout content.
*/
OPENVINO_C_API(const char*) ov_layout_to_string(ov_layout_t* layout);
/** @} */ // end of Layout
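
A short usage sketch of the layout API (error handling elided; whether the string returned by ov_layout_to_string must later be released with ov_free is not specified here):

```c
ov_layout_t* layout = NULL;
if (ov_layout_create(&layout, "NCHW") == OK) {
    printf("layout: %s\n", ov_layout_to_string(layout));  // expected to print "NCHW"
    ov_layout_free(layout);
}
```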


@ -0,0 +1,148 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for ov_model C API, which is a C wrapper for ov::Model class.
* A user-defined model.
* @file ov_model.h
*/
#pragma once
#include "openvino/c/ov_common.h"
#include "openvino/c/ov_node.h"
#include "openvino/c/ov_partial_shape.h"
typedef struct ov_model ov_model_t;
// Model
/**
* @defgroup model model
* @ingroup openvino_c
* Set of functions representing Model and Node.
* @{
*/
/**
* @brief Release the memory allocated by ov_model_t.
* @ingroup model
* @param model A pointer to the ov_model_t to free memory.
*/
OPENVINO_C_API(void) ov_model_free(ov_model_t* model);
/**
* @brief Get the outputs of ov_model_t.
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param output_nodes A pointer to the ov_output_nodes.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_model_outputs(const ov_model_t* model, ov_output_node_list_t* output_nodes);
/**
* @brief Get the inputs of ov_model_t.
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param input_nodes A pointer to the ov_input_nodes.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_model_inputs(const ov_model_t* model, ov_output_node_list_t* input_nodes);
/**
* @brief Get an input of ov_model_t by tensor name.
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param tensor_name input tensor name (char *).
* @param input_node A pointer to the ov_output_const_node_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_model_input_by_name(const ov_model_t* model, const char* tensor_name, ov_output_const_node_t** input_node);
/**
* @brief Get an input of ov_model_t by index.
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param index input tensor index.
* @param input_node A pointer to the ov_output_const_node_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_model_input_by_index(const ov_model_t* model, const size_t index, ov_output_const_node_t** input_node);
/**
* @brief Returns true if any operation in the model has a dynamic (partial) shape.
* @param model A pointer to the ov_model_t.
*/
OPENVINO_C_API(bool) ov_model_is_dynamic(const ov_model_t* model);
/**
* @brief Do reshape in model with a list of <name, partial shape>.
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param tensor_names input tensor name (char *) list.
* @param partial_shapes A list of partial shapes.
* @param cnt The item count in the list.
*/
OPENVINO_C_API(ov_status_e)
ov_model_reshape(const ov_model_t* model,
const char* tensor_names[],
const ov_partial_shape_t* partial_shapes[],
size_t cnt);
/**
* @brief Do reshape in model with partial shape for a specified name.
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param tensor_name input tensor name (char *).
* @param partial_shape A partial shape.
*/
OPENVINO_C_API(ov_status_e)
ov_model_reshape_input_by_name(const ov_model_t* model,
const char* tensor_name,
const ov_partial_shape_t* partial_shape);
/**
* @brief Do reshape in model for one node(port 0).
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param partial_shape A partial shape.
*/
OPENVINO_C_API(ov_status_e)
ov_model_reshape_one_input(const ov_model_t* model, const ov_partial_shape_t* partial_shape);
/**
* @brief Do reshape in model with a list of <port id, partial shape>.
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param ports The port list.
* @param partial_shape A list of partial shapes.
* @param cnt The item count in the list.
*/
OPENVINO_C_API(ov_status_e)
ov_model_reshape_by_ports(const ov_model_t* model, size_t* ports, const ov_partial_shape_t** partial_shape, size_t cnt);
/**
* @brief Do reshape in model with a list of <ov_output_node_t, partial shape>.
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param output_nodes The ov_output_node_t list.
* @param partial_shapes A list of partial shapes.
* @param cnt The item count in the list.
*/
OPENVINO_C_API(ov_status_e)
ov_model_reshape_by_nodes(const ov_model_t* model,
const ov_output_node_t* output_nodes[],
const ov_partial_shape_t* partial_shapes[],
size_t cnt);
/**
* @brief Gets the friendly name for a model.
* @ingroup model
* @param model A pointer to the ov_model_t.
* @param friendly_name the model's friendly name.
*/
OPENVINO_C_API(ov_status_e) ov_model_get_friendly_name(const ov_model_t* model, char** friendly_name);
/** @} */ // end of Model
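
A sketch of reshaping one named input (the tensor name "data" and the `partial_shape` object are hypothetical placeholders):

```c
const char* tensor_names[] = {"data"};  // hypothetical input tensor name
const ov_partial_shape_t* partial_shapes[] = {partial_shape};

if (ov_model_is_dynamic(model)) {
    // Fix the dynamic input to a concrete partial shape before compilation.
    ov_model_reshape(model, tensor_names, partial_shapes, 1);
}
```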


@ -0,0 +1,104 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for ov_model C API, which is a C wrapper for ov::Node class.
*
* @file ov_node.h
*/
#pragma once
#include "openvino/c/ov_common.h"
#include "openvino/c/ov_partial_shape.h"
#include "openvino/c/ov_shape.h"
typedef struct ov_output_const_node ov_output_const_node_t;
typedef struct ov_output_node ov_output_node_t;
/**
* @struct ov_output_node_list_t
* @brief Represents an array of output nodes.
*/
typedef struct {
ov_output_const_node_t* output_nodes;
size_t size;
} ov_output_node_list_t;
// Node
/**
* @defgroup node node
* @ingroup openvino_c
* Set of functions representing Model and Node.
* @{
*/
/**
* @brief Get the shape of ov_output_node.
* @ingroup node
* @param node A pointer to the ov_output_const_node_t.
* @param tensor_shape tensor shape.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_node_get_shape(ov_output_const_node_t* node, ov_shape_t* tensor_shape);
/**
* @brief Get the tensor name of ov_output_node list by index.
* @ingroup node
* @param nodes A pointer to the ov_output_node_list_t.
* @param idx Index of the input tensor
* @param tensor_name A pointer to the tensor name.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_node_list_get_any_name_by_index(ov_output_node_list_t* nodes, size_t idx, char** tensor_name);
/**
* @brief Get the shape of ov_output_node.
* @ingroup node
* @param nodes A pointer to the ov_output_node_list_t.
* @param idx Index of the input tensor
* @param shape The returned tensor shape.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_node_list_get_shape_by_index(ov_output_node_list_t* nodes, size_t idx, ov_shape_t* shape);
/**
* @brief Get the partial shape of ov_output_node.
* @ingroup node
* @param nodes A pointer to the ov_output_node_list_t.
* @param idx Index of the input tensor
* @param partial_shape The returned partial shape.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_node_list_get_partial_shape_by_index(ov_output_node_list_t* nodes, size_t idx, ov_partial_shape_t** partial_shape);
/**
* @brief Get the tensor type of ov_output_node.
* @ingroup node
* @param nodes A pointer to the ov_output_node_list_t.
* @param idx Index of the input tensor
* @param tensor_type tensor type.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_node_list_get_element_type_by_index(ov_output_node_list_t* nodes, size_t idx, ov_element_type_e* tensor_type);
/**
* @brief free ov_output_node_list_t
* @ingroup node
* @param output_nodes The pointer to the instance of the ov_output_node_list_t to free.
*/
OPENVINO_C_API(void) ov_output_node_list_free(ov_output_node_list_t* output_nodes);
/**
* @brief free ov_output_const_node_t
* @ingroup node
* @param output_node The pointer to the instance of the ov_output_const_node_t to free.
*/
OPENVINO_C_API(void) ov_output_node_free(ov_output_const_node_t* output_node);
/** @} */ // end of Node
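
Combined with ov_model_outputs from ov_model.h, walking the node list might look like this sketch (status checks elided; `model` is assumed to exist):

```c
ov_output_node_list_t outputs = {NULL, 0};
ov_model_outputs(model, &outputs);

for (size_t i = 0; i < outputs.size; ++i) {
    char* name = NULL;
    ov_element_type_e type;
    ov_node_list_get_any_name_by_index(&outputs, i, &name);
    ov_node_list_get_element_type_by_index(&outputs, i, &type);
    printf("output %zu: %s (element type %d)\n", i, name, (int)type);
    ov_free(name);
}
ov_output_node_list_free(&outputs);
```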


@ -0,0 +1,74 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for partial shape C API, which is a C wrapper for ov::PartialShape class.
*
* @file ov_partial_shape.h
*/
#pragma once
#include "openvino/c/ov_common.h"
#include "openvino/c/ov_dimension.h"
#include "openvino/c/ov_layout.h"
#include "openvino/c/ov_rank.h"
#include "openvino/c/ov_shape.h"
typedef struct ov_partial_shape ov_partial_shape_t;
// PartialShape
/**
* @defgroup partial_shape partial_shape
* @ingroup openvino_c
* Set of functions representing PartialShape.
* @{
*/
/**
* @brief Create a partial shape and initialize it with rank and dimensions.
* @ingroup partial_shape
* @param partial_shape_obj A pointer to the newly created ov_partial_shape_t.
* @param rank The rank; supports both dynamic and static rank.
* @param dims The dimensions; support both dynamic and static dimensions.
* Dynamic rank:
* Example: "?"
* Static rank, but dynamic dimensions on some or all axes.
* Examples: "{1,2,?,4}" or "{?,?,?}" or "{1,2,-1,4}""
* Static rank, and static dimensions on all axes.
* Examples: "{1,2,3,4}" or "{6}" or "{}""
*
* @return ov_status_e a status code.
*/
OPENVINO_C_API(ov_status_e)
ov_partial_shape_create(ov_partial_shape_t** partial_shape_obj, ov_rank_t* rank, ov_dimensions_t* dims);
/**
* @brief Convert the partial shape to a readable string.
* @ingroup partial_shape
* @return A string describing the partial shape.
*/
OPENVINO_C_API(const char*) ov_partial_shape_to_string(ov_partial_shape_t* partial_shape);
/**
* @brief Release partial shape.
* @ingroup partial_shape
* @param partial_shape will be released.
*/
OPENVINO_C_API(void) ov_partial_shape_free(ov_partial_shape_t* partial_shape);
/**
* @brief Convert partial shape to static shape.
* @ingroup partial_shape
* @return ov_status_e a status code; OK if successful.
*/
OPENVINO_C_API(ov_status_e) ov_partial_shape_to_shape(ov_partial_shape_t* partial_shape, ov_shape_t* shape);
/**
* @brief Convert shape to partial shape.
* @ingroup partial_shape
* @return ov_status_e a status code; OK if successful.
*/
OPENVINO_C_API(ov_status_e) ov_shape_to_partial_shape(ov_shape_t* shape, ov_partial_shape_t** partial_shape);
/** @} */ // end of partial_shape
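
Tying this to the dimension API above, creating and printing a partial shape could look like this sketch (the `rank` object comes from ov_rank.h, which is not shown in this diff, so its construction is assumed; releasing the string via ov_free is also an assumption):

```c
ov_partial_shape_t* partial_shape = NULL;
if (ov_partial_shape_create(&partial_shape, rank, dims) == OK) {
    const char* str = ov_partial_shape_to_string(partial_shape);
    printf("partial shape: %s\n", str);  // e.g. "{1,3,?,224..448}"
    ov_free(str);  // assumption: string ownership passes to the caller
    ov_partial_shape_free(partial_shape);
}
```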


@ -0,0 +1,354 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for prepostprocess C API, which is a C wrapper for ov::preprocess class.
* Main class for adding pre- and post- processing steps to existing ov::Model
* @file ov_prepostprocess.h
*/
#pragma once
#include "openvino/c/ov_common.h"
#include "openvino/c/ov_layout.h"
#include "openvino/c/ov_model.h"
#include "openvino/c/ov_tensor.h"
typedef struct ov_preprocess_prepostprocessor ov_preprocess_prepostprocessor_t;
typedef struct ov_preprocess_inputinfo ov_preprocess_inputinfo_t;
typedef struct ov_preprocess_inputtensorinfo ov_preprocess_inputtensorinfo_t;
typedef struct ov_preprocess_outputinfo ov_preprocess_outputinfo_t;
typedef struct ov_preprocess_outputtensorinfo ov_preprocess_outputtensorinfo_t;
typedef struct ov_preprocess_inputmodelinfo ov_preprocess_inputmodelinfo_t;
typedef struct ov_preprocess_preprocesssteps ov_preprocess_preprocesssteps_t;
/**
* @enum ov_color_format_e
* @brief This enum contains enumerations for color format.
*/
typedef enum {
UNDEFINE = 0U,      //!< Undefined color format
NV12_SINGLE_PLANE, //!< Image in NV12 format as single tensor
NV12_TWO_PLANES, //!< Image in NV12 format represented as separate tensors for Y and UV planes.
I420_SINGLE_PLANE, //!< Image in I420 (YUV) format as single tensor
I420_THREE_PLANES, //!< Image in I420 format represented as separate tensors for Y, U and V planes.
RGB, //!< Image in RGB interleaved format (3 channels)
BGR, //!< Image in BGR interleaved format (3 channels)
RGBX, //!< Image in RGBX interleaved format (4 channels)
BGRX //!< Image in BGRX interleaved format (4 channels)
} ov_color_format_e;
/**
* @enum ov_preprocess_resizealgorithm_e
* @brief This enum contains codes for all preprocess resize algorithms.
*/
typedef enum {
RESIZE_LINEAR, //!< linear algorithm
RESIZE_CUBIC, //!< cubic algorithm
RESIZE_NEAREST //!< nearest algorithm
} ov_preprocess_resizealgorithm_e;
// prepostprocess
/**
* @defgroup prepostprocess prepostprocess
* @ingroup openvino_c
* Set of functions representing PrePostProcess.
* @{
*/
/**
* @brief Create a ov_preprocess_prepostprocessor_t instance.
* @ingroup prepostprocess
* @param model A pointer to the ov_model_t.
* @param preprocess A pointer to the ov_preprocess_prepostprocessor_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_create(const ov_model_t* model, ov_preprocess_prepostprocessor_t** preprocess);
/**
* @brief Release the memory allocated by ov_preprocess_prepostprocessor_t.
* @ingroup prepostprocess
* @param preprocess A pointer to the ov_preprocess_prepostprocessor_t to free memory.
*/
OPENVINO_C_API(void) ov_preprocess_prepostprocessor_free(ov_preprocess_prepostprocessor_t* preprocess);
/**
* @brief Get the input info of ov_preprocess_prepostprocessor_t instance.
* @ingroup prepostprocess
* @param preprocess A pointer to the ov_preprocess_prepostprocessor_t.
* @param preprocess_input_info A pointer to the ov_preprocess_inputinfo_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_input(const ov_preprocess_prepostprocessor_t* preprocess,
ov_preprocess_inputinfo_t** preprocess_input_info);
/**
* @brief Get the input info of ov_preprocess_prepostprocessor_t instance by tensor name.
* @ingroup prepostprocess
* @param preprocess A pointer to the ov_preprocess_prepostprocessor_t.
* @param tensor_name The name of input.
* @param preprocess_input_info A pointer to the ov_preprocess_inputinfo_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_input_by_name(const ov_preprocess_prepostprocessor_t* preprocess,
const char* tensor_name,
ov_preprocess_inputinfo_t** preprocess_input_info);
/**
* @brief Get the input info of ov_preprocess_prepostprocessor_t instance by index.
* @ingroup prepostprocess
* @param preprocess A pointer to the ov_preprocess_prepostprocessor_t.
* @param tensor_index The index of the input.
* @param preprocess_input_info A pointer to the ov_preprocess_inputinfo_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_input_by_index(const ov_preprocess_prepostprocessor_t* preprocess,
const size_t tensor_index,
ov_preprocess_inputinfo_t** preprocess_input_info);
/**
* @brief Release the memory allocated by ov_preprocess_inputinfo_t.
* @ingroup prepostprocess
* @param preprocess_input_info A pointer to the ov_preprocess_inputinfo_t to free memory.
*/
OPENVINO_C_API(void) ov_preprocess_inputinfo_free(ov_preprocess_inputinfo_t* preprocess_input_info);
/**
 * @brief Get an ov_preprocess_inputtensorinfo_t.
* @ingroup prepostprocess
* @param preprocess_input_info A pointer to the ov_preprocess_inputinfo_t.
* @param preprocess_input_tensor_info A pointer to ov_preprocess_inputtensorinfo_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_inputinfo_tensor(const ov_preprocess_inputinfo_t* preprocess_input_info,
ov_preprocess_inputtensorinfo_t** preprocess_input_tensor_info);
/**
* @brief Release the memory allocated by ov_preprocess_inputtensorinfo_t.
* @ingroup prepostprocess
* @param preprocess_input_tensor_info A pointer to the ov_preprocess_inputtensorinfo_t to free memory.
*/
OPENVINO_C_API(void)
ov_preprocess_inputtensorinfo_free(ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info);
/**
 * @brief Get an ov_preprocess_preprocesssteps_t.
 * @ingroup prepostprocess
 * @param preprocess_input_info A pointer to the ov_preprocess_inputinfo_t.
 * @param preprocess_input_steps A pointer to ov_preprocess_preprocesssteps_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_inputinfo_preprocess(const ov_preprocess_inputinfo_t* preprocess_input_info,
ov_preprocess_preprocesssteps_t** preprocess_input_steps);
/**
* @brief Release the memory allocated by ov_preprocess_preprocesssteps_t.
* @ingroup prepostprocess
 * @param preprocess_input_process_steps A pointer to the ov_preprocess_preprocesssteps_t to free memory.
*/
OPENVINO_C_API(void)
ov_preprocess_preprocesssteps_free(ov_preprocess_preprocesssteps_t* preprocess_input_process_steps);
/**
 * @brief Add a resize operation to the model's dimensions.
 * @ingroup prepostprocess
 * @param preprocess_input_process_steps A pointer to ov_preprocess_preprocesssteps_t.
 * @param resize_algorithm An ov_preprocess_resizealgorithm_e instance.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_preprocesssteps_resize(ov_preprocess_preprocesssteps_t* preprocess_input_process_steps,
const ov_preprocess_resizealgorithm_e resize_algorithm);
/**
 * @brief Set ov_preprocess_inputtensorinfo_t precision.
 * @ingroup prepostprocess
 * @param preprocess_input_tensor_info A pointer to the ov_preprocess_inputtensorinfo_t.
 * @param element_type The element type of the input tensor.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_inputtensorinfo_set_element_type(ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
const ov_element_type_e element_type);
/**
* @brief Set ov_preprocess_inputtensorinfo_t color format.
* @ingroup prepostprocess
* @param preprocess_input_tensor_info A pointer to the ov_preprocess_inputtensorinfo_t.
 * @param colorFormat The color format of the input tensor.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_inputtensorinfo_set_color_format(ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
const ov_color_format_e colorFormat);
/**
* @brief Set ov_preprocess_inputtensorinfo_t spatial_static_shape.
* @ingroup prepostprocess
* @param preprocess_input_tensor_info A pointer to the ov_preprocess_inputtensorinfo_t.
* @param input_height The height of input
* @param input_width The width of input
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_inputtensorinfo_set_spatial_static_shape(ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
const size_t input_height,
const size_t input_width);
/**
 * @brief Add a convert-element-type preprocess step.
 * @ingroup prepostprocess
 * @param preprocess_input_process_steps A pointer to the ov_preprocess_preprocesssteps_t.
 * @param element_type The desired element type of the input.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_preprocesssteps_convert_element_type(ov_preprocess_preprocesssteps_t* preprocess_input_process_steps,
const ov_element_type_e element_type);
/**
 * @brief Add a convert-color preprocess step.
 * @ingroup prepostprocess
 * @param preprocess_input_process_steps A pointer to the ov_preprocess_preprocesssteps_t.
 * @param colorFormat The desired color format of the input.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_preprocesssteps_convert_color(ov_preprocess_preprocesssteps_t* preprocess_input_process_steps,
const ov_color_format_e colorFormat);
/**
 * @brief Helper function to reuse the element type and shape from a user-created tensor.
 * @ingroup prepostprocess
 * @param preprocess_input_tensor_info A pointer to the ov_preprocess_inputtensorinfo_t.
 * @param tensor A pointer to the ov_tensor_t.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_inputtensorinfo_set_from(ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
const ov_tensor_t* tensor);
/**
* @brief Set ov_preprocess_inputtensorinfo_t layout.
* @ingroup prepostprocess
* @param preprocess_input_tensor_info A pointer to the ov_preprocess_inputtensorinfo_t.
 * @param layout A pointer to the ov_layout_t.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_inputtensorinfo_set_layout(ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
ov_layout_t* layout);
/**
 * @brief Get the output info of ov_preprocess_prepostprocessor_t instance (the model is expected to have a single output).
* @ingroup prepostprocess
* @param preprocess A pointer to the ov_preprocess_prepostprocessor_t.
* @param preprocess_output_info A pointer to the ov_preprocess_outputinfo_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_output(const ov_preprocess_prepostprocessor_t* preprocess,
ov_preprocess_outputinfo_t** preprocess_output_info);
/**
 * @brief Get the output info of ov_preprocess_prepostprocessor_t instance by index.
* @ingroup prepostprocess
* @param preprocess A pointer to the ov_preprocess_prepostprocessor_t.
* @param tensor_index The tensor index
* @param preprocess_output_info A pointer to the ov_preprocess_outputinfo_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_output_by_index(const ov_preprocess_prepostprocessor_t* preprocess,
const size_t tensor_index,
ov_preprocess_outputinfo_t** preprocess_output_info);
/**
 * @brief Get the output info of ov_preprocess_prepostprocessor_t instance by name.
 * @ingroup prepostprocess
 * @param preprocess A pointer to the ov_preprocess_prepostprocessor_t.
 * @param tensor_name The name of the output.
* @param preprocess_output_info A pointer to the ov_preprocess_outputinfo_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_output_by_name(const ov_preprocess_prepostprocessor_t* preprocess,
const char* tensor_name,
ov_preprocess_outputinfo_t** preprocess_output_info);
/**
* @brief Release the memory allocated by ov_preprocess_outputinfo_t.
* @ingroup prepostprocess
* @param preprocess_output_info A pointer to the ov_preprocess_outputinfo_t to free memory.
*/
OPENVINO_C_API(void) ov_preprocess_outputinfo_free(ov_preprocess_outputinfo_t* preprocess_output_info);
/**
 * @brief Get an ov_preprocess_outputtensorinfo_t.
* @ingroup prepostprocess
* @param preprocess_output_info A pointer to the ov_preprocess_outputinfo_t.
* @param preprocess_output_tensor_info A pointer to the ov_preprocess_outputtensorinfo_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_outputinfo_tensor(ov_preprocess_outputinfo_t* preprocess_output_info,
ov_preprocess_outputtensorinfo_t** preprocess_output_tensor_info);
/**
* @brief Release the memory allocated by ov_preprocess_outputtensorinfo_t.
* @ingroup prepostprocess
* @param preprocess_output_tensor_info A pointer to the ov_preprocess_outputtensorinfo_t to free memory.
*/
OPENVINO_C_API(void)
ov_preprocess_outputtensorinfo_free(ov_preprocess_outputtensorinfo_t* preprocess_output_tensor_info);
/**
 * @brief Set ov_preprocess_outputtensorinfo_t precision.
 * @ingroup prepostprocess
 * @param preprocess_output_tensor_info A pointer to the ov_preprocess_outputtensorinfo_t.
 * @param element_type The element type of the output tensor.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_output_set_element_type(ov_preprocess_outputtensorinfo_t* preprocess_output_tensor_info,
const ov_element_type_e element_type);
/**
* @brief Get current input model information.
* @ingroup prepostprocess
* @param preprocess_input_info A pointer to the ov_preprocess_inputinfo_t.
* @param preprocess_input_model_info A pointer to the ov_preprocess_inputmodelinfo_t
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_inputinfo_model(ov_preprocess_inputinfo_t* preprocess_input_info,
ov_preprocess_inputmodelinfo_t** preprocess_input_model_info);
/**
* @brief Release the memory allocated by ov_preprocess_inputmodelinfo_t.
* @ingroup prepostprocess
* @param preprocess_input_model_info A pointer to the ov_preprocess_inputmodelinfo_t to free memory.
*/
OPENVINO_C_API(void) ov_preprocess_inputmodelinfo_free(ov_preprocess_inputmodelinfo_t* preprocess_input_model_info);
/**
* @brief Set layout for model's input tensor.
* @ingroup prepostprocess
 * @param preprocess_input_model_info A pointer to the ov_preprocess_inputmodelinfo_t.
 * @param layout A pointer to the ov_layout_t.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_inputmodelinfo_set_layout(ov_preprocess_inputmodelinfo_t* preprocess_input_model_info,
ov_layout_t* layout);
/**
 * @brief Adds the pre/post-processing operations to the model passed in the constructor.
* @ingroup prepostprocess
* @param preprocess A pointer to the ov_preprocess_prepostprocessor_t.
* @param model A pointer to the ov_model_t.
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_preprocess_prepostprocessor_build(const ov_preprocess_prepostprocessor_t* preprocess, ov_model_t** model);
/** @} */ // end of prepostprocess
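To make the call sequence above concrete, here is a minimal, hedged usage sketch in C. It assumes the declarations above (plus ov_layout_create() from ov_layout.h) are in scope, that U8 is a member of ov_element_type_e, and that the model has a single input; error handling is collapsed into one status chain:

    static ov_status_e setup_preprocess(ov_model_t* model, ov_model_t** new_model) {
        ov_preprocess_prepostprocessor_t* preprocess = NULL;
        ov_preprocess_inputinfo_t* input_info = NULL;
        ov_preprocess_inputtensorinfo_t* input_tensor_info = NULL;
        ov_preprocess_preprocesssteps_t* steps = NULL;
        ov_layout_t* layout = NULL;

        /* Declare that the application feeds U8/NHWC images and let the
         * preprocessor insert a linear resize to the model's input size. */
        ov_status_e s = ov_preprocess_prepostprocessor_create(model, &preprocess);
        if (s == OK) s = ov_preprocess_prepostprocessor_input(preprocess, &input_info);
        if (s == OK) s = ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info);
        if (s == OK) s = ov_preprocess_inputtensorinfo_set_element_type(input_tensor_info, U8); /* U8: assumed enumerator */
        if (s == OK) s = ov_layout_create(&layout, "NHWC");
        if (s == OK) s = ov_preprocess_inputtensorinfo_set_layout(input_tensor_info, layout);
        if (s == OK) s = ov_preprocess_inputinfo_preprocess(input_info, &steps);
        if (s == OK) s = ov_preprocess_preprocesssteps_resize(steps, RESIZE_LINEAR);
        if (s == OK) s = ov_preprocess_prepostprocessor_build(preprocess, new_model);

        /* Intermediate objects can be released once the new model is built.
         * ov_layout_free(layout) is assumed to exist as the counterpart of ov_layout_create(). */
        ov_preprocess_preprocesssteps_free(steps);
        ov_preprocess_inputtensorinfo_free(input_tensor_info);
        ov_preprocess_inputinfo_free(input_info);
        ov_preprocess_prepostprocessor_free(preprocess);
        return s;
    }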


@ -0,0 +1,150 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is header file for ov_property C API.
* A header for advanced hardware specific properties for OpenVINO runtime devices.
* To use in set_property, compile_model, import_model, get_property methods.
* @file ov_property.h
*/
#pragma once
#include "openvino/c/ov_common.h"
typedef struct ov_property ov_property_t;
/**
* @enum ov_performance_mode_e
 * @brief Enum to define possible performance mode hints.
 * This represents OpenVINO 2.0 ov::hint::PerformanceMode entity.
*
*/
typedef enum {
UNDEFINED_MODE = -1, //!< Undefined value, performance setting may vary from device to device
LATENCY = 1, //!< Optimize for latency
THROUGHPUT = 2, //!< Optimize for throughput
CUMULATIVE_THROUGHPUT = 3, //!< Optimize for cumulative throughput
} ov_performance_mode_e;
/**
* @enum ov_affinity_e
* @brief Enum to define possible affinity patterns
*/
typedef enum {
NONE = -1, //!< Disable threads affinity pinning
CORE = 0, //!< Pin threads to cores, best for static benchmarks
    NUMA = 1,  //!< Pin threads to NUMA nodes, best for real-life, contended cases. On Windows and macOS this
               //!< option behaves as CORE
    HYBRID_AWARE = 2,  //!< Let the runtime do the pinning to the core types, e.g. prefer the "big" cores for
                       //!< latency tasks. On hybrid CPUs this option is the default
} ov_affinity_e;
/**
 * @enum ov_property_key_e
 * @brief Represents all available property keys.
*/
typedef enum {
SUPPORTED_PROPERTIES = 0U, //!< Read-only property<char *> to get a string list of supported read-only properties.
AVAILABLE_DEVICES, //!< Read-only property<char *> to get a list of available device IDs
    OPTIMAL_NUMBER_OF_INFER_REQUESTS,  //!< Read-only property<uint32_t> to get an unsigned integer value of the
                                       //!< optimal number of compiled model infer requests.
RANGE_FOR_ASYNC_INFER_REQUESTS, //!< Read-only property<unsigned int, unsigned int, unsigned int> to provide a
//!< hint for a range for number of async infer requests. If device supports
//!< streams, the metric provides range for number of IRs per stream.
RANGE_FOR_STREAMS, //!< Read-only property<unsigned int, unsigned int> to provide information about a range for
//!< streams on platforms where streams are supported
FULL_DEVICE_NAME, //!< Read-only property<char *> to get a string value representing a full device name.
OPTIMIZATION_CAPABILITIES, //!< Read-only property<char *> to get a string list of capabilities options per
//!< device.
CACHE_DIR, //!< Read-write property<char *> to set/get the directory which will be used to store any data cached
//!< by plugins.
NUM_STREAMS, //!< Read-write property<uint32_t> to set/get the number of executor logical partitions
AFFINITY, //!< Read-write property<ov_affinity_e> to set/get the name for setting CPU affinity per thread option.
INFERENCE_NUM_THREADS, //!< Read-write property<int32_t> to set/get the maximum number of threads that can be used
//!< for inference tasks.
    PERFORMANCE_HINT,  //!< Read-write property<ov_performance_mode_e>: high-level OpenVINO Performance Hints.
                       //!< Unlike low-level properties that are individual (per-device), hints are something
                       //!< every device accepts and turns into device-specific settings. See
                       //!< ov_performance_mode_e for the possible hint values
    NETWORK_NAME,  //!< Read-only property<char *> to get the name of a model
INFERENCE_PRECISION_HINT, //!< Read-write property<ov_element_type_e> to set the hint for device to use specified
//!< precision for inference
    OPTIMAL_BATCH_SIZE,  //!< Read-only property<uint32_t> to query the optimal batch size for the given device
                         //!< and the network
MAX_BATCH_SIZE, //!< Read-only property to get maximum batch size which does not cause performance degradation due
//!< to memory swap impact.
    PERFORMANCE_HINT_NUM_REQUESTS,  //!< (Optional) property<uint32_t> that backs the Performance Hints by giving
                                    //!< additional information on how many inference requests the application will
                                    //!< be keeping in flight; usually this value comes from the actual use case
                                    //!< (e.g. the number of video cameras or other input sources)
} ov_property_key_e;
/**
* @enum ov_property_value_type_e
* @brief Enum to define property value type.
*/
typedef enum {
BOOL = 0U, //!< boolean data
CHAR, //!< char data
ENUM, //!< enum data
INT32, //!< int32 data
UINT32, //!< uint32 data
FLOAT, //!< float data
} ov_property_value_type_e;
/**
* @struct ov_property_value_t
 * @brief Represents a property value
*/
typedef struct {
void* ptr;
size_t cnt;
ov_property_value_type_e type;
} ov_property_value_t;
// Property
/**
 * @defgroup property Property
* @ingroup openvino_c
 * Set of functions representing Property.
* @{
*/
/**
* @brief Create a property object.
* @ingroup property
 * @param property A pointer to the newly created ov_property_t.
 * @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_property_create(ov_property_t** property);
/**
* @brief Free property object.
* @ingroup property
 * @param property The ov_property_t to free.
*/
OPENVINO_C_API(void) ov_property_free(ov_property_t* property);
/**
* @brief Create a property value object.
* @ingroup property
 * @param value A pointer to the newly created ov_property_value_t.
 * @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_property_value_create(ov_property_value_t** value);
/**
* @brief Clean property data.
* @ingroup property
 * @param value The ov_property_value_t whose data will be cleaned.
*/
OPENVINO_C_API(void) ov_property_value_clean(ov_property_value_t* value);
/**
* @brief Put <key, value> into property object.
* @ingroup property
 * @param property The ov_property_t to which the new <key, value> pair is added.
 * @param key The property key.
 * @param value The property value.
 * @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_property_put(ov_property_t* property, ov_property_key_e key, ov_property_value_t* value);
/** @} */ // end of Property
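As a hedged usage sketch (the ownership contract for value->ptr after ov_property_put() is not documented above, so the cleanup shown is an assumption), a THROUGHPUT performance hint could be packed like this, assuming <stdlib.h> is included:

    static ov_status_e make_throughput_hint(ov_property_t** property) {
        ov_property_value_t* value = NULL;
        ov_status_e s = ov_property_create(property);
        if (s == OK) s = ov_property_value_create(&value);
        if (s == OK) {
            /* The payload is a heap-allocated enum; cnt and type describe it. */
            ov_performance_mode_e* mode = (ov_performance_mode_e*)malloc(sizeof(*mode));
            if (!mode)
                return GENERAL_ERROR;
            *mode = THROUGHPUT;
            value->ptr = mode;
            value->cnt = 1;
            value->type = ENUM;
            s = ov_property_put(*property, PERFORMANCE_HINT, value);
            /* ov_property_value_clean(value) presumably releases the payload later. */
        }
        return s;
    }

The resulting ov_property_t can then be handed to the set_property()/compile_model() style calls.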


@ -0,0 +1,49 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
 * @brief This is a header file for the ov_rank C API
*
* @file ov_rank.h
*/
#pragma once
#include "openvino/c/ov_common.h"
typedef struct ov_rank ov_rank_t;
// Rank
/**
* @defgroup rank rank
* @ingroup openvino_c
 * Set of functions representing rank.
* @{
*/
/**
 * @brief Create a static rank object.
 * @ingroup rank
 * @param rank A pointer to the newly created ov_rank_t.
 * @param rank_value The rank value for this object; it should not be less than 0 (>= 0).
 * @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_rank_create(ov_rank_t** rank, int64_t rank_value);
/**
 * @brief Create a dynamic rank object.
 * @ingroup rank
 * @param rank A pointer to the newly created ov_rank_t.
 * @param min_rank The lower inclusive limit for the rank.
 * @param max_rank The upper inclusive limit for the rank.
 * @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_rank_create_dynamic(ov_rank_t** rank, int64_t min_rank, int64_t max_rank);
/**
* @brief Release rank object.
* @ingroup rank
 * @param rank The ov_rank_t to free.
*/
OPENVINO_C_API(void) ov_rank_free(ov_rank_t* rank);
/** @} */ // end of Rank
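A short, hedged usage sketch with the declarations above in scope: a static rank of exactly 4 next to a dynamic rank bounded to [1, 8]:

    ov_rank_t* static_rank = NULL;
    ov_rank_t* dynamic_rank = NULL;
    if (ov_rank_create(&static_rank, 4) == OK &&
        ov_rank_create_dynamic(&dynamic_rank, 1, 8) == OK) {
        /* static_rank is exactly 4; dynamic_rank may be anything from 1 to 8. */
    }
    ov_rank_free(dynamic_rank);  /* both free() calls tolerate NULL */
    ov_rank_free(static_rank);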


@ -0,0 +1,37 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for ov_shape C API
*
* @file ov_shape.h
*/
#pragma once
#include "openvino/c/ov_common.h"
/**
* @struct ov_shape_t
 * @brief Represents a static shape.
*/
typedef struct {
int64_t rank;
int64_t* dims;
} ov_shape_t;
/**
 * @brief Init a shape object, allocating space for its dimensions.
 * @ingroup shape
 * @param shape A pointer to the ov_shape_t to initialize.
 * @param rank The rank value for this object; it should be greater than 0 (> 0).
 * @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_shape_init(ov_shape_t* shape, int64_t rank);
/**
 * @brief Free a shape object's internal memory.
 * @ingroup shape
 * @param shape A pointer to the ov_shape_t to deinitialize.
 * @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_shape_deinit(ov_shape_t* shape);
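Because ov_shape_t is a plain struct, the caller owns the struct itself and only the dims array is heap-managed by the init/deinit pair. A hedged sketch:

    ov_shape_t shape;
    if (ov_shape_init(&shape, 4) == OK) {  /* allocates shape.dims[0..3] */
        shape.dims[0] = 1;
        shape.dims[1] = 3;
        shape.dims[2] = 224;
        shape.dims[3] = 224;
        /* ... use the shape, e.g. to create a tensor ... */
        ov_shape_deinit(&shape);           /* frees shape.dims */
    }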


@ -0,0 +1,113 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
/**
* @brief This is a header file for ov_tensor C API, which is a wrapper for ov::Tensor class
* Tensor API holding host memory
* @file ov_tensor.h
*/
#pragma once
#include "openvino/c/ov_common.h"
#include "openvino/c/ov_partial_shape.h"
#include "openvino/c/ov_shape.h"
typedef struct ov_tensor ov_tensor_t;
// Tensor
/**
* @defgroup tensor tensor
* @ingroup openvino_c
 * Set of functions representing tensor.
* @{
*/
/**
 * @brief Constructs Tensor from element type, shape and user-provided host memory; no internal storage is allocated.
* @ingroup tensor
* @param type Tensor element type
* @param shape Tensor shape
* @param host_ptr Pointer to pre-allocated host memory
 * @param tensor A pointer to ov_tensor_t
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_tensor_create_from_host_ptr(const ov_element_type_e type,
const ov_shape_t shape,
void* host_ptr,
ov_tensor_t** tensor);
/**
* @brief Constructs Tensor using element type and shape. Allocate internal host storage using default allocator
* @ingroup tensor
* @param type Tensor element type
* @param shape Tensor shape
 * @param tensor A pointer to ov_tensor_t
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e)
ov_tensor_create(const ov_element_type_e type, const ov_shape_t shape, ov_tensor_t** tensor);
/**
 * @brief Set a new shape for the tensor, reallocating memory if the new total size is bigger than the previous one.
 * @ingroup tensor
 * @param shape Tensor shape
 * @param tensor A pointer to ov_tensor_t
*/
OPENVINO_C_API(ov_status_e) ov_tensor_set_shape(ov_tensor_t* tensor, const ov_shape_t shape);
/**
* @brief Get shape for tensor.
* @ingroup tensor
* @param shape Tensor shape
 * @param tensor A pointer to ov_tensor_t
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_tensor_get_shape(const ov_tensor_t* tensor, ov_shape_t* shape);
/**
* @brief Get type for tensor.
* @ingroup tensor
* @param type Tensor element type
 * @param tensor A pointer to ov_tensor_t
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_tensor_get_element_type(const ov_tensor_t* tensor, ov_element_type_e* type);
/**
 * @brief Get the total number of elements (a product of all the dims or 1 for scalar).
 * @ingroup tensor
 * @param elements_size The number of elements.
 * @param tensor A pointer to ov_tensor_t
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_tensor_get_size(const ov_tensor_t* tensor, size_t* elements_size);
/**
 * @brief Get the size of the current Tensor in bytes.
 * @ingroup tensor
 * @param byte_size The size of the current Tensor in bytes.
 * @param tensor A pointer to ov_tensor_t
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_tensor_get_byte_size(const ov_tensor_t* tensor, size_t* byte_size);
/**
 * @brief Provides access to the underlying host memory.
 * @ingroup tensor
 * @param data A pointer to the host memory.
 * @param tensor A pointer to ov_tensor_t
* @return Status code of the operation: OK(0) for success.
*/
OPENVINO_C_API(ov_status_e) ov_tensor_data(const ov_tensor_t* tensor, void** data);
/**
* @brief Free ov_tensor_t.
* @ingroup tensor
 * @param tensor A pointer to ov_tensor_t
*/
OPENVINO_C_API(void) ov_tensor_free(ov_tensor_t* tensor);
/** @} */ // end of tensor
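Putting shape and tensor together, a hedged sketch that wraps an existing image buffer (it assumes F32 is a member of ov_element_type_e and that the buffer outlives the tensor, since ov_tensor_create_from_host_ptr() does not copy the data):

    static ov_status_e wrap_buffer(float* buffer, ov_tensor_t** tensor) {
        ov_shape_t shape;
        ov_status_e s = ov_shape_init(&shape, 4);
        if (s != OK)
            return s;
        shape.dims[0] = 1;
        shape.dims[1] = 3;
        shape.dims[2] = 224;
        shape.dims[3] = 224;
        s = ov_tensor_create_from_host_ptr(F32, shape, buffer, tensor); /* F32: assumed enumerator */
        ov_shape_deinit(&shape); /* assumed safe: the tensor keeps its own dimension data */
        return s;
    }

On success, ov_tensor_get_byte_size() should report 1*3*224*224*sizeof(float), and the tensor is later released with ov_tensor_free().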


@ -1,11 +0,0 @@
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
project(InferenceEngine_C_API)
add_subdirectory(src)
if(ENABLE_TESTS)
add_subdirectory(tests)
endif()

File diff suppressed because it is too large


@ -1,56 +0,0 @@
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(TARGET_NAME openvino_ov_c)
file(GLOB SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp)
file(GLOB HEADERS ${InferenceEngine_C_API_SOURCE_DIR}/include/*.h)
# create library
add_library(${TARGET_NAME} ${HEADERS} ${SOURCES})
add_library(openvino::runtime::ov ALIAS ${TARGET_NAME})
target_link_libraries(${TARGET_NAME} PRIVATE openvino)
target_include_directories(${TARGET_NAME} PUBLIC
$<BUILD_INTERFACE:${InferenceEngine_C_API_SOURCE_DIR}/include>)
if(NOT BUILD_SHARED_LIBS)
target_compile_definitions(${TARGET_NAME} PUBLIC OPENVINO_STATIC_LIBRARY)
endif()
add_cpplint_target(${TARGET_NAME}_cpplint FOR_TARGETS ${TARGET_NAME})
set_target_properties(${TARGET_NAME} PROPERTIES INTERPROCEDURAL_OPTIMIZATION_RELEASE ${ENABLE_LTO})
ie_add_vs_version_file(NAME ${TARGET_NAME}
FILEDESCRIPTION "Inference Engine C API Core Runtime library")
# export
set_target_properties(${TARGET_NAME} PROPERTIES EXPORT_NAME runtime::ov)
ov_add_library_version(${TARGET_NAME})
export(TARGETS ${TARGET_NAME} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
# install
ie_cpack_add_component(${OV_CPACK_COMP_CORE_C} HIDDEN)
ie_cpack_add_component(${OV_CPACK_COMP_CORE_C_DEV} HIDDEN)
install(TARGETS ${TARGET_NAME} EXPORT OpenVINOTargets
RUNTIME DESTINATION ${OV_CPACK_RUNTIMEDIR} COMPONENT ${OV_CPACK_COMP_CORE_C}
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${OV_CPACK_COMP_CORE_C}
LIBRARY DESTINATION ${OV_CPACK_LIBRARYDIR} COMPONENT ${OV_CPACK_COMP_CORE_C}
NAMELINK_COMPONENT ${OV_CPACK_COMP_CORE_C_DEV}
# TODO: fix to proper location
INCLUDES DESTINATION ${OV_CPACK_INCLUDEDIR}/ie)
install(DIRECTORY ${InferenceEngine_C_API_SOURCE_DIR}/include/
# TODO: fix to proper location
DESTINATION ${OV_CPACK_INCLUDEDIR}/ie
COMPONENT ${OV_CPACK_COMP_CORE_C_DEV})

File diff suppressed because it is too large


@ -1,41 +0,0 @@
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(TARGET_NAME "OpenVinoCAPITests")
add_executable(${TARGET_NAME} ov_c_api_test.cpp test_model_repo.hpp)
target_link_libraries(${TARGET_NAME} PRIVATE openvino_ov_c commonTestUtils gtest_main)
target_include_directories(${TARGET_NAME} PUBLIC
$<BUILD_INTERFACE:${InferenceEngine_C_API_SOURCE_DIR}/include>)
target_compile_definitions(${TARGET_NAME}
PRIVATE
$<$<BOOL:${ENABLE_GAPI_PREPROCESSING}>:ENABLE_GAPI_PREPROCESSING>
DATA_PATH=\"${DATA_PATH}\"
MODELS_PATH=\"${MODELS_PATH}\" )
if(ENABLE_AUTO OR ENABLE_MULTI)
add_dependencies(${TARGET_NAME} openvino_auto_plugin)
endif()
if(ENABLE_AUTO_BATCH)
add_dependencies(${TARGET_NAME} openvino_auto_batch_plugin)
endif()
if(ENABLE_INTEL_CPU)
add_dependencies(${TARGET_NAME} openvino_intel_cpu_plugin)
endif()
if(ENABLE_INTEL_GPU)
add_dependencies(${TARGET_NAME} openvino_intel_gpu_plugin)
endif()
add_cpplint_target(${TARGET_NAME}_cpplint FOR_TARGETS ${TARGET_NAME})
install(TARGETS ${TARGET_NAME}
RUNTIME DESTINATION tests
COMPONENT tests
EXCLUDE_FROM_ALL)

File diff suppressed because it is too large


@ -1,53 +0,0 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
namespace TestDataHelpers {
static const char kPathSeparator =
#if defined _WIN32 || defined __CYGWIN__
'\\';
#else
'/';
#endif
std::string getModelPathNonFatal() noexcept {
if (const auto envVar = std::getenv("MODELS_PATH")) {
return envVar;
}
#ifdef MODELS_PATH
return MODELS_PATH;
#else
return "";
#endif
}
std::string get_models_path() {
return getModelPathNonFatal() + kPathSeparator + std::string("models");
};
std::string get_data_path() {
if (const auto envVar = std::getenv("DATA_PATH")) {
return envVar;
}
#ifdef DATA_PATH
return DATA_PATH;
#else
return "";
#endif
}
std::string generate_model_path(std::string dir, std::string filename) {
return get_models_path() + kPathSeparator + dir + kPathSeparator + filename;
}
std::string generate_image_path(std::string dir, std::string filename) {
return get_data_path() + kPathSeparator + "validation_set" + kPathSeparator + dir + kPathSeparator + filename;
}
std::string generate_ieclass_xml_path(std::string filename) {
return getModelPathNonFatal() + kPathSeparator + "ie_class" + kPathSeparator + filename;
}
} // namespace TestDataHelpers


@ -5,28 +5,27 @@
set(TARGET_NAME openvino_c)
file(GLOB SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/*.cpp)
file(GLOB HEADERS ${InferenceEngine_C_API_SOURCE_DIR}/include/*.h)
file(GLOB HEADERS ${OpenVINO_C_API_SOURCE_DIR}/include/*)
# create library
add_library(${TARGET_NAME} ${HEADERS} ${SOURCES})
add_library(openvino::runtime::c ALIAS ${TARGET_NAME})
target_link_libraries(${TARGET_NAME} PRIVATE openvino)
target_include_directories(${TARGET_NAME} PUBLIC
$<BUILD_INTERFACE:${InferenceEngine_C_API_SOURCE_DIR}/include>)
$<BUILD_INTERFACE:${OpenVINO_C_API_SOURCE_DIR}/include>)
if(NOT BUILD_SHARED_LIBS)
target_compile_definitions(${TARGET_NAME} PUBLIC OPENVINO_STATIC_LIBRARY)
endif()
add_cpplint_target(${TARGET_NAME}_cpplint FOR_TARGETS ${TARGET_NAME})
add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME})
set_target_properties(${TARGET_NAME} PROPERTIES INTERPROCEDURAL_OPTIMIZATION_RELEASE ${ENABLE_LTO})
ie_add_vs_version_file(NAME ${TARGET_NAME}
FILEDESCRIPTION "Inference Engine C API Core Runtime library")
FILEDESCRIPTION "OpenVINO C API Core Runtime library")
# export
@ -35,20 +34,23 @@ set_target_properties(${TARGET_NAME} PROPERTIES EXPORT_NAME runtime::c)
ov_add_library_version(${TARGET_NAME})
export(TARGETS ${TARGET_NAME} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
# install
ie_cpack_add_component(${OV_CPACK_COMP_CORE_C} HIDDEN)
ie_cpack_add_component(${OV_CPACK_COMP_CORE_C_DEV} HIDDEN)
install(TARGETS ${TARGET_NAME} EXPORT OpenVINOTargets
RUNTIME DESTINATION ${OV_CPACK_RUNTIMEDIR} COMPONENT ${OV_CPACK_COMP_CORE_C}
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${OV_CPACK_COMP_CORE_C}
LIBRARY DESTINATION ${OV_CPACK_LIBRARYDIR} COMPONENT ${OV_CPACK_COMP_CORE_C}
NAMELINK_COMPONENT ${OV_CPACK_COMP_CORE_C_DEV}
INCLUDES DESTINATION ${OV_CPACK_INCLUDEDIR}/ie)
RUNTIME DESTINATION ${OV_CPACK_RUNTIMEDIR} COMPONENT ${OV_CPACK_COMP_CORE_C}
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${OV_CPACK_COMP_CORE_C}
LIBRARY DESTINATION ${OV_CPACK_LIBRARYDIR} COMPONENT ${OV_CPACK_COMP_CORE_C}
NAMELINK_COMPONENT ${OV_CPACK_COMP_CORE_C_DEV}
INCLUDES DESTINATION ${OV_CPACK_INCLUDEDIR})
install(DIRECTORY ${InferenceEngine_C_API_SOURCE_DIR}/include/
DESTINATION ${OV_CPACK_INCLUDEDIR}/ie
COMPONENT ${OV_CPACK_COMP_CORE_C_DEV})
install(DIRECTORY ${OpenVINO_C_API_SOURCE_DIR}/include/c_api
DESTINATION ${OV_CPACK_INCLUDEDIR}/ie
COMPONENT ${OV_CPACK_COMP_CORE_C_DEV})
install(DIRECTORY ${OpenVINO_C_API_SOURCE_DIR}/include/openvino/
DESTINATION ${OV_CPACK_INCLUDEDIR}/openvino
COMPONENT ${OV_CPACK_COMP_CORE_C_DEV})

src/bindings/c/src/common.h

@ -0,0 +1,255 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <fstream>
#include <iterator>
#include <map>
#include <streambuf>
#include <string>
#include "openvino/openvino.hpp"
// TODO: we need to catch ov::Exception instead of ie::Exception
#include "details/ie_exception.hpp"
#define CATCH_OV_EXCEPTION(StatusCode, ExceptionType) \
catch (const InferenceEngine::ExceptionType&) { \
return ov_status_e::StatusCode; \
}
#define CATCH_OV_EXCEPTIONS \
CATCH_OV_EXCEPTION(GENERAL_ERROR, GeneralError) \
CATCH_OV_EXCEPTION(NOT_IMPLEMENTED, NotImplemented) \
CATCH_OV_EXCEPTION(NETWORK_NOT_LOADED, NetworkNotLoaded) \
CATCH_OV_EXCEPTION(PARAMETER_MISMATCH, ParameterMismatch) \
CATCH_OV_EXCEPTION(NOT_FOUND, NotFound) \
CATCH_OV_EXCEPTION(OUT_OF_BOUNDS, OutOfBounds) \
CATCH_OV_EXCEPTION(UNEXPECTED, Unexpected) \
CATCH_OV_EXCEPTION(REQUEST_BUSY, RequestBusy) \
CATCH_OV_EXCEPTION(RESULT_NOT_READY, ResultNotReady) \
CATCH_OV_EXCEPTION(NOT_ALLOCATED, NotAllocated) \
CATCH_OV_EXCEPTION(INFER_NOT_STARTED, InferNotStarted) \
CATCH_OV_EXCEPTION(NETWORK_NOT_READ, NetworkNotRead) \
CATCH_OV_EXCEPTION(INFER_CANCELLED, InferCancelled) \
catch (...) { \
return ov_status_e::UNEXPECTED; \
}
/**
* @struct ov_core
* @brief This struct represents OpenVINO Core entity.
*/
struct ov_core {
std::shared_ptr<ov::Core> object;
};
/**
* @struct ov_model
* @brief This is an interface of ov::Model
*/
struct ov_model {
std::shared_ptr<ov::Model> object;
};
/**
* @struct ov_output_const_node
* @brief This is an interface of ov::Output<const ov::Node>
*/
struct ov_output_const_node {
std::shared_ptr<ov::Output<const ov::Node>> object;
};
/**
* @struct ov_output_node
* @brief This is an interface of ov::Output<ov::Node>
*/
struct ov_output_node {
std::shared_ptr<ov::Output<ov::Node>> object;
};
/**
* @struct ov_property
* @brief This is an interface of property
*/
struct ov_property {
ov::AnyMap object;
};
/**
* @struct ov_compiled_model
* @brief This is an interface of ov::CompiledModel
*/
struct ov_compiled_model {
std::shared_ptr<ov::CompiledModel> object;
};
/**
* @struct ov_infer_request
* @brief This is an interface of ov::InferRequest
*/
struct ov_infer_request {
std::shared_ptr<ov::InferRequest> object;
};
/**
* @struct ov_layout
* @brief This is an interface of ov::Layout
*/
struct ov_layout {
ov::Layout object;
};
/**
* @struct ov_rank
* @brief This is an interface of ov::Dimension
*/
struct ov_rank {
ov::Dimension object;
};
/**
* @struct ov_dimension
* @brief This is an interface of ov::Dimension
*/
struct ov_dimension {
ov::Dimension object;
};
/**
* @struct ov_dimensions
* @brief This is an interface of std::vector<ov::Dimension>
*/
struct ov_dimensions {
std::vector<ov::Dimension> object;
};
/**
* @struct ov_partial_shape
* @brief It represents a shape that may be partially or totally dynamic.
* A PartialShape may have:
* Dynamic rank. (Informal notation: `?`)
* Static rank, but dynamic dimensions on some or all axes.
* (Informal notation examples: `{1,2,?,4}`, `{?,?,?}`)
* Static rank, and static dimensions on all axes.
* (Informal notation examples: `{1,2,3,4}`, `{6}`, `{}`)
*
 * An interface that lets the user initialize an ov_partial_shape_t
*/
struct ov_partial_shape {
ov::Dimension rank;
std::vector<ov::Dimension> dims;
};
/**
* @struct ov_tensor
* @brief This is an interface of ov_tensor
*/
struct ov_tensor {
std::shared_ptr<ov::Tensor> object;
};
/**
* @struct ov_preprocess_prepostprocessor
* @brief This is an interface of ov::preprocess::PrePostProcessor
*/
struct ov_preprocess_prepostprocessor {
std::shared_ptr<ov::preprocess::PrePostProcessor> object;
};
/**
* @struct ov_preprocess_inputinfo
* @brief This is an interface of ov::preprocess::InputInfo
*/
struct ov_preprocess_inputinfo {
ov::preprocess::InputInfo* object;
};
/**
* @struct ov_preprocess_inputtensorinfo
* @brief This is an interface of ov::preprocess::InputTensorInfo
*/
struct ov_preprocess_inputtensorinfo {
ov::preprocess::InputTensorInfo* object;
};
/**
* @struct ov_preprocess_outputinfo
* @brief This is an interface of ov::preprocess::OutputInfo
*/
struct ov_preprocess_outputinfo {
ov::preprocess::OutputInfo* object;
};
/**
* @struct ov_preprocess_outputtensorinfo
* @brief This is an interface of ov::preprocess::OutputTensorInfo
*/
struct ov_preprocess_outputtensorinfo {
ov::preprocess::OutputTensorInfo* object;
};
/**
* @struct ov_preprocess_inputmodelinfo
* @brief This is an interface of ov::preprocess::InputModelInfo
*/
struct ov_preprocess_inputmodelinfo {
ov::preprocess::InputModelInfo* object;
};
/**
* @struct ov_preprocess_preprocesssteps
* @brief This is an interface of ov::preprocess::PreProcessSteps
*/
struct ov_preprocess_preprocesssteps {
ov::preprocess::PreProcessSteps* object;
};
/**
* @struct mem_stringbuf
* @brief This struct puts memory buffer to stringbuf.
*/
struct mem_stringbuf : std::streambuf {
mem_stringbuf(const char* buffer, size_t sz) {
char* bptr(const_cast<char*>(buffer));
setg(bptr, bptr, bptr + sz);
}
pos_type seekoff(off_type off,
std::ios_base::seekdir dir,
std::ios_base::openmode which = std::ios_base::in) override {
switch (dir) {
case std::ios_base::beg:
setg(eback(), eback() + off, egptr());
break;
case std::ios_base::end:
setg(eback(), egptr() + off, egptr());
break;
case std::ios_base::cur:
setg(eback(), gptr() + off, egptr());
break;
default:
return pos_type(off_type(-1));
}
return (gptr() < eback() || gptr() > egptr()) ? pos_type(off_type(-1)) : pos_type(gptr() - eback());
}
pos_type seekpos(pos_type pos, std::ios_base::openmode which) override {
return seekoff(pos, std::ios_base::beg, which);
}
};
/**
* @struct mem_istream
 * @brief This struct exposes the stringbuf buffer as an istream.
*/
struct mem_istream : virtual mem_stringbuf, std::istream {
mem_istream(const char* buffer, size_t sz)
: mem_stringbuf(buffer, sz),
std::istream(static_cast<std::streambuf*>(this)) {}
};
char* str_to_char_array(const std::string& str);
ov_element_type_e find_ov_element_type_e(ov::element::Type type);
ov::element::Type get_element_type(ov_element_type_e type);

File diff suppressed because it is too large


@ -0,0 +1,144 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_compiled_model.h"
#include "common.h"
ov_status_e ov_compiled_model_get_runtime_model(const ov_compiled_model_t* compiled_model, ov_model_t** model) {
if (!compiled_model || !model) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_model_t> _model(new ov_model_t);
auto runtime_model = compiled_model->object->get_runtime_model();
_model->object = std::const_pointer_cast<ov::Model>(std::move(runtime_model));
*model = _model.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_compiled_model_inputs(const ov_compiled_model_t* compiled_model, ov_output_node_list_t* input_nodes) {
if (!compiled_model || !input_nodes) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto inputs = compiled_model->object->inputs();
int num = inputs.size();
input_nodes->size = num;
std::unique_ptr<ov_output_const_node_t[]> _output_nodes(new ov_output_const_node_t[num]);
for (int i = 0; i < num; i++) {
_output_nodes[i].object = std::make_shared<ov::Output<const ov::Node>>(std::move(inputs[i]));
}
input_nodes->output_nodes = _output_nodes.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_compiled_model_outputs(const ov_compiled_model_t* compiled_model, ov_output_node_list_t* output_nodes) {
if (!compiled_model || !output_nodes) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto outputs = compiled_model->object->outputs();
int num = outputs.size();
output_nodes->size = num;
std::unique_ptr<ov_output_const_node_t[]> _output_nodes(new ov_output_const_node_t[num]);
for (int i = 0; i < num; i++) {
_output_nodes[i].object = std::make_shared<ov::Output<const ov::Node>>(std::move(outputs[i]));
}
output_nodes->output_nodes = _output_nodes.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_compiled_model_create_infer_request(const ov_compiled_model_t* compiled_model,
ov_infer_request_t** infer_request) {
if (!compiled_model || !infer_request) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_infer_request_t> _infer_request(new ov_infer_request_t);
auto infer_req = compiled_model->object->create_infer_request();
_infer_request->object = std::make_shared<ov::InferRequest>(std::move(infer_req));
*infer_request = _infer_request.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_compiled_model_set_property(const ov_compiled_model_t* compiled_model, const ov_property_t* property) {
if (!compiled_model || !property) {
return ov_status_e::INVALID_C_PARAM;
}
try {
compiled_model->object->set_property(property->object);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_compiled_model_get_property(const ov_compiled_model_t* compiled_model,
const ov_property_key_e key,
ov_property_value_t* value) {
if (!compiled_model || !value) {
return ov_status_e::INVALID_C_PARAM;
}
try {
switch (key) {
case ov_property_key_e::SUPPORTED_PROPERTIES: {
auto supported_properties = compiled_model->object->get_property(ov::supported_properties);
std::string tmp_s;
for (const auto& i : supported_properties) {
tmp_s = tmp_s + "\n" + i;
}
char* temp = new char[tmp_s.length() + 1];
std::copy_n(tmp_s.c_str(), tmp_s.length() + 1, temp);
value->ptr = static_cast<void*>(temp);
value->cnt = tmp_s.length() + 1;
value->type = ov_property_value_type_e::CHAR;
break;
}
default:
break;
}
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_compiled_model_export_model(const ov_compiled_model_t* compiled_model, const char* export_model_path) {
if (!compiled_model || !export_model_path) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::ofstream model_file(export_model_path, std::ios::out | std::ios::binary);
if (model_file.is_open()) {
compiled_model->object->export_model(model_file);
} else {
return ov_status_e::GENERAL_ERROR;
}
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_compiled_model_free(ov_compiled_model_t* compiled_model) {
if (compiled_model)
delete compiled_model;
}
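A hedged round-trip sketch for the export path above, paired with ov_core_import_model() from the core implementation below (assumes <stdio.h>/<stdlib.h> are included, a "CPU" device, and minimal I/O error handling):

    static ov_status_e export_and_reimport(ov_core_t* core,
                                           ov_compiled_model_t* compiled_model,
                                           ov_compiled_model_t** reloaded) {
        ov_status_e s = ov_compiled_model_export_model(compiled_model, "model.blob");
        if (s != OK)
            return s;
        FILE* f = fopen("model.blob", "rb");
        if (!f)
            return GENERAL_ERROR;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);
        char* content = (char*)malloc((size_t)size);
        if (!content) {
            fclose(f);
            return GENERAL_ERROR;
        }
        size_t read = fread(content, 1, (size_t)size, f);
        fclose(f);
        s = (read == (size_t)size) ? ov_core_import_model(core, content, read, "CPU", reloaded)
                                   : GENERAL_ERROR;
        free(content);
        return s;
    }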


@ -0,0 +1,494 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_core.h"
#include "common.h"
/**
 * @brief Global array of error message strings.
 * Do not change its order.
*/
char const* error_infos[] = {"success",
                             "general error",
                             "not implemented",
                             "network not loaded",
                             "input parameter mismatch",
                             "cannot find the value",
                             "out of bounds",
                             "unexpected error",
                             "request is busy",
                             "result is not ready",
                             "not allocated",
                             "inference not started",
                             "network not read",
                             "inference cancelled",
                             "invalid C input parameters",
                             "unknown C error"};
const char* ov_get_error_info(ov_status_e status) {
auto index = -status;
auto max_index = sizeof(error_infos) / sizeof(error_infos[0]) - 1;
if (index > max_index)
return error_infos[max_index];
return error_infos[index];
}
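Since failing ov_status_e codes are negative, the table is indexed by the negated status. A hedged caller-side sketch, assuming <stdio.h> is included:

    ov_core_t* core = NULL;
    ov_status_e s = ov_core_create(&core);
    if (s != OK)
        fprintf(stderr, "ov_core_create failed: %s\n", ov_get_error_info(s));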
char* str_to_char_array(const std::string& str) {
std::unique_ptr<char> _char_array(new char[str.length() + 1]);
char* char_array = _char_array.release();
std::copy_n(str.begin(), str.length() + 1, char_array);
return char_array;
}
ov_status_e ov_get_openvino_version(ov_version_t* version) {
if (!version) {
return ov_status_e::INVALID_C_PARAM;
}
try {
ov::Version object = ov::get_openvino_version();
std::string version_builderNumber = object.buildNumber;
version->buildNumber = str_to_char_array(version_builderNumber);
std::string version_description = object.description;
version->description = str_to_char_array(version_description);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_version_free(ov_version_t* version) {
if (!version) {
return;
}
delete[] version->buildNumber;
version->buildNumber = nullptr;
delete[] version->description;
version->description = nullptr;
}
ov_status_e ov_core_create_with_config(const char* xml_config_file, ov_core_t** core) {
if (!core || !xml_config_file) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_core_t> _core(new ov_core_t);
_core->object = std::make_shared<ov::Core>(xml_config_file);
*core = _core.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_core_create(ov_core_t** core) {
return ov_core_create_with_config("", core);
}
void ov_core_free(ov_core_t* core) {
if (core)
delete core;
}
ov_status_e ov_core_read_model(const ov_core_t* core,
const char* model_path,
const char* bin_path,
ov_model_t** model) {
if (!core || !model_path || !model) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::string bin = "";
if (bin_path) {
bin = bin_path;
}
std::unique_ptr<ov_model_t> _model(new ov_model_t);
_model->object = core->object->read_model(model_path, bin);
*model = _model.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_core_read_model_from_memory(const ov_core_t* core,
const char* model_str,
const ov_tensor_t* weights,
ov_model_t** model) {
if (!core || !model_str || !model) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_model_t> _model(new ov_model_t);
if (weights) {
_model->object = core->object->read_model(model_str, *(weights->object));
} else {
_model->object = core->object->read_model(model_str, ov::Tensor());
}
*model = _model.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_core_compile_model(const ov_core_t* core,
const ov_model_t* model,
const char* device_name,
ov_compiled_model_t** compiled_model,
const ov_property_t* property) {
if (!core || !model || !compiled_model) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::string dev_name = "";
ov::CompiledModel object;
if (device_name) {
dev_name = device_name;
object = core->object->compile_model(model->object, dev_name);
} else {
object = core->object->compile_model(model->object);
}
std::unique_ptr<ov_compiled_model_t> _compiled_model(new ov_compiled_model_t);
_compiled_model->object = std::make_shared<ov::CompiledModel>(std::move(object));
*compiled_model = _compiled_model.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_core_compile_model_from_file(const ov_core_t* core,
const char* model_path,
const char* device_name,
ov_compiled_model_t** compiled_model,
const ov_property_t* property) {
if (!core || !model_path || !compiled_model) {
return ov_status_e::INVALID_C_PARAM;
}
try {
ov::CompiledModel object;
std::string dev_name = "";
if (device_name) {
dev_name = device_name;
object = core->object->compile_model(model_path, dev_name);
} else {
object = core->object->compile_model(model_path);
}
std::unique_ptr<ov_compiled_model_t> _compiled_model(new ov_compiled_model_t);
_compiled_model->object = std::make_shared<ov::CompiledModel>(std::move(object));
*compiled_model = _compiled_model.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_core_set_property(const ov_core_t* core, const char* device_name, const ov_property_t* property) {
if (!core || !property) {
return ov_status_e::INVALID_C_PARAM;
}
try {
if (device_name) {
core->object->set_property(device_name, property->object);
} else {
core->object->set_property(property->object);
}
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_core_get_property(const ov_core_t* core,
const char* device_name,
const ov_property_key_e key,
ov_property_value_t* value) {
if (!core || !device_name || !value) {
return ov_status_e::INVALID_C_PARAM;
}
try {
switch (key) {
case ov_property_key_e::SUPPORTED_PROPERTIES: {
auto supported_properties = core->object->get_property(device_name, ov::supported_properties);
std::string tmp_s;
for (const auto& i : supported_properties) {
tmp_s = tmp_s + "\n" + i;
}
char* tmp = new char[tmp_s.length() + 1];
std::copy_n(tmp_s.begin(), tmp_s.length() + 1, tmp);
value->ptr = static_cast<void*>(tmp);
value->cnt = tmp_s.length() + 1;
value->type = ov_property_value_type_e::CHAR;
break;
}
case ov_property_key_e::AVAILABLE_DEVICES: {
auto available_devices = core->object->get_property(device_name, ov::available_devices);
std::string tmp_s;
for (const auto& i : available_devices) {
tmp_s = tmp_s + "\n" + i;
}
char* tmp = new char[tmp_s.length() + 1];
std::copy_n(tmp_s.begin(), tmp_s.length() + 1, tmp);
value->ptr = static_cast<void*>(tmp);
value->cnt = tmp_s.length() + 1;
value->type = ov_property_value_type_e::CHAR;
break;
}
case ov_property_key_e::OPTIMAL_NUMBER_OF_INFER_REQUESTS: {
auto optimal_number_of_infer_requests =
core->object->get_property(device_name, ov::optimal_number_of_infer_requests);
uint32_t* temp = new uint32_t;
*temp = optimal_number_of_infer_requests;
value->ptr = static_cast<void*>(temp);
value->cnt = 1;
value->type = ov_property_value_type_e::UINT32;
break;
}
case ov_property_key_e::RANGE_FOR_ASYNC_INFER_REQUESTS: {
auto range = core->object->get_property(device_name, ov::range_for_async_infer_requests);
uint32_t* temp = new uint32_t[3];
temp[0] = std::get<0>(range);
temp[1] = std::get<1>(range);
temp[2] = std::get<2>(range);
value->ptr = static_cast<void*>(temp);
value->cnt = 3;
value->type = ov_property_value_type_e::UINT32;
break;
}
case ov_property_key_e::RANGE_FOR_STREAMS: {
auto range = core->object->get_property(device_name, ov::range_for_streams);
uint32_t* temp = new uint32_t[2];
temp[0] = std::get<0>(range);
temp[1] = std::get<1>(range);
value->ptr = static_cast<void*>(temp);
value->cnt = 2;
value->type = ov_property_value_type_e::UINT32;
break;
}
case ov_property_key_e::FULL_DEVICE_NAME: {
auto name = core->object->get_property(device_name, ov::device::full_name);
char* tmp = new char[name.length() + 1];
std::copy_n(name.begin(), name.length() + 1, tmp);
value->ptr = static_cast<void*>(tmp);
value->cnt = name.length() + 1;
value->type = ov_property_value_type_e::CHAR;
break;
}
case ov_property_key_e::OPTIMIZATION_CAPABILITIES: {
auto capabilities = core->object->get_property(device_name, ov::device::capabilities);
std::string tmp_s;
for (const auto& i : capabilities) {
tmp_s = tmp_s + "\n" + i;
}
char* tmp = new char[tmp_s.length() + 1];
std::copy_n(tmp_s.begin(), tmp_s.length() + 1, tmp);
value->ptr = static_cast<void*>(tmp);
value->cnt = tmp_s.length() + 1;
value->type = ov_property_value_type_e::CHAR;
break;
}
case ov_property_key_e::CACHE_DIR: {
auto dir = core->object->get_property(device_name, ov::cache_dir);
char* tmp = new char[dir.length() + 1];
std::copy_n(dir.begin(), dir.length() + 1, tmp);
value->ptr = static_cast<void*>(tmp);
value->cnt = dir.length() + 1;
value->type = ov_property_value_type_e::CHAR;
break;
}
case ov_property_key_e::NUM_STREAMS: {
auto num = core->object->get_property(device_name, ov::num_streams);
int32_t* temp = new int32_t;
*temp = num.num;
value->ptr = static_cast<void*>(temp);
value->cnt = 1;
value->type = ov_property_value_type_e::INT32;
break;
}
case ov_property_key_e::AFFINITY: {
auto affinity = core->object->get_property(device_name, ov::affinity);
ov_affinity_e* temp = new ov_affinity_e;
*temp = static_cast<ov_affinity_e>(affinity);
value->ptr = static_cast<void*>(temp);
value->cnt = 1;
value->type = ov_property_value_type_e::ENUM;
break;
}
case ov_property_key_e::INFERENCE_NUM_THREADS: {
auto num = core->object->get_property(device_name, ov::inference_num_threads);
int32_t* temp = new int32_t;
*temp = num;
value->ptr = static_cast<void*>(temp);
value->cnt = 1;
value->type = ov_property_value_type_e::INT32;
break;
}
case ov_property_key_e::PERFORMANCE_HINT: {
auto perf_mode = core->object->get_property(device_name, ov::hint::performance_mode);
ov_performance_mode_e* temp = new ov_performance_mode_e;
*temp = static_cast<ov_performance_mode_e>(perf_mode);
value->ptr = static_cast<void*>(temp);
value->cnt = 1;
value->type = ov_property_value_type_e::ENUM;
break;
}
case ov_property_key_e::NETWORK_NAME: {
auto name = core->object->get_property(device_name, ov::model_name);
char* tmp = new char[name.length() + 1];
std::copy_n(name.begin(), name.length() + 1, tmp);
value->ptr = static_cast<void*>(tmp);
value->cnt = name.length() + 1;
value->type = ov_property_value_type_e::CHAR;
break;
}
case ov_property_key_e::INFERENCE_PRECISION_HINT: {
auto infer_precision = core->object->get_property(device_name, ov::hint::inference_precision);
ov_element_type_e* temp = new ov_element_type_e;
*temp = static_cast<ov_element_type_e>(ov::element::Type_t(infer_precision));
value->ptr = static_cast<void*>(temp);
value->cnt = 1;
value->type = ov_property_value_type_e::ENUM;
break;
}
case ov_property_key_e::OPTIMAL_BATCH_SIZE: {
auto batch_size = core->object->get_property(device_name, ov::optimal_batch_size);
uint32_t* temp = new uint32_t;
*temp = batch_size;
value->ptr = static_cast<void*>(temp);
value->cnt = 1;
value->type = ov_property_value_type_e::UINT32;
break;
}
case ov_property_key_e::MAX_BATCH_SIZE: {
auto batch_size = core->object->get_property(device_name, ov::max_batch_size);
uint32_t* temp = new uint32_t;
*temp = batch_size;
value->ptr = static_cast<void*>(temp);
value->cnt = 1;
value->type = ov_property_value_type_e::UINT32;
break;
}
case ov_property_key_e::PERFORMANCE_HINT_NUM_REQUESTS: {
auto num_requests = core->object->get_property(device_name, ov::hint::num_requests);
uint32_t* temp = new uint32_t;
*temp = num_requests;
value->ptr = static_cast<void*>(temp);
value->cnt = 1;
value->type = ov_property_value_type_e::UINT32;
break;
}
default:
return ov_status_e::OUT_OF_BOUNDS;
break;
}
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_core_get_available_devices(const ov_core_t* core, ov_available_devices_t* devices) {
if (!core) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto available_devices = core->object->get_available_devices();
devices->size = available_devices.size();
std::unique_ptr<char*[]> tmp_devices(new char*[available_devices.size()]);
for (int i = 0; i < available_devices.size(); i++) {
tmp_devices[i] = str_to_char_array(available_devices[i]);
}
devices->devices = tmp_devices.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_available_devices_free(ov_available_devices_t* devices) {
if (!devices) {
return;
}
for (int i = 0; i < devices->size; i++) {
if (devices->devices[i]) {
delete[] devices->devices[i];
}
}
if (devices->devices)
delete[] devices->devices;
devices->devices = nullptr;
devices->size = 0;
}
ov_status_e ov_core_import_model(const ov_core_t* core,
const char* content,
const size_t content_size,
const char* device_name,
ov_compiled_model_t** compiled_model) {
if (!core || !content || !device_name || !compiled_model) {
return ov_status_e::INVALID_C_PARAM;
}
try {
mem_istream model_stream(content, content_size);
std::unique_ptr<ov_compiled_model_t> _compiled_model(new ov_compiled_model_t);
auto object = core->object->import_model(model_stream, device_name);
_compiled_model->object = std::make_shared<ov::CompiledModel>(std::move(object));
*compiled_model = _compiled_model.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_core_get_versions_by_device_name(const ov_core_t* core,
const char* device_name,
ov_core_version_list_t* versions) {
if (!core || !device_name || !versions) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto object = core->object->get_versions(device_name);
if (object.empty()) {
return ov_status_e::NOT_FOUND;
}
versions->size = object.size();
auto tmp_versions(new ov_core_version_t[object.size()]);
auto iter = object.cbegin();
for (int i = 0; i < object.size(); i++, iter++) {
const auto& tmp_version_name = iter->first;
tmp_versions[i].device_name = str_to_char_array(tmp_version_name);
const auto tmp_version_build_number = iter->second.buildNumber;
tmp_versions[i].version.buildNumber = str_to_char_array(tmp_version_build_number);
const auto tmp_version_description = iter->second.description;
tmp_versions[i].version.description = str_to_char_array(tmp_version_description);
}
versions->versions = tmp_versions;
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_core_versions_free(ov_core_version_list_t* versions) {
if (!versions) {
return;
}
for (int i = 0; i < versions->size; i++) {
if (versions->versions[i].device_name)
delete[] versions->versions[i].device_name;
if (versions->versions[i].version.buildNumber)
delete[] versions->versions[i].version.buildNumber;
if (versions->versions[i].version.description)
delete[] versions->versions[i].version.description;
}
if (versions->versions)
delete[] versions->versions;
versions->versions = nullptr;
}
void ov_free(const char* content) {
    if (content)
        delete[] content;  // strings handed out by this API are allocated with new[], so use delete[]
}
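Taken together, the functions in this file support the usual read-compile-infer flow. A hedged end-to-end sketch (assumes a "CPU" device, a NULL bin path falling back to an empty string as coded above, and that ov_model_free() exists in ov_model.h, which is not shown here):

    static ov_status_e run_once(const char* xml_path) {
        ov_core_t* core = NULL;
        ov_model_t* model = NULL;
        ov_compiled_model_t* compiled = NULL;
        ov_infer_request_t* request = NULL;

        ov_status_e s = ov_core_create(&core);
        if (s == OK) s = ov_core_read_model(core, xml_path, NULL, &model);
        if (s == OK) s = ov_core_compile_model(core, model, "CPU", &compiled, NULL);
        if (s == OK) s = ov_compiled_model_create_infer_request(compiled, &request);
        if (s == OK) s = ov_infer_request_infer(request); /* synchronous inference */

        ov_infer_request_free(request);
        ov_compiled_model_free(compiled);
        /* ov_model_free(model); -- assumed to exist alongside the other *_free() calls */
        ov_core_free(core);
        return s;
    }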


@ -0,0 +1,74 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_dimension.h"
#include "common.h"
ov_status_e ov_dimension_create_dynamic(ov_dimension_t** dim, int64_t min_dimension, int64_t max_dimension) {
if (!dim || min_dimension < -1 || max_dimension < -1) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_dimension_t> _dim(new ov_dimension_t);
if (min_dimension != max_dimension) {
_dim->object = ov::Dimension(min_dimension, max_dimension);
} else {
if (min_dimension > -1) {
_dim->object = ov::Dimension(min_dimension);
} else {
_dim->object = ov::Dimension();
}
}
*dim = _dim.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_dimension_create(ov_dimension_t** dim, int64_t dimension_value) {
if (!dim || dimension_value <= 0) {
return ov_status_e::INVALID_C_PARAM;
}
return ov_dimension_create_dynamic(dim, dimension_value, dimension_value);
}
void ov_dimension_free(ov_dimension_t* dim) {
if (dim)
delete dim;
}
ov_status_e ov_dimensions_create(ov_dimensions_t** dimensions) {
if (!dimensions) {
return ov_status_e::INVALID_C_PARAM;
}
*dimensions = nullptr;
try {
std::unique_ptr<ov_dimensions_t> dims(new ov_dimensions_t);
*dimensions = dims.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_dimensions_add(ov_dimensions_t* dimensions, int64_t value) {
if (!dimensions || value < 0) {
return ov_status_e::INVALID_C_PARAM;
}
dimensions->object.emplace_back(value);
return ov_status_e::OK;
}
ov_status_e ov_dimensions_add_dynamic(ov_dimensions_t* dimensions, int64_t min_dimension, int64_t max_dimension) {
if (!dimensions || min_dimension < -1 || max_dimension < -1) {
return ov_status_e::INVALID_C_PARAM;
}
dimensions->object.emplace_back(min_dimension, max_dimension);
return ov_status_e::OK;
}
void ov_dimensions_free(ov_dimensions_t* dimensions) {
if (dimensions)
delete dimensions;
}
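A hedged sketch of building a mixed static/dynamic dimension list with the helpers above (such a list is typically consumed by the partial shape API, which lives in another file):

    ov_dimensions_t* dims = NULL;
    if (ov_dimensions_create(&dims) == OK) {
        ov_dimensions_add(dims, 1);                 /* static batch of 1 */
        ov_dimensions_add(dims, 3);                 /* static channel count */
        ov_dimensions_add_dynamic(dims, 100, 200);  /* height in [100, 200] */
        ov_dimensions_add_dynamic(dims, 1, 8);      /* width in [1, 8] */
        /* ... hand dims to the partial-shape constructor ... */
        ov_dimensions_free(dims);
    }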


@ -0,0 +1,190 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_infer_request.h"
#include "common.h"
void ov_infer_request_free(ov_infer_request_t* infer_request) {
if (infer_request)
delete infer_request;
}
ov_status_e ov_infer_request_set_tensor(ov_infer_request_t* infer_request,
const char* tensor_name,
const ov_tensor_t* tensor) {
if (!infer_request || !tensor_name || !tensor) {
return ov_status_e::INVALID_C_PARAM;
}
try {
infer_request->object->set_tensor(tensor_name, *tensor->object);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_infer_request_set_input_tensor(ov_infer_request_t* infer_request,
size_t idx,
const ov_tensor_t* tensor) {
if (!infer_request || !tensor) {
return ov_status_e::INVALID_C_PARAM;
}
try {
infer_request->object->set_input_tensor(idx, *tensor->object);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_infer_request_get_tensor(const ov_infer_request_t* infer_request,
const char* tensor_name,
ov_tensor_t** tensor) {
if (!infer_request || !tensor_name || !tensor) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_tensor_t> _tensor(new ov_tensor_t);
ov::Tensor tensor_get = infer_request->object->get_tensor(tensor_name);
_tensor->object = std::make_shared<ov::Tensor>(std::move(tensor_get));
*tensor = _tensor.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_infer_request_get_output_tensor(const ov_infer_request_t* infer_request,
size_t idx,
ov_tensor_t** tensor) {
if (!infer_request || !tensor) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_tensor_t> _tensor(new ov_tensor_t);
ov::Tensor tensor_get = infer_request->object->get_output_tensor(idx);
_tensor->object = std::make_shared<ov::Tensor>(std::move(tensor_get));
*tensor = _tensor.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_infer_request_infer(ov_infer_request_t* infer_request) {
if (!infer_request) {
return ov_status_e::INVALID_C_PARAM;
}
try {
infer_request->object->infer();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_infer_request_cancel(ov_infer_request_t* infer_request) {
if (!infer_request) {
return ov_status_e::INVALID_C_PARAM;
}
try {
infer_request->object->cancel();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_infer_request_start_async(ov_infer_request_t* infer_request) {
if (!infer_request) {
return ov_status_e::INVALID_C_PARAM;
}
try {
infer_request->object->start_async();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_infer_request_wait(ov_infer_request_t* infer_request) {
if (!infer_request) {
return ov_status_e::INVALID_C_PARAM;
}
try {
infer_request->object->wait();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_infer_request_set_callback(ov_infer_request_t* infer_request, const ov_callback_t* callback) {
if (!infer_request || !callback) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto func = [callback](std::exception_ptr ex) {
callback->callback_func(callback->args);
};
infer_request->object->set_callback(func);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_infer_request_get_profiling_info(ov_infer_request_t* infer_request,
ov_profiling_info_list_t* profiling_infos) {
if (!infer_request || !profiling_infos) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto infos = infer_request->object->get_profiling_info();
auto num = infos.size();
profiling_infos->size = num;
std::unique_ptr<ov_profiling_info_t[]> _profiling_info_arr(new ov_profiling_info_t[num]);
for (size_t i = 0; i < num; i++) {
_profiling_info_arr[i].status = (ov_profiling_info_t::Status)infos[i].status;
_profiling_info_arr[i].real_time = infos[i].real_time.count();
_profiling_info_arr[i].cpu_time = infos[i].cpu_time.count();
_profiling_info_arr[i].node_name = str_to_char_array(infos[i].node_name);
_profiling_info_arr[i].exec_type = str_to_char_array(infos[i].exec_type);
_profiling_info_arr[i].node_type = str_to_char_array(infos[i].node_type);
}
profiling_infos->profiling_infos = _profiling_info_arr.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_profiling_info_list_free(ov_profiling_info_list_t* profiling_infos) {
if (!profiling_infos) {
return;
}
for (int i = 0; i < profiling_infos->size; i++) {
if (profiling_infos->profiling_infos[i].node_name)
delete[] profiling_infos->profiling_infos[i].node_name;
if (profiling_infos->profiling_infos[i].exec_type)
delete[] profiling_infos->profiling_infos[i].exec_type;
if (profiling_infos->profiling_infos[i].node_type)
delete[] profiling_infos->profiling_infos[i].node_type;
}
if (profiling_infos->profiling_infos)
delete[] profiling_infos->profiling_infos;
profiling_infos->profiling_infos = nullptr;
profiling_infos->size = 0;
}
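A condensed synchronous-inference sketch for the request API above (the request and tensor are assumed to come from ov_compiled_model_create_infer_request() and ov_tensor_create(); "data" is a hypothetical input name, OK assumed to be the success enumerator):

#include "openvino/c/ov_infer_request.h"

ov_status_e infer_once(ov_infer_request_t* request, ov_tensor_t* input) {
    ov_status_e status = ov_infer_request_set_tensor(request, "data", input);
    if (status != OK)
        return status;
    status = ov_infer_request_infer(request); /* blocks until inference completes */
    if (status != OK)
        return status;
    ov_tensor_t* output = NULL;
    status = ov_infer_request_get_output_tensor(request, 0, &output);
    if (status == OK)
        ov_tensor_free(output); /* the caller owns the returned wrapper */
    return status;
}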

@@ -0,0 +1,35 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_layout.h"
#include "common.h"
ov_status_e ov_layout_create(ov_layout_t** layout, const char* layout_desc) {
if (!layout || !layout_desc) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_layout_t> _layout(new ov_layout_t);
_layout->object = ov::Layout(layout_desc);
*layout = _layout.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_layout_free(ov_layout_t* layout) {
if (layout)
delete layout;
}
const char* ov_layout_to_string(ov_layout_t* layout) {
if (!layout) {
return str_to_char_array("Error: null layout!");
}
auto str = layout->object.to_string();
const char* res = str_to_char_array(str);
return res;
}
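A short layout round-trip sketch (hypothetical caller; note the string returned by ov_layout_to_string is heap-allocated via str_to_char_array and must be released with ov_free):

#include <stdio.h>
#include "openvino/c/ov_layout.h"

void layout_example(void) {
    ov_layout_t* layout = NULL;
    if (ov_layout_create(&layout, "NCHW") != OK)
        return;
    const char* desc = ov_layout_to_string(layout);
    printf("layout: %s\n", desc);
    ov_free(desc);
    ov_layout_free(layout);
}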

@@ -0,0 +1,193 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_model.h"
#include "common.h"
ov_status_e ov_model_outputs(const ov_model_t* model, ov_output_node_list_t* output_nodes) {
if (!model || !output_nodes) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto results = std::const_pointer_cast<const ov::Model>(model->object)->outputs();
output_nodes->size = results.size();
std::unique_ptr<ov_output_const_node_t[]> tmp_output_nodes(new ov_output_const_node_t[output_nodes->size]);
for (size_t i = 0; i < output_nodes->size; i++) {
tmp_output_nodes[i].object = std::make_shared<ov::Output<const ov::Node>>(std::move(results[i]));
}
output_nodes->output_nodes = tmp_output_nodes.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_model_inputs(const ov_model_t* model, ov_output_node_list_t* input_nodes) {
if (!model || !input_nodes) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto results = std::const_pointer_cast<const ov::Model>(model->object)->inputs();
input_nodes->size = results.size();
std::unique_ptr<ov_output_const_node_t[]> tmp_output_nodes(new ov_output_const_node_t[input_nodes->size]);
for (size_t i = 0; i < input_nodes->size; i++) {
tmp_output_nodes[i].object = std::make_shared<ov::Output<const ov::Node>>(std::move(results[i]));
}
input_nodes->output_nodes = tmp_output_nodes.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_model_input_by_name(const ov_model_t* model,
const char* tensor_name,
ov_output_const_node_t** input_node) {
if (!model || !tensor_name || !input_node) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto result = std::const_pointer_cast<const ov::Model>(model->object)->input(tensor_name);
std::unique_ptr<ov_output_const_node_t> _input_node(new ov_output_const_node_t);
_input_node->object = std::make_shared<ov::Output<const ov::Node>>(std::move(result));
*input_node = _input_node.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_model_input_by_index(const ov_model_t* model, const size_t index, ov_output_const_node_t** input_node) {
if (!model || !input_node) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto result = std::const_pointer_cast<const ov::Model>(model->object)->input(index);
std::unique_ptr<ov_output_const_node_t> _input_node(new ov_output_const_node_t);
_input_node->object = std::make_shared<ov::Output<const ov::Node>>(std::move(result));
*input_node = _input_node.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
bool ov_model_is_dynamic(const ov_model_t* model) {
if (!model) {
printf("[ERROR] The model is NULL!!!\n");
return false;
}
return model->object->is_dynamic();
}
ov_status_e ov_model_reshape_input_by_name(const ov_model_t* model,
const char* tensor_name,
const ov_partial_shape_t* partial_shape) {
if (!model || !tensor_name || !partial_shape) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::map<std::string, ov::PartialShape> in_shape;
if (partial_shape->rank.is_static() && (partial_shape->rank.get_length() == partial_shape->dims.size())) {
in_shape[tensor_name] = partial_shape->dims;
} else {
return ov_status_e::PARAMETER_MISMATCH;
}
model->object->reshape(in_shape);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_model_reshape(const ov_model_t* model,
const char* tensor_names[],
const ov_partial_shape_t* partial_shapes[],
size_t cnt) {
if (!model || !tensor_names || !partial_shapes || cnt < 1) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::map<std::string, ov::PartialShape> in_shapes;
for (size_t i = 0; i < cnt; i++) {
auto name = tensor_names[i];
if (partial_shapes[i]->rank.is_static() &&
(partial_shapes[i]->rank.get_length() == partial_shapes[i]->dims.size())) {
in_shapes[name] = partial_shapes[i]->dims;
} else {
return ov_status_e::PARAMETER_MISMATCH;
}
}
model->object->reshape(in_shapes);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_model_reshape_by_ports(const ov_model_t* model,
size_t* ports,
const ov_partial_shape_t** partial_shape,
size_t cnt) {
if (!model || !ports || !partial_shape || cnt < 1) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::map<size_t, ov::PartialShape> in_shapes;
for (size_t i = 0; i < cnt; i++) {
auto port_id = ports[i];
if (partial_shape[i]->rank.is_static() &&
(partial_shape[i]->rank.get_length() == partial_shape[i]->dims.size())) {
in_shapes[port_id] = partial_shape[i]->dims;
} else {
return ov_status_e::PARAMETER_MISMATCH;
}
}
model->object->reshape(in_shapes);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_model_reshape_one_input(const ov_model_t* model, const ov_partial_shape_t* partial_shape) {
size_t port = 0;
return ov_model_reshape_by_ports(model, &port, &partial_shape, 1);
}
ov_status_e ov_model_reshape_by_nodes(const ov_model_t* model,
const ov_output_node_t* output_nodes[],
const ov_partial_shape_t* partial_shapes[],
size_t cnt) {
if (!model || !output_nodes || !partial_shapes || cnt < 1) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::map<ov::Output<ov::Node>, ov::PartialShape> in_shapes;
for (size_t i = 0; i < cnt; i++) {
auto node = *output_nodes[i]->object;
if (partial_shapes[i]->rank.is_static() &&
(partial_shapes[i]->rank.get_length() == partial_shapes[i]->dims.size())) {
in_shapes[node] = partial_shapes[i]->dims;
} else {
return ov_status_e::PARAMETER_MISMATCH;
}
}
model->object->reshape(in_shapes);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_model_get_friendly_name(const ov_model_t* model, char** friendly_name) {
if (!model || !friendly_name) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto& result = model->object->get_friendly_name();
*friendly_name = str_to_char_array(result);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_model_free(ov_model_t* model) {
if (model)
delete model;
}
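A reshape sketch under the same constraint the implementation above enforces: the partial shape must carry a static rank matching its dimension count, otherwise PARAMETER_MISMATCH comes back ("data" is a hypothetical tensor name):

#include "openvino/c/ov_model.h"

ov_status_e make_input_static(const ov_model_t* model, const ov_partial_shape_t* new_shape) {
    if (!ov_model_is_dynamic(model))
        return OK; /* already fully static, nothing to reshape */
    return ov_model_reshape_input_by_name(model, "data", new_shape);
}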

@@ -0,0 +1,100 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_node.h"
#include "common.h"
ov_status_e ov_node_get_shape(ov_output_const_node_t* node, ov_shape_t* tensor_shape) {
if (!node || !tensor_shape) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto shape = node->object->get_shape();
ov_shape_init(tensor_shape, shape.size());
std::copy_n(shape.begin(), shape.size(), tensor_shape->dims);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_node_list_get_shape_by_index(ov_output_node_list_t* nodes, size_t idx, ov_shape_t* tensor_shape) {
if (!nodes || idx >= nodes->size || !tensor_shape) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto shape = nodes->output_nodes[idx].object->get_shape();
ov_shape_init(tensor_shape, shape.size());
std::copy_n(shape.begin(), shape.size(), tensor_shape->dims);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_node_list_get_any_name_by_index(ov_output_node_list_t* nodes, size_t idx, char** tensor_name) {
if (!nodes || !tensor_name || idx >= nodes->size) {
return ov_status_e::INVALID_C_PARAM;
}
try {
*tensor_name = str_to_char_array(nodes->output_nodes[idx].object->get_any_name());
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_node_list_get_partial_shape_by_index(ov_output_node_list_t* nodes,
size_t idx,
ov_partial_shape_t** partial_shape) {
if (!nodes || idx >= nodes->size || !partial_shape) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_partial_shape_t> _partial_shape(new ov_partial_shape_t);
auto shape = nodes->output_nodes[idx].object->get_partial_shape();
_partial_shape->rank = shape.rank();
auto iter = shape.begin();
for (; iter != shape.end(); iter++)
_partial_shape->dims.emplace_back(*iter);
*partial_shape = _partial_shape.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_node_list_get_element_type_by_index(ov_output_node_list_t* nodes,
size_t idx,
ov_element_type_e* tensor_type) {
if (!nodes || idx >= nodes->size) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto type = (ov::element::Type_t)nodes->output_nodes[idx].object->get_element_type();
*tensor_type = (ov_element_type_e)type;
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_output_node_list_free(ov_output_node_list_t* output_nodes) {
if (output_nodes) {
if (output_nodes->output_nodes)
delete[] output_nodes->output_nodes;
output_nodes->output_nodes = nullptr;
}
}
void ov_output_node_free(ov_output_const_node_t* output_node) {
if (output_node)
delete output_node;
}
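A sketch that walks a model's inputs with the node-list helpers above (field names of ov_output_node_list_t follow the tests later in this patch; error handling abbreviated):

#include <stdio.h>
#include "openvino/c/ov_model.h"
#include "openvino/c/ov_node.h"

void dump_inputs(const ov_model_t* model) {
    ov_output_node_list_t nodes;
    nodes.output_nodes = NULL;
    nodes.size = 0;
    if (ov_model_inputs(model, &nodes) != OK)
        return;
    for (size_t i = 0; i < nodes.size; i++) {
        char* name = NULL;
        ov_element_type_e type;
        if (ov_node_list_get_any_name_by_index(&nodes, i, &name) != OK)
            continue;
        ov_node_list_get_element_type_by_index(&nodes, i, &type);
        printf("input %zu: %s (element type %d)\n", i, name, (int)type);
        ov_free(name); /* names are heap copies */
    }
    ov_output_node_list_free(&nodes);
}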

@@ -0,0 +1,106 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_partial_shape.h"
#include "common.h"
ov_status_e ov_partial_shape_create(ov_partial_shape_t** partial_shape_obj, ov_rank_t* rank, ov_dimensions_t* dims) {
if (!partial_shape_obj || !rank) {
return ov_status_e::INVALID_C_PARAM;
}
*partial_shape_obj = nullptr;
try {
std::unique_ptr<ov_partial_shape_t> partial_shape(new ov_partial_shape_t);
if (rank->object.is_dynamic()) {
partial_shape->rank = rank->object;
} else {
if (!dims || rank->object.get_length() != dims->object.size()) {
return ov_status_e::INVALID_C_PARAM;
}
partial_shape->rank = rank->object;
partial_shape->dims = dims->object;
}
*partial_shape_obj = partial_shape.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_partial_shape_free(ov_partial_shape_t* partial_shape) {
if (partial_shape)
delete partial_shape;
}
const char* ov_partial_shape_to_string(ov_partial_shape_t* partial_shape) {
if (!partial_shape) {
return str_to_char_array("Error: null partial_shape!");
}
// dynamic rank
if (partial_shape->rank.is_dynamic()) {
return str_to_char_array("?");
}
// static rank
auto rank = partial_shape->rank.get_length();
if (rank != partial_shape->dims.size()) {
return str_to_char_array("rank error");
}
std::string str = std::string("{");
int i = 0;
for (auto& item : partial_shape->dims) {
std::ostringstream out;
out.str("");
out << item;
str += out.str();
if (i++ < rank - 1)
str += ",";
}
str += std::string("}");
const char* res = str_to_char_array(str);
return res;
}
ov_status_e ov_partial_shape_to_shape(ov_partial_shape_t* partial_shape, ov_shape_t* shape) {
if (!partial_shape || !shape) {
return ov_status_e::INVALID_C_PARAM;
}
try {
if (partial_shape->rank.is_dynamic()) {
return ov_status_e::PARAMETER_MISMATCH;
}
auto rank = partial_shape->rank.get_length();
ov_shape_init(shape, rank);
for (auto i = 0; i < rank; ++i) {
auto& ov_dim = partial_shape->dims[i];
if (ov_dim.is_static())
shape->dims[i] = ov_dim.get_length();
else
return ov_status_e::PARAMETER_MISMATCH;
}
shape->rank = rank;
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_shape_to_partial_shape(ov_shape_t* shape, ov_partial_shape_t** partial_shape) {
if (!partial_shape || !shape || shape->rank <= 0 || !shape->dims) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_partial_shape_t> _partial_shape(new ov_partial_shape_t);
_partial_shape->rank = ov::Dimension(shape->rank);
for (int i = 0; i < shape->rank; i++) {
_partial_shape->dims.emplace_back(shape->dims[i]);
}
*partial_shape = _partial_shape.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
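A round-trip sketch between the two shape representations (ov_shape_init/ov_shape_deinit live in ov_shape.cpp below; the printed form follows ov_partial_shape_to_string above):

#include <stdio.h>
#include "openvino/c/ov_partial_shape.h"
#include "openvino/c/ov_shape.h"

void shape_roundtrip(void) {
    ov_shape_t shape;
    if (ov_shape_init(&shape, 4) != OK)
        return;
    shape.dims[0] = 1;
    shape.dims[1] = 3;
    shape.dims[2] = 224;
    shape.dims[3] = 224;
    ov_partial_shape_t* partial = NULL;
    if (ov_shape_to_partial_shape(&shape, &partial) == OK) {
        const char* str = ov_partial_shape_to_string(partial);
        printf("%s\n", str); /* prints {1,3,224,224} */
        ov_free(str);
        ov_partial_shape_free(partial);
    }
    ov_shape_deinit(&shape);
}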

@@ -0,0 +1,383 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_prepostprocess.h"
#include "common.h"
const std::map<ov_preprocess_resizealgorithm_e, ov::preprocess::ResizeAlgorithm> resize_algorithm_map = {
{ov_preprocess_resizealgorithm_e::RESIZE_CUBIC, ov::preprocess::ResizeAlgorithm::RESIZE_CUBIC},
{ov_preprocess_resizealgorithm_e::RESIZE_LINEAR, ov::preprocess::ResizeAlgorithm::RESIZE_LINEAR},
{ov_preprocess_resizealgorithm_e::RESIZE_NEAREST, ov::preprocess::ResizeAlgorithm::RESIZE_NEAREST}};
const std::map<ov_color_format_e, ov::preprocess::ColorFormat> color_format_map = {
{ov_color_format_e::UNDEFINE, ov::preprocess::ColorFormat::UNDEFINED},
{ov_color_format_e::NV12_SINGLE_PLANE, ov::preprocess::ColorFormat::NV12_SINGLE_PLANE},
{ov_color_format_e::NV12_TWO_PLANES, ov::preprocess::ColorFormat::NV12_TWO_PLANES},
{ov_color_format_e::I420_SINGLE_PLANE, ov::preprocess::ColorFormat::I420_SINGLE_PLANE},
{ov_color_format_e::I420_THREE_PLANES, ov::preprocess::ColorFormat::I420_THREE_PLANES},
{ov_color_format_e::RGB, ov::preprocess::ColorFormat::RGB},
{ov_color_format_e::BGR, ov::preprocess::ColorFormat::BGR},
{ov_color_format_e::RGBX, ov::preprocess::ColorFormat::RGBX},
{ov_color_format_e::BGRX, ov::preprocess::ColorFormat::BGRX}};
#define GET_OV_COLOR_FORMAT(a) \
(color_format_map.find(a) == color_format_map.end() ? ov::preprocess::ColorFormat::UNDEFINED \
: color_format_map.at(a))
ov_status_e ov_preprocess_prepostprocessor_create(const ov_model_t* model,
ov_preprocess_prepostprocessor_t** preprocess) {
if (!model || !preprocess) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_prepostprocessor_t> _preprocess(new ov_preprocess_prepostprocessor_t);
_preprocess->object = std::make_shared<ov::preprocess::PrePostProcessor>(model->object);
*preprocess = _preprocess.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_preprocess_prepostprocessor_free(ov_preprocess_prepostprocessor_t* preprocess) {
if (preprocess)
delete preprocess;
}
ov_status_e ov_preprocess_prepostprocessor_input(const ov_preprocess_prepostprocessor_t* preprocess,
ov_preprocess_inputinfo_t** preprocess_input_info) {
if (!preprocess || !preprocess_input_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_inputinfo_t> _preprocess_input_info(new ov_preprocess_inputinfo_t);
_preprocess_input_info->object = &(preprocess->object->input());
*preprocess_input_info = _preprocess_input_info.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_prepostprocessor_input_by_name(const ov_preprocess_prepostprocessor_t* preprocess,
const char* tensor_name,
ov_preprocess_inputinfo_t** preprocess_input_info) {
if (!preprocess || !tensor_name || !preprocess_input_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_inputinfo_t> _preprocess_input_info(new ov_preprocess_inputinfo_t);
_preprocess_input_info->object = &(preprocess->object->input(tensor_name));
*preprocess_input_info = _preprocess_input_info.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_prepostprocessor_input_by_index(const ov_preprocess_prepostprocessor_t* preprocess,
const size_t tensor_index,
ov_preprocess_inputinfo_t** preprocess_input_info) {
if (!preprocess || !preprocess_input_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_inputinfo_t> _preprocess_input_info(new ov_preprocess_inputinfo_t);
_preprocess_input_info->object = &(preprocess->object->input(tensor_index));
*preprocess_input_info = _preprocess_input_info.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_preprocess_inputinfo_free(ov_preprocess_inputinfo_t* preprocess_input_info) {
if (preprocess_input_info)
delete preprocess_input_info;
}
ov_status_e ov_preprocess_inputinfo_tensor(const ov_preprocess_inputinfo_t* preprocess_input_info,
ov_preprocess_inputtensorinfo_t** preprocess_input_tensor_info) {
if (!preprocess_input_info || !preprocess_input_tensor_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_inputtensorinfo_t> _preprocess_input_tensor_info(
new ov_preprocess_inputtensorinfo_t);
_preprocess_input_tensor_info->object = &(preprocess_input_info->object->tensor());
*preprocess_input_tensor_info = _preprocess_input_tensor_info.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_preprocess_inputtensorinfo_free(ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info) {
if (preprocess_input_tensor_info)
delete preprocess_input_tensor_info;
}
ov_status_e ov_preprocess_inputinfo_preprocess(const ov_preprocess_inputinfo_t* preprocess_input_info,
ov_preprocess_preprocesssteps_t** preprocess_input_steps) {
if (!preprocess_input_info || !preprocess_input_steps) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_preprocesssteps_t> _preprocess_input_steps(new ov_preprocess_preprocesssteps_t);
_preprocess_input_steps->object = &(preprocess_input_info->object->preprocess());
*preprocess_input_steps = _preprocess_input_steps.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_preprocess_preprocesssteps_free(ov_preprocess_preprocesssteps_t* preprocess_input_process_steps) {
if (preprocess_input_process_steps)
delete preprocess_input_process_steps;
}
ov_status_e ov_preprocess_preprocesssteps_resize(ov_preprocess_preprocesssteps_t* preprocess_input_process_steps,
const ov_preprocess_resizealgorithm_e resize_algorithm) {
if (!preprocess_input_process_steps) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_input_process_steps->object->resize(resize_algorithm_map.at(resize_algorithm));
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_inputtensorinfo_set_element_type(
ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
const ov_element_type_e element_type) {
if (!preprocess_input_tensor_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_input_tensor_info->object->set_element_type(get_element_type(element_type));
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_inputtensorinfo_set_from(ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
const ov_tensor_t* tensor) {
if (!preprocess_input_tensor_info || !tensor) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_input_tensor_info->object->set_from(*(tensor->object));
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_inputtensorinfo_set_layout(ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
ov_layout_t* layout) {
if (!preprocess_input_tensor_info || !layout) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_input_tensor_info->object->set_layout(layout->object);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_inputtensorinfo_set_color_format(
ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
const ov_color_format_e colorFormat) {
if (!preprocess_input_tensor_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_input_tensor_info->object->set_color_format(GET_OV_COLOR_FORMAT(colorFormat));
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_inputtensorinfo_set_spatial_static_shape(
ov_preprocess_inputtensorinfo_t* preprocess_input_tensor_info,
const size_t input_height,
const size_t input_width) {
if (!preprocess_input_tensor_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_input_tensor_info->object->set_spatial_static_shape(input_height, input_width);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_preprocesssteps_convert_element_type(
ov_preprocess_preprocesssteps_t* preprocess_input_process_steps,
const ov_element_type_e element_type) {
if (!preprocess_input_process_steps) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_input_process_steps->object->convert_element_type(get_element_type(element_type));
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_preprocesssteps_convert_color(ov_preprocess_preprocesssteps_t* preprocess_input_process_steps,
const ov_color_format_e colorFormat) {
if (!preprocess_input_process_steps) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_input_process_steps->object->convert_color(GET_OV_COLOR_FORMAT(colorFormat));
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_prepostprocessor_output(const ov_preprocess_prepostprocessor_t* preprocess,
ov_preprocess_outputinfo_t** preprocess_output_info) {
if (!preprocess || !preprocess_output_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_outputinfo_t> _preprocess_output_info(new ov_preprocess_outputinfo_t);
_preprocess_output_info->object = &(preprocess->object->output());
*preprocess_output_info = _preprocess_output_info.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_prepostprocessor_output_by_index(const ov_preprocess_prepostprocessor_t* preprocess,
const size_t tensor_index,
ov_preprocess_outputinfo_t** preprocess_output_info) {
if (!preprocess || !preprocess_output_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_outputinfo_t> _preprocess_output_info(new ov_preprocess_outputinfo_t);
_preprocess_output_info->object = &(preprocess->object->output(tensor_index));
*preprocess_output_info = _preprocess_output_info.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_prepostprocessor_output_by_name(const ov_preprocess_prepostprocessor_t* preprocess,
const char* tensor_name,
ov_preprocess_outputinfo_t** preprocess_output_info) {
if (!preprocess || !tensor_name || !preprocess_output_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_outputinfo_t> _preprocess_output_info(new ov_preprocess_outputinfo_t);
_preprocess_output_info->object = &(preprocess->object->output(tensor_name));
*preprocess_output_info = _preprocess_output_info.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_preprocess_outputinfo_free(ov_preprocess_outputinfo_t* preprocess_output_info) {
if (preprocess_output_info)
delete preprocess_output_info;
}
ov_status_e ov_preprocess_outputinfo_tensor(ov_preprocess_outputinfo_t* preprocess_output_info,
ov_preprocess_outputtensorinfo_t** preprocess_output_tensor_info) {
if (!preprocess_output_info || !preprocess_output_tensor_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_outputtensorinfo_t> _preprocess_output_tensor_info(
new ov_preprocess_outputtensorinfo_t);
_preprocess_output_tensor_info->object = &(preprocess_output_info->object->tensor());
*preprocess_output_tensor_info = _preprocess_output_tensor_info.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_preprocess_outputtensorinfo_free(ov_preprocess_outputtensorinfo_t* preprocess_output_tensor_info) {
if (preprocess_output_tensor_info)
delete preprocess_output_tensor_info;
}
ov_status_e ov_preprocess_output_set_element_type(ov_preprocess_outputtensorinfo_t* preprocess_output_tensor_info,
const ov_element_type_e element_type) {
if (!preprocess_output_tensor_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_output_tensor_info->object->set_element_type(get_element_type(element_type));
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_inputinfo_model(ov_preprocess_inputinfo_t* preprocess_input_info,
ov_preprocess_inputmodelinfo_t** preprocess_input_model_info) {
if (!preprocess_input_info || !preprocess_input_model_info) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_preprocess_inputmodelinfo_t> _preprocess_input_model_info(
new ov_preprocess_inputmodelinfo_t);
_preprocess_input_model_info->object = &(preprocess_input_info->object->model());
*preprocess_input_model_info = _preprocess_input_model_info.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_preprocess_inputmodelinfo_free(ov_preprocess_inputmodelinfo_t* preprocess_input_model_info) {
if (preprocess_input_model_info)
delete preprocess_input_model_info;
}
ov_status_e ov_preprocess_inputmodelinfo_set_layout(ov_preprocess_inputmodelinfo_t* preprocess_input_model_info,
ov_layout_t* layout) {
if (!preprocess_input_model_info || !layout) {
return ov_status_e::INVALID_C_PARAM;
}
try {
preprocess_input_model_info->object->set_layout(layout->object);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_preprocess_prepostprocessor_build(const ov_preprocess_prepostprocessor_t* preprocess,
ov_model_t** model) {
if (!preprocess || !model) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_model_t> _model(new ov_model_t);
_model->object = preprocess->object->build();
*model = _model.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
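End to end, the C flow mirrors the C++ PrePostProcessor builder; a condensed sketch (enumerator spellings such as U8 and RESIZE_LINEAR are assumed to be plain C names, error handling omitted):

#include "openvino/c/ov_prepostprocess.h"

ov_model_t* add_nhwc_u8_input(ov_model_t* model) {
    ov_preprocess_prepostprocessor_t* ppp = NULL;
    ov_preprocess_prepostprocessor_create(model, &ppp);
    ov_preprocess_inputinfo_t* input = NULL;
    ov_preprocess_prepostprocessor_input(ppp, &input);
    ov_preprocess_inputtensorinfo_t* tensor_info = NULL;
    ov_preprocess_inputinfo_tensor(input, &tensor_info);
    ov_layout_t* nhwc = NULL;
    ov_layout_create(&nhwc, "NHWC");
    ov_preprocess_inputtensorinfo_set_layout(tensor_info, nhwc); /* user data is NHWC U8 */
    ov_preprocess_inputtensorinfo_set_element_type(tensor_info, U8);
    ov_preprocess_preprocesssteps_t* steps = NULL;
    ov_preprocess_inputinfo_preprocess(input, &steps);
    ov_preprocess_preprocesssteps_resize(steps, RESIZE_LINEAR);
    ov_model_t* new_model = NULL;
    ov_preprocess_prepostprocessor_build(ppp, &new_model); /* bakes the steps into the graph */
    ov_layout_free(nhwc);
    ov_preprocess_preprocesssteps_free(steps);
    ov_preprocess_inputtensorinfo_free(tensor_info);
    ov_preprocess_inputinfo_free(input);
    ov_preprocess_prepostprocessor_free(ppp);
    return new_model;
}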

@@ -0,0 +1,104 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_property.h"
#include "common.h"
const std::map<ov_performance_mode_e, ov::hint::PerformanceMode> performance_mode_map = {
{ov_performance_mode_e::UNDEFINED_MODE, ov::hint::PerformanceMode::UNDEFINED},
{ov_performance_mode_e::THROUGHPUT, ov::hint::PerformanceMode::THROUGHPUT},
{ov_performance_mode_e::LATENCY, ov::hint::PerformanceMode::LATENCY},
{ov_performance_mode_e::CUMULATIVE_THROUGHPUT, ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT}};
ov_status_e ov_property_create(ov_property_t** property) {
if (!property) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_property_t> _property(new ov_property_t);
*property = _property.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_property_free(ov_property_t* property) {
if (property)
delete property;
}
ov_status_e ov_property_put(ov_property_t* property, ov_property_key_e key, ov_property_value_t* value) {
if (!property || !value) {
return ov_status_e::INVALID_C_PARAM;
}
try {
switch (key) {
case ov_property_key_e::PERFORMANCE_HINT_NUM_REQUESTS: {
uint32_t v = *(static_cast<uint32_t*>(value->ptr));
property->object.emplace(ov::hint::num_requests(v));
break;
}
case ov_property_key_e::NUM_STREAMS: {
uint32_t v = *(static_cast<uint32_t*>(value->ptr));
property->object.emplace(ov::num_streams(v));
break;
}
case ov_property_key_e::PERFORMANCE_HINT: {
ov_performance_mode_e m = *(static_cast<ov_performance_mode_e*>(value->ptr));
if (m > ov_performance_mode_e::CUMULATIVE_THROUGHPUT) {
return ov_status_e::INVALID_C_PARAM;
}
auto v = performance_mode_map.at(m);
property->object.emplace(ov::hint::performance_mode(v));
break;
}
case ov_property_key_e::AFFINITY: {
ov_affinity_e v = *(static_cast<ov_affinity_e*>(value->ptr));
if (v < ov_affinity_e::NONE || v > ov_affinity_e::HYBRID_AWARE) {
return ov_status_e::INVALID_C_PARAM;
}
ov::Affinity affinity = static_cast<ov::Affinity>(v);
property->object.emplace(ov::affinity(affinity));
break;
}
case ov_property_key_e::INFERENCE_NUM_THREADS: {
int32_t v = *(static_cast<int32_t*>(value->ptr));
property->object.emplace(ov::inference_num_threads(v));
break;
}
case ov_property_key_e::INFERENCE_PRECISION_HINT: {
ov_element_type_e v = *(static_cast<ov_element_type_e*>(value->ptr));
if (v > ov_element_type_e::U64) {
return ov_status_e::INVALID_C_PARAM;
}
ov::element::Type type(static_cast<ov::element::Type_t>(v));
property->object.emplace(ov::hint::inference_precision(type));
break;
}
case ov_property_key_e::CACHE_DIR: {
char* dir = static_cast<char*>(value->ptr);
property->object.emplace(ov::cache_dir(std::string(dir)));
break;
}
default:
return ov_status_e::OUT_OF_BOUNDS;
break;
}
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_property_value_clean(ov_property_value_t* value) {
if (value) {
if (value->ptr) {
char* temp = static_cast<char*>(value->ptr);
delete[] temp;  // values handed out by the library (e.g. str_to_char_array) are new[]-allocated
}
value->ptr = nullptr;
value->cnt = 0;
}
}
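A sketch of setting the THROUGHPUT hint (value-struct field names follow the tests later in this patch). Note that ov_property_value_clean deletes value->ptr, so it is only for values the library allocated, e.g. results of ov_core_get_property; never call it on a stack-backed value like this one:

#include "openvino/c/ov_property.h"

ov_property_t* make_throughput_hint(void) {
    ov_property_t* property = NULL;
    if (ov_property_create(&property) != OK)
        return NULL;
    ov_performance_mode_e mode = THROUGHPUT;
    ov_property_value_t value;
    value.ptr = &mode; /* caller-owned storage; ov_property_put copies what it needs */
    value.cnt = 1;
    value.type = ENUM; /* ov_property_value_type_e spelling assumed */
    ov_property_put(property, PERFORMANCE_HINT, &value);
    return property; /* release with ov_property_free() after use */
}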

@@ -0,0 +1,40 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_rank.h"
#include "common.h"
ov_status_e ov_rank_create_dynamic(ov_rank_t** rank, int64_t min_dimension, int64_t max_dimension) {
if (!rank || min_dimension < -1 || max_dimension < -1) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_rank_t> _rank(new ov_rank_t);
if (min_dimension != max_dimension) {
_rank->object = ov::Dimension(min_dimension, max_dimension);
} else {
if (min_dimension > -1) {
_rank->object = ov::Dimension(min_dimension);
} else {
_rank->object = ov::Dimension();
}
}
*rank = _rank.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_rank_create(ov_rank_t** rank, int64_t rank_value) {
if (!rank || rank_value <= 0) {
return ov_status_e::INVALID_C_PARAM;
}
return ov_rank_create_dynamic(rank, rank_value, rank_value);
}
void ov_rank_free(ov_rank_t* rank) {
if (rank)
delete rank;
}

@@ -0,0 +1,35 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_shape.h"
#include "common.h"
ov_status_e ov_shape_init(ov_shape_t* shape, int64_t rank) {
if (!shape || rank <= 0) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<int64_t[]> _dims(new int64_t[rank]);
shape->rank = rank;
shape->dims = _dims.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_shape_deinit(ov_shape_t* shape) {
if (!shape) {
return ov_status_e::INVALID_C_PARAM;
}
shape->rank = 0;
if (shape->dims) {
delete[] shape->dims;
shape->dims = nullptr;
}
return ov_status_e::OK;
}
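Ownership sketch: every successful ov_shape_init must be paired with ov_shape_deinit, since the dims array is heap-allocated (OK enumerator assumed):

#include "openvino/c/ov_shape.h"

void shape_lifecycle(void) {
    ov_shape_t shape;
    if (ov_shape_init(&shape, 2) != OK)
        return;
    shape.dims[0] = 1;
    shape.dims[1] = 3;
    /* ... use shape ... */
    ov_shape_deinit(&shape); /* frees dims and resets rank to 0 */
}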

@@ -0,0 +1,150 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "openvino/c/ov_tensor.h"
#include "common.h"
const std::map<ov_element_type_e, ov::element::Type> element_type_map = {
{ov_element_type_e::UNDEFINED, ov::element::undefined},
{ov_element_type_e::DYNAMIC, ov::element::dynamic},
{ov_element_type_e::BOOLEAN, ov::element::boolean},
{ov_element_type_e::BF16, ov::element::bf16},
{ov_element_type_e::F16, ov::element::f16},
{ov_element_type_e::F32, ov::element::f32},
{ov_element_type_e::F64, ov::element::f64},
{ov_element_type_e::I4, ov::element::i4},
{ov_element_type_e::I8, ov::element::i8},
{ov_element_type_e::I16, ov::element::i16},
{ov_element_type_e::I32, ov::element::i32},
{ov_element_type_e::I64, ov::element::i64},
{ov_element_type_e::U1, ov::element::u1},
{ov_element_type_e::U4, ov::element::u4},
{ov_element_type_e::U8, ov::element::u8},
{ov_element_type_e::U16, ov::element::u16},
{ov_element_type_e::U32, ov::element::u32},
{ov_element_type_e::U64, ov::element::u64}};
ov_element_type_e find_ov_element_type_e(ov::element::Type type) {
for (auto iter = element_type_map.begin(); iter != element_type_map.end(); iter++) {
if (iter->second == type) {
return iter->first;
}
}
return ov_element_type_e::UNDEFINED;
}
ov::element::Type get_element_type(ov_element_type_e type) {
return element_type_map.at(type);
}
ov_status_e ov_tensor_create(const ov_element_type_e type, const ov_shape_t shape, ov_tensor_t** tensor) {
if (!tensor || element_type_map.find(type) == element_type_map.end()) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_tensor_t> _tensor(new ov_tensor_t);
auto tmp_type = get_element_type(type);
ov::Shape tmp_shape;
std::copy_n(shape.dims, shape.rank, std::back_inserter(tmp_shape));
_tensor->object = std::make_shared<ov::Tensor>(tmp_type, tmp_shape);
*tensor = _tensor.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_tensor_create_from_host_ptr(const ov_element_type_e type,
const ov_shape_t shape,
void* host_ptr,
ov_tensor_t** tensor) {
if (!tensor || !host_ptr || element_type_map.find(type) == element_type_map.end()) {
return ov_status_e::INVALID_C_PARAM;
}
try {
std::unique_ptr<ov_tensor_t> _tensor(new ov_tensor_t);
auto tmp_type = get_element_type(type);
ov::Shape tmp_shape;
std::copy_n(shape.dims, shape.rank, std::back_inserter(tmp_shape));
_tensor->object = std::make_shared<ov::Tensor>(tmp_type, tmp_shape, host_ptr);
*tensor = _tensor.release();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_tensor_set_shape(ov_tensor_t* tensor, const ov_shape_t shape) {
if (!tensor) {
return ov_status_e::INVALID_C_PARAM;
}
try {
ov::Shape tmp_shape;
std::copy_n(shape.dims, shape.rank, std::back_inserter(tmp_shape));
tensor->object->set_shape(tmp_shape);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_tensor_get_shape(const ov_tensor_t* tensor, ov_shape_t* shape) {
if (!tensor) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto tmp_shape = tensor->object->get_shape();
ov_shape_init(shape, tmp_shape.size());
std::copy_n(tmp_shape.begin(), tmp_shape.size(), shape->dims);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_tensor_get_element_type(const ov_tensor_t* tensor, ov_element_type_e* type) {
if (!tensor || !type) {
return ov_status_e::INVALID_C_PARAM;
}
try {
auto tmp_type = tensor->object->get_element_type();
*type = find_ov_element_type_e(tmp_type);
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_tensor_get_size(const ov_tensor_t* tensor, size_t* elements_size) {
if (!tensor || !elements_size) {
return ov_status_e::INVALID_C_PARAM;
}
try {
*elements_size = tensor->object->get_size();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_tensor_get_byte_size(const ov_tensor_t* tensor, size_t* byte_size) {
if (!tensor || !byte_size) {
return ov_status_e::INVALID_C_PARAM;
}
try {
*byte_size = tensor->object->get_byte_size();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
ov_status_e ov_tensor_data(const ov_tensor_t* tensor, void** data) {
if (!tensor || !data) {
return ov_status_e::INVALID_C_PARAM;
}
try {
*data = tensor->object->data();
}
CATCH_OV_EXCEPTIONS
return ov_status_e::OK;
}
void ov_tensor_free(ov_tensor_t* tensor) {
if (tensor)
delete tensor;
}
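A sketch that creates an FP32 tensor and zeroes its buffer (F32 enumerator spelling assumed; error handling abbreviated):

#include <string.h>
#include "openvino/c/ov_shape.h"
#include "openvino/c/ov_tensor.h"

ov_tensor_t* make_zeroed_tensor(void) {
    ov_shape_t shape;
    if (ov_shape_init(&shape, 4) != OK)
        return NULL;
    shape.dims[0] = 1;
    shape.dims[1] = 3;
    shape.dims[2] = 224;
    shape.dims[3] = 224;
    ov_tensor_t* tensor = NULL;
    if (ov_tensor_create(F32, shape, &tensor) == OK) {
        void* data = NULL;
        size_t bytes = 0;
        ov_tensor_data(tensor, &data); /* raw pointer into the tensor's buffer */
        ov_tensor_get_byte_size(tensor, &bytes);
        memset(data, 0, bytes);
    }
    ov_shape_deinit(&shape); /* the tensor copied the dims; safe to release */
    return tensor;
}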

@@ -2,6 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
#
# OpenVINO Legacy C API test sample
set(TARGET_NAME "InferenceEngineCAPITests")
add_executable(${TARGET_NAME} ie_c_api_test.cpp test_model_repo.hpp)
@@ -9,10 +10,10 @@ add_executable(${TARGET_NAME} ie_c_api_test.cpp test_model_repo.hpp)
target_link_libraries(${TARGET_NAME} PRIVATE openvino_c commonTestUtils gtest_main)
target_compile_definitions(${TARGET_NAME}
PRIVATE
$<$<BOOL:${ENABLE_GAPI_PREPROCESSING}>:ENABLE_GAPI_PREPROCESSING>
DATA_PATH=\"${DATA_PATH}\"
MODELS_PATH=\"${MODELS_PATH}\")
if(ENABLE_AUTO OR ENABLE_MULTI)
add_dependencies(${TARGET_NAME} openvino_auto_plugin)
@@ -33,6 +34,47 @@ endif()
add_cpplint_target(${TARGET_NAME}_cpplint FOR_TARGETS ${TARGET_NAME})
install(TARGETS ${TARGET_NAME}
RUNTIME DESTINATION tests
COMPONENT tests
EXCLUDE_FROM_ALL)
# OpenVINO 2.0 and Legacy C API test sample
set(TARGET_NAME "ov_capi_test")
file(GLOB SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/ov_*.cpp)
file(GLOB HEADERS ${CMAKE_CURRENT_SOURCE_DIR}/*.hpp)
add_executable(${TARGET_NAME} ${SOURCES} ${HEADERS})
target_link_libraries(${TARGET_NAME} PRIVATE openvino_c
commonTestUtils gtest_main)
target_include_directories(${TARGET_NAME} PUBLIC
$<BUILD_INTERFACE:${OPENVINO_API_SOURCE_DIR}/include>)
target_compile_definitions(${TARGET_NAME}
PRIVATE
DATA_PATH=\"${DATA_PATH}\"
MODELS_PATH=\"${MODELS_PATH}\")
if(ENABLE_AUTO OR ENABLE_MULTI)
add_dependencies(${TARGET_NAME} openvino_auto_plugin)
endif()
if(ENABLE_AUTO_BATCH)
add_dependencies(${TARGET_NAME} openvino_auto_batch_plugin)
endif()
if(ENABLE_INTEL_CPU)
add_dependencies(${TARGET_NAME} openvino_intel_cpu_plugin)
endif()
if(ENABLE_INTEL_GPU)
add_dependencies(${TARGET_NAME} openvino_intel_gpu_plugin)
endif()
add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME})
install(TARGETS ${TARGET_NAME}
RUNTIME DESTINATION tests
COMPONENT tests
EXCLUDE_FROM_ALL)

@@ -2,6 +2,7 @@
// SPDX-License-Identifier: Apache-2.0
//
// clang-format off
#include <gtest/gtest.h>
#include <stdio.h>
#include <stdlib.h>
@@ -77,7 +78,6 @@ void completion_callback(void *args) {
ie_infer_request_t *infer_request = (ie_infer_request_t *)args;
ie_blob_t *output_blob = nullptr;
printf("async infer callback...\n");
IE_EXPECT_OK(ie_infer_request_get_blob(infer_request, "fc_out", &output_blob));
ie_blob_buffer_t buffer;
@@ -2243,3 +2243,4 @@ TEST(ie_blob_make_memory_i420, inferRequestWithI420) {
}
#endif // ENABLE_GAPI_PREPROCESSING
// clang-format on

@@ -0,0 +1,208 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "ov_test.hpp"
class ov_compiled_model : public ::testing::TestWithParam<std::string> {};
INSTANTIATE_TEST_SUITE_P(device_name, ov_compiled_model, ::testing::Values("CPU"));
TEST_P(ov_compiled_model, get_runtime_model) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
ov_compiled_model_t* compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
ov_model_t* runtime_model = nullptr;
OV_EXPECT_OK(ov_compiled_model_get_runtime_model(compiled_model, &runtime_model));
EXPECT_NE(nullptr, runtime_model);
ov_model_free(runtime_model);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}
TEST_P(ov_compiled_model, get_runtime_model_error_handling) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
ov_compiled_model_t* compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
ov_model_t* runtime_model = nullptr;
OV_EXPECT_NOT_OK(ov_compiled_model_get_runtime_model(nullptr, &runtime_model));
OV_EXPECT_NOT_OK(ov_compiled_model_get_runtime_model(compiled_model, nullptr));
ov_model_free(runtime_model);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}
TEST_P(ov_compiled_model, get_inputs) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
ov_compiled_model_t* compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
ov_output_node_list_t input_nodes;
input_nodes.output_nodes = nullptr;
input_nodes.size = 0;
OV_EXPECT_OK(ov_compiled_model_inputs(compiled_model, &input_nodes));
EXPECT_NE(nullptr, input_nodes.output_nodes);
EXPECT_NE(0, input_nodes.size);
ov_output_node_list_free(&input_nodes);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}
TEST_P(ov_compiled_model, get_inputs_error_handling) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
ov_compiled_model_t* compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
ov_output_node_list_t input_nodes;
input_nodes.output_nodes = nullptr;
input_nodes.size = 0;
OV_EXPECT_NOT_OK(ov_compiled_model_inputs(nullptr, &input_nodes));
OV_EXPECT_NOT_OK(ov_compiled_model_inputs(compiled_model, nullptr));
ov_output_node_list_free(&input_nodes);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}
TEST_P(ov_compiled_model, get_outputs) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
ov_compiled_model_t* compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
ov_output_node_list_t output_nodes;
output_nodes.output_nodes = nullptr;
output_nodes.size = 0;
OV_EXPECT_OK(ov_compiled_model_outputs(compiled_model, &output_nodes));
EXPECT_NE(nullptr, output_nodes.output_nodes);
EXPECT_NE(0, output_nodes.size);
ov_output_node_list_free(&output_nodes);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}
TEST_P(ov_compiled_model, get_outputs_error_handling) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
ov_compiled_model_t* compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
ov_output_node_list_t output_nodes;
output_nodes.output_nodes = nullptr;
output_nodes.size = 0;
OV_EXPECT_NOT_OK(ov_compiled_model_outputs(nullptr, &output_nodes));
OV_EXPECT_NOT_OK(ov_compiled_model_outputs(compiled_model, nullptr));
ov_output_node_list_free(&output_nodes);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}
TEST_P(ov_compiled_model, create_infer_request) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
ov_compiled_model_t* compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
ov_infer_request_t* infer_request = nullptr;
OV_EXPECT_OK(ov_compiled_model_create_infer_request(compiled_model, &infer_request));
EXPECT_NE(nullptr, infer_request);
ov_infer_request_free(infer_request);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}
TEST_P(ov_compiled_model, create_infer_request_error_handling) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
ov_compiled_model_t* compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
ov_infer_request_t* infer_request = nullptr;
OV_EXPECT_NOT_OK(ov_compiled_model_create_infer_request(nullptr, &infer_request));
OV_EXPECT_NOT_OK(ov_compiled_model_create_infer_request(compiled_model, nullptr));
ov_infer_request_free(infer_request);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}

@@ -0,0 +1,285 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "ov_test.hpp"
#include "test_model_repo.hpp"
TEST(ov_version, api_version) {
ov_version_t version;
ov_get_openvino_version(&version);
auto ver = ov::get_openvino_version();
EXPECT_STREQ(version.buildNumber, ver.buildNumber);
ov_version_free(&version);
}
TEST(ov_util, ov_get_error_info_check) {
auto res = ov_get_error_info(ov_status_e::INVALID_C_PARAM);
auto str = "invalid c input parameters";
EXPECT_STREQ(res, str);
}
class ov_core : public ::testing::TestWithParam<std::string> {};
INSTANTIATE_TEST_SUITE_P(device_name, ov_core, ::testing::Values("CPU"));
TEST(ov_core, ov_core_create_with_config) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create_with_config(plugins_xml, &core));
ASSERT_NE(nullptr, core);
ov_core_free(core);
}
TEST(ov_core, ov_core_create_with_no_config) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_core_free(core);
}
TEST(ov_core, ov_core_read_model) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_core, ov_core_read_model_no_bin) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, nullptr, &model));
ASSERT_NE(nullptr, model);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_core, ov_core_read_model_from_memory) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
std::vector<uint8_t> weights_content(content_from_file(bin, true));
ov_tensor_t* tensor = nullptr;
ov_shape_t shape;
ov_shape_init(&shape, 2);
shape.dims[0] = 1;
shape.dims[1] = (int64_t)weights_content.size();
OV_ASSERT_OK(ov_tensor_create_from_host_ptr(ov_element_type_e::U8, shape, weights_content.data(), &tensor));
ASSERT_NE(nullptr, tensor);
std::vector<uint8_t> xml_content(content_from_file(xml, false));
ov_model_t* model = nullptr;
OV_ASSERT_OK(
ov_core_read_model_from_memory(core, reinterpret_cast<const char*>(xml_content.data()), tensor, &model));
ASSERT_NE(nullptr, model);
ov_shape_deinit(&shape);
ov_tensor_free(tensor);
ov_model_free(model);
ov_core_free(core);
}
TEST_P(ov_core, ov_core_compile_model) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, nullptr, &model));
ASSERT_NE(nullptr, model);
ov_compiled_model_t* compiled_model = nullptr;
ov_property_t* property = nullptr;
OV_ASSERT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, property));
ASSERT_NE(nullptr, compiled_model);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}
TEST_P(ov_core, ov_core_compile_model_from_file) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_compiled_model_t* compiled_model = nullptr;
ov_property_t* property = nullptr;
OV_ASSERT_OK(ov_core_compile_model_from_file(core, xml, device_name.c_str(), &compiled_model, property));
ASSERT_NE(nullptr, compiled_model);
ov_compiled_model_free(compiled_model);
ov_core_free(core);
}
TEST_P(ov_core, ov_core_set_property) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_property_t* property = nullptr;
OV_ASSERT_OK(ov_property_create(&property));
ov_property_key_e key = ov_property_key_e::PERFORMANCE_HINT;
ov_performance_mode_e mode = ov_performance_mode_e::THROUGHPUT;
ov_property_value_t value;
value.ptr = (void*)&mode;
value.cnt = 1;
value.type = ov_property_value_type_e::ENUM;
OV_ASSERT_OK(ov_property_put(property, key, &value));
OV_ASSERT_OK(ov_core_set_property(core, device_name.c_str(), property));
ov_property_free(property);
ov_core_free(core);
}
TEST_P(ov_core, ov_core_get_property) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_property_value_t property_value;
OV_ASSERT_OK(
ov_core_get_property(core, device_name.c_str(), ov_property_key_e::SUPPORTED_PROPERTIES, &property_value));
ov_property_value_clean(&property_value);
ov_core_free(core);
}
TEST_P(ov_core, ov_core_set_get_property_str) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_property_t* property = nullptr;
OV_ASSERT_OK(ov_property_create(&property));
ov_property_key_e key = ov_property_key_e::CACHE_DIR;
const char cache_dir[] = "./cache_dir";
ov_property_value_t value;
value.ptr = (void*)cache_dir;
value.cnt = sizeof(cache_dir);
value.type = ov_property_value_type_e::CHAR;
OV_ASSERT_OK(ov_property_put(property, key, &value));
OV_ASSERT_OK(ov_core_set_property(core, device_name.c_str(), property));
ov_property_value_t property_value;
OV_ASSERT_OK(ov_core_get_property(core, device_name.c_str(), key, &property_value));
EXPECT_STREQ(cache_dir, (char*)property_value.ptr);
ov_property_free(property);
ov_property_value_clean(&property_value);
ov_core_free(core);
}
TEST_P(ov_core, ov_core_set_get_property_int) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_property_t* property = nullptr;
OV_ASSERT_OK(ov_property_create(&property));
ov_property_key_e key = ov_property_key_e::INFERENCE_NUM_THREADS;
int32_t num = 8;
ov_property_value_t value;
value.ptr = (void*)&num;
value.cnt = 1;
value.type = ov_property_value_type_e::INT32;
OV_ASSERT_OK(ov_property_put(property, key, &value));
OV_ASSERT_OK(ov_core_set_property(core, device_name.c_str(), property));
ov_property_value_t property_value;
OV_ASSERT_OK(ov_core_get_property(core, device_name.c_str(), key, &property_value));
int32_t res = *(int32_t*)property_value.ptr;
EXPECT_EQ(num, res);
ov_property_value_clean(&property_value);
ov_property_free(property);
ov_core_free(core);
}
TEST(ov_core, ov_core_get_available_devices) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_available_devices_t devices;
OV_ASSERT_OK(ov_core_get_available_devices(core, &devices));
ov_available_devices_free(&devices);
ov_core_free(core);
}
TEST_P(ov_core, ov_compiled_model_export_model) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_compiled_model_t* compiled_model = nullptr;
OV_ASSERT_OK(ov_core_compile_model_from_file(core, xml, device_name.c_str(), &compiled_model, nullptr));
ASSERT_NE(nullptr, compiled_model);
std::string export_path = TestDataHelpers::generate_model_path("test_model", "exported_model.blob");
OV_ASSERT_OK(ov_compiled_model_export_model(compiled_model, export_path.c_str()));
ov_compiled_model_free(compiled_model);
ov_core_free(core);
}
TEST_P(ov_core, ov_core_import_model) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_compiled_model_t* compiled_model = nullptr;
OV_ASSERT_OK(ov_core_compile_model_from_file(core, xml, device_name.c_str(), &compiled_model, nullptr));
ASSERT_NE(nullptr, compiled_model);
std::string export_path = TestDataHelpers::generate_model_path("test_model", "exported_model.blob");
OV_ASSERT_OK(ov_compiled_model_export_model(compiled_model, export_path.c_str()));
ov_compiled_model_free(compiled_model);
std::vector<uint8_t> buffer(content_from_file(export_path.c_str(), true));
ov_compiled_model_t* compiled_model_imported = nullptr;
OV_ASSERT_OK(ov_core_import_model(core,
reinterpret_cast<const char*>(buffer.data()),
buffer.size(),
device_name.c_str(),
&compiled_model_imported));
ASSERT_NE(nullptr, compiled_model_imported);
ov_compiled_model_free(compiled_model_imported);
ov_core_free(core);
}
TEST_P(ov_core, ov_core_get_versions_by_device_name) {
auto device_name = GetParam();
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_core_version_list_t version_list;
OV_ASSERT_OK(ov_core_get_versions_by_device_name(core, device_name.c_str(), &version_list));
EXPECT_EQ(version_list.size, 1);
ov_core_versions_free(&version_list);
ov_core_free(core);
}

@@ -0,0 +1,339 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <mutex>
#include "ov_test.hpp"
void get_tensor_info(ov_model_t* model,
bool input,
size_t idx,
char** name,
ov_shape_t* shape,
ov_element_type_e* type) {
ov_output_node_list_t output_nodes;
output_nodes.size = 0;
output_nodes.output_nodes = nullptr;
if (input) {
OV_EXPECT_OK(ov_model_inputs(model, &output_nodes));
} else {
OV_EXPECT_OK(ov_model_outputs(model, &output_nodes));
}
EXPECT_NE(nullptr, output_nodes.output_nodes);
EXPECT_NE(0, output_nodes.size);
OV_EXPECT_OK(ov_node_list_get_any_name_by_index(&output_nodes, idx, name));
EXPECT_NE(nullptr, *name);
OV_EXPECT_OK(ov_node_list_get_shape_by_index(&output_nodes, idx, shape));
OV_EXPECT_OK(ov_node_list_get_element_type_by_index(&output_nodes, idx, type));
ov_partial_shape_t* p_shape = nullptr;
OV_EXPECT_OK(ov_node_list_get_partial_shape_by_index(&output_nodes, idx, &p_shape));
ov_partial_shape_free(p_shape);
ov_output_node_list_free(&output_nodes);
}
class ov_infer_request : public ::testing::TestWithParam<std::string> {
protected:
void SetUp() override {
auto device_name = GetParam();
core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
in_tensor_name = nullptr;
ov_shape_t tensor_shape = {0, nullptr};
ov_element_type_e tensor_type;
get_tensor_info(model, true, 0, &in_tensor_name, &tensor_shape, &tensor_type);
input_tensor = nullptr;
output_tensor = nullptr;
OV_EXPECT_OK(ov_tensor_create(tensor_type, tensor_shape, &input_tensor));
EXPECT_NE(nullptr, input_tensor);
compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
infer_request = nullptr;
OV_EXPECT_OK(ov_compiled_model_create_infer_request(compiled_model, &infer_request));
EXPECT_NE(nullptr, infer_request);
ov_shape_deinit(&tensor_shape);
}
void TearDown() override {
ov_tensor_free(input_tensor);
ov_tensor_free(output_tensor);
ov_free(in_tensor_name);
ov_infer_request_free(infer_request);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
}
public:
ov_core_t* core;
ov_model_t* model;
ov_compiled_model_t* compiled_model;
ov_infer_request_t* infer_request;
char* in_tensor_name;
ov_tensor_t* input_tensor;
ov_tensor_t* output_tensor;
static std::mutex m;
static bool ready;
static std::condition_variable condVar;
};
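// Definitions for the static members shared with the async-callback test:
// the callback thread sets `ready` while holding `m`, the test thread waits on `condVar`.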
bool ov_infer_request::ready = false;
std::mutex ov_infer_request::m;
std::condition_variable ov_infer_request::condVar;
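// Fixture variant that routes the model through a PrePostProcessor: a
// 1x224x224x3 U8 NHWC input tensor is resized linearly and mapped onto the
// model's NCHW layout before compilation.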
class ov_infer_request_ppp : public ::testing::TestWithParam<std::string> {
protected:
void SetUp() override {
auto device_name = GetParam();
output_tensor = nullptr;
core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
model = nullptr;
OV_EXPECT_OK(ov_core_read_model(core, xml, bin, &model));
EXPECT_NE(nullptr, model);
preprocess = nullptr;
OV_EXPECT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
EXPECT_NE(nullptr, preprocess);
input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
EXPECT_NE(nullptr, input_info);
input_tensor_info = nullptr;
OV_EXPECT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
EXPECT_NE(nullptr, input_tensor_info);
ov_shape_t shape = {0, nullptr};
OV_ASSERT_OK(ov_shape_init(&shape, 4));
shape.dims[0] = 1;
shape.dims[1] = 224;
shape.dims[2] = 224;
shape.dims[3] = 3;
ov_element_type_e type = U8;
OV_ASSERT_OK(ov_tensor_create(type, shape, &input_tensor));
OV_ASSERT_OK(ov_preprocess_inputtensorinfo_set_from(input_tensor_info, input_tensor));
OV_ASSERT_OK(ov_shape_deinit(&shape));
const char* layout_desc = "NHWC";
ov_layout_t* layout = nullptr;
OV_ASSERT_OK(ov_layout_create(&layout, layout_desc));
OV_ASSERT_OK(ov_preprocess_inputtensorinfo_set_layout(input_tensor_info, layout));
ov_layout_free(layout);
input_process = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_preprocess(input_info, &input_process));
ASSERT_NE(nullptr, input_process);
OV_ASSERT_OK(
ov_preprocess_preprocesssteps_resize(input_process, ov_preprocess_resizealgorithm_e::RESIZE_LINEAR));
input_model = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_model(input_info, &input_model));
ASSERT_NE(nullptr, input_model);
ov_layout_t* model_layout = nullptr;
const char* model_layout_desc = "NCHW";
OV_ASSERT_OK(ov_layout_create(&model_layout, model_layout_desc));
OV_ASSERT_OK(ov_preprocess_inputmodelinfo_set_layout(input_model, model_layout));
ov_layout_free(model_layout);
OV_ASSERT_OK(ov_preprocess_prepostprocessor_build(preprocess, &model));
EXPECT_NE(nullptr, model);
compiled_model = nullptr;
OV_EXPECT_OK(ov_core_compile_model(core, model, device_name.c_str(), &compiled_model, nullptr));
EXPECT_NE(nullptr, compiled_model);
infer_request = nullptr;
OV_EXPECT_OK(ov_compiled_model_create_infer_request(compiled_model, &infer_request));
EXPECT_NE(nullptr, infer_request);
}
void TearDown() override {
ov_tensor_free(output_tensor);
ov_tensor_free(input_tensor);
ov_infer_request_free(infer_request);
ov_compiled_model_free(compiled_model);
ov_preprocess_inputmodelinfo_free(input_model);
ov_preprocess_preprocesssteps_free(input_process);
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
public:
ov_core_t* core;
ov_model_t* model;
ov_compiled_model_t* compiled_model;
ov_infer_request_t* infer_request;
ov_tensor_t* input_tensor;
ov_tensor_t* output_tensor;
ov_preprocess_prepostprocessor_t* preprocess;
ov_preprocess_inputinfo_t* input_info;
ov_preprocess_inputtensorinfo_t* input_tensor_info;
ov_preprocess_preprocesssteps_t* input_process;
ov_preprocess_inputmodelinfo_t* input_model;
};
INSTANTIATE_TEST_SUITE_P(device_name, ov_infer_request, ::testing::Values("CPU"));
INSTANTIATE_TEST_SUITE_P(device_name, ov_infer_request_ppp, ::testing::Values("CPU"));
TEST_P(ov_infer_request, set_tensor) {
OV_EXPECT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, input_tensor));
}
TEST_P(ov_infer_request, set_input_tensor) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor(infer_request, 0, input_tensor));
}
TEST_P(ov_infer_request, set_tensor_error_handling) {
OV_EXPECT_NOT_OK(ov_infer_request_set_tensor(nullptr, in_tensor_name, input_tensor));
OV_EXPECT_NOT_OK(ov_infer_request_set_tensor(infer_request, nullptr, input_tensor));
OV_EXPECT_NOT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, nullptr));
}
TEST_P(ov_infer_request, get_tensor) {
OV_EXPECT_OK(ov_infer_request_get_tensor(infer_request, in_tensor_name, &input_tensor));
EXPECT_NE(nullptr, input_tensor);
}
TEST_P(ov_infer_request, get_out_tensor) {
OV_EXPECT_OK(ov_infer_request_get_output_tensor(infer_request, 0, &output_tensor));
}
TEST_P(ov_infer_request, get_tensor_error_handling) {
OV_EXPECT_NOT_OK(ov_infer_request_get_tensor(nullptr, in_tensor_name, &input_tensor));
OV_EXPECT_NOT_OK(ov_infer_request_get_tensor(infer_request, nullptr, &input_tensor));
OV_EXPECT_NOT_OK(ov_infer_request_get_tensor(infer_request, in_tensor_name, nullptr));
}
TEST_P(ov_infer_request, infer) {
OV_EXPECT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, input_tensor));
OV_EXPECT_OK(ov_infer_request_infer(infer_request));
char* out_tensor_name = nullptr;
ov_shape_t tensor_shape = {0, nullptr};
ov_element_type_e tensor_type;
get_tensor_info(model, false, 0, &out_tensor_name, &tensor_shape, &tensor_type);
OV_EXPECT_OK(ov_infer_request_get_tensor(infer_request, out_tensor_name, &output_tensor));
EXPECT_NE(nullptr, output_tensor);
ov_shape_deinit(&tensor_shape);
ov_free(out_tensor_name);
}
TEST_P(ov_infer_request, cancel) {
OV_EXPECT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, input_tensor));
OV_EXPECT_OK(ov_infer_request_cancel(infer_request));
}
TEST_P(ov_infer_request_ppp, infer_ppp) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor(infer_request, 0, input_tensor));
OV_EXPECT_OK(ov_infer_request_infer(infer_request));
OV_EXPECT_OK(ov_infer_request_get_output_tensor(infer_request, 0, &output_tensor));
EXPECT_NE(nullptr, output_tensor);
}
TEST(ov_infer_request, infer_error_handling) {
OV_EXPECT_NOT_OK(ov_infer_request_infer(nullptr));
}
TEST_P(ov_infer_request, infer_async) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor(infer_request, 0, input_tensor));
OV_EXPECT_OK(ov_infer_request_start_async(infer_request));
if (!HasFatalFailure()) {
OV_EXPECT_OK(ov_infer_request_wait(infer_request));
OV_EXPECT_OK(ov_infer_request_get_output_tensor(infer_request, 0, &output_tensor));
EXPECT_NE(nullptr, output_tensor);
}
}
TEST_P(ov_infer_request_ppp, infer_async_ppp) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor(infer_request, 0, input_tensor));
OV_EXPECT_OK(ov_infer_request_start_async(infer_request));
if (!HasFatalFailure()) {
OV_EXPECT_OK(ov_infer_request_wait(infer_request));
OV_EXPECT_OK(ov_infer_request_get_output_tensor(infer_request, 0, &output_tensor));
EXPECT_NE(nullptr, output_tensor);
}
}
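// Completion callback for the async test below: fetch the output tensor on
// the callback thread, then wake the test thread waiting on the condition variable.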
void infer_request_callback(void* args) {
ov_infer_request_t* infer_request = (ov_infer_request_t*)args;
ov_tensor_t* out_tensor = nullptr;
OV_EXPECT_OK(ov_infer_request_get_output_tensor(infer_request, 0, &out_tensor));
EXPECT_NE(nullptr, out_tensor);
ov_tensor_free(out_tensor);
std::lock_guard<std::mutex> lock(ov_infer_request::m);
ov_infer_request::ready = true;
ov_infer_request::condVar.notify_one();
}
TEST_P(ov_infer_request, infer_request_set_callback) {
OV_EXPECT_OK(ov_infer_request_set_input_tensor(infer_request, 0, input_tensor));
ov_callback_t callback;
callback.callback_func = infer_request_callback;
callback.args = infer_request;
OV_EXPECT_OK(ov_infer_request_set_callback(infer_request, &callback));
OV_EXPECT_OK(ov_infer_request_start_async(infer_request));
if (!HasFatalFailure()) {
std::unique_lock<std::mutex> lock(ov_infer_request::m);
ov_infer_request::condVar.wait(lock, [] {
return ov_infer_request::ready;
});
}
}
TEST_P(ov_infer_request, get_profiling_info) {
auto device_name = GetParam();
OV_EXPECT_OK(ov_infer_request_set_tensor(infer_request, in_tensor_name, input_tensor));
OV_EXPECT_OK(ov_infer_request_infer(infer_request));
OV_EXPECT_OK(ov_infer_request_get_output_tensor(infer_request, 0, &output_tensor));
EXPECT_NE(nullptr, output_tensor);
ov_profiling_info_list_t profiling_infos;
profiling_infos.size = 0;
profiling_infos.profiling_infos = nullptr;
OV_EXPECT_OK(ov_infer_request_get_profiling_info(infer_request, &profiling_infos));
EXPECT_NE(0, profiling_infos.size);
EXPECT_NE(nullptr, profiling_infos.profiling_infos);
ov_profiling_info_list_free(&profiling_infos);
}


@@ -0,0 +1,30 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "ov_test.hpp"
TEST(ov_layout, ov_layout_create_static_layout) {
const char* str = "[N,C,H,W]";
const char* desc = "NCHW";
ov_layout_t* layout = nullptr;
OV_ASSERT_OK(ov_layout_create(&layout, desc));
const char* res = ov_layout_to_string(layout);
EXPECT_STREQ(res, str);
ov_layout_free(layout);
ov_free(res);
}
TEST(ov_layout, ov_layout_create_dynamic_layout) {
const char* str = "[N,...,C]";
const char* desc = "N...C";
ov_layout_t* layout = nullptr;
OV_ASSERT_OK(ov_layout_create(&layout, desc));
const char* res = ov_layout_to_string(layout);
EXPECT_STREQ(res, str);
ov_layout_free(layout);
ov_free(res);
}


@@ -0,0 +1,163 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "ov_test.hpp"
TEST(ov_model, ov_model_outputs) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_output_node_list_t output_node_list;
output_node_list.output_nodes = nullptr;
OV_ASSERT_OK(ov_model_outputs(model, &output_node_list));
ASSERT_NE(nullptr, output_node_list.output_nodes);
ov_output_node_list_free(&output_node_list);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_model, ov_model_inputs) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_output_node_list_t input_node_list;
input_node_list.output_nodes = nullptr;
OV_ASSERT_OK(ov_model_inputs(model, &input_node_list));
ASSERT_NE(nullptr, input_node_list.output_nodes);
ov_output_node_list_free(&input_node_list);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_model, ov_model_input_by_name) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_output_const_node_t* input_node = nullptr;
OV_ASSERT_OK(ov_model_input_by_name(model, "data", &input_node));
ASSERT_NE(nullptr, input_node);
ov_shape_t shape;
OV_ASSERT_OK(ov_node_get_shape(input_node, &shape));
ov_shape_deinit(&shape);
ov_output_node_free(input_node);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_model, ov_model_input_by_index) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_output_const_node_t* input_node = nullptr;
OV_ASSERT_OK(ov_model_input_by_index(model, 0, &input_node));
ASSERT_NE(nullptr, input_node);
ov_shape_t shape;
OV_ASSERT_OK(ov_node_get_shape(input_node, &shape));
ov_shape_deinit(&shape);
ov_output_node_free(input_node);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_model, ov_model_is_dynamic) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ASSERT_NO_THROW(ov_model_is_dynamic(model));
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_model, ov_model_reshape_input_by_name) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_output_node_list_t input_node_list1;
input_node_list1.output_nodes = nullptr;
OV_ASSERT_OK(ov_model_inputs(model, &input_node_list1));
ASSERT_NE(nullptr, input_node_list1.output_nodes);
char* tensor_name = nullptr;
OV_ASSERT_OK(ov_node_list_get_any_name_by_index(&input_node_list1, 0, &tensor_name));
ov_shape_t shape = {0, nullptr};
OV_ASSERT_OK(ov_shape_init(&shape, 4));
shape.dims[0] = 1;
shape.dims[1] = 3;
shape.dims[2] = 896;
shape.dims[3] = 896;
ov_partial_shape_t* partial_shape = nullptr;
OV_ASSERT_OK(ov_shape_to_partial_shape(&shape, &partial_shape));
OV_ASSERT_OK(ov_model_reshape_input_by_name(model, tensor_name, partial_shape));
ov_output_node_list_t input_node_list2;
input_node_list2.output_nodes = nullptr;
OV_ASSERT_OK(ov_model_inputs(model, &input_node_list2));
ASSERT_NE(nullptr, input_node_list2.output_nodes);
EXPECT_NE(input_node_list1.output_nodes, input_node_list2.output_nodes);
ov_shape_deinit(&shape);
ov_partial_shape_free(partial_shape);
ov_free(tensor_name);
ov_output_node_list_free(&input_node_list1);
ov_output_node_list_free(&input_node_list2);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_model, ov_model_get_friendly_name) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
char* friendly_name = nullptr;
OV_ASSERT_OK(ov_model_get_friendly_name(model, &friendly_name));
ASSERT_NE(nullptr, friendly_name);
ov_free(friendly_name);
ov_model_free(model);
ov_core_free(core);
}


@@ -0,0 +1,177 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "ov_test.hpp"
TEST(ov_partial_shape, ov_partial_shape_init_and_parse) {
const char* str = "{1,20,300,40..100}";
ov_partial_shape_t* partial_shape = nullptr;
ov_rank_t* rank = nullptr;
ov_dimensions_t* dims = nullptr;
OV_ASSERT_OK(ov_rank_create(&rank, 4));
OV_ASSERT_OK(ov_dimensions_create(&dims));
OV_ASSERT_OK(ov_dimensions_add(dims, 1));
OV_ASSERT_OK(ov_dimensions_add(dims, 20));
OV_ASSERT_OK(ov_dimensions_add(dims, 300));
OV_ASSERT_OK(ov_dimensions_add_dynamic(dims, 40, 100));
OV_ASSERT_OK(ov_partial_shape_create(&partial_shape, rank, dims));
auto tmp = ov_partial_shape_to_string(partial_shape);
EXPECT_STREQ(tmp, str);
ov_free(tmp);
ov_rank_free(rank);
ov_dimensions_free(dims);
ov_partial_shape_free(partial_shape);
}
TEST(ov_partial_shape, ov_partial_shape_init_and_parse_dynamic) {
const char* str = "{1,?,300,40..100}";
ov_partial_shape_t* partial_shape = nullptr;
ov_rank_t* rank = nullptr;
ov_dimensions_t* dims = nullptr;
OV_ASSERT_OK(ov_rank_create(&rank, 4));
OV_ASSERT_OK(ov_dimensions_create(&dims));
OV_ASSERT_OK(ov_dimensions_add(dims, 1));
OV_ASSERT_OK(ov_dimensions_add_dynamic(dims, -1, -1));
OV_ASSERT_OK(ov_dimensions_add(dims, 300));
OV_ASSERT_OK(ov_dimensions_add_dynamic(dims, 40, 100));
OV_ASSERT_OK(ov_partial_shape_create(&partial_shape, rank, dims));
auto tmp = ov_partial_shape_to_string(partial_shape);
EXPECT_STREQ(tmp, str);
ov_free(tmp);
ov_rank_free(rank);
ov_dimensions_free(dims);
ov_partial_shape_free(partial_shape);
}
TEST(ov_partial_shape, ov_partial_shape_init_and_parse_dynamic_mix) {
const char* str = "{1,?,?,40..100}";
ov_partial_shape_t* partial_shape = nullptr;
ov_rank_t* rank = nullptr;
ov_dimensions_t* dims = nullptr;
OV_ASSERT_OK(ov_rank_create(&rank, 4));
OV_ASSERT_OK(ov_dimensions_create(&dims));
OV_ASSERT_OK(ov_dimensions_add(dims, 1));
OV_ASSERT_OK(ov_dimensions_add_dynamic(dims, -1, -1));
OV_ASSERT_OK(ov_dimensions_add_dynamic(dims, -1, -1));
OV_ASSERT_OK(ov_dimensions_add_dynamic(dims, 40, 100));
OV_ASSERT_OK(ov_partial_shape_create(&partial_shape, rank, dims));
auto tmp = ov_partial_shape_to_string(partial_shape);
EXPECT_STREQ(tmp, str);
ov_free(tmp);
ov_rank_free(rank);
ov_dimensions_free(dims);
ov_partial_shape_free(partial_shape);
}
TEST(ov_partial_shape, ov_partial_shape_init_and_parse_dynamic_rank) {
const char* str = "?";
ov_partial_shape_t* partial_shape = nullptr;
ov_rank_t* rank = nullptr;
OV_ASSERT_OK(ov_rank_create_dynamic(&rank, -1, -1));
OV_ASSERT_OK(ov_partial_shape_create(&partial_shape, rank, nullptr));
auto tmp = ov_partial_shape_to_string(partial_shape);
EXPECT_STREQ(tmp, str);
ov_free(tmp);
ov_rank_free(rank);
ov_partial_shape_free(partial_shape);
}
TEST(ov_partial_shape, ov_partial_shape_init_and_parse_invalid) {
ov_partial_shape_t* partial_shape = nullptr;
ov_rank_t* rank = nullptr;
ov_dimensions_t* dims = nullptr;
OV_ASSERT_OK(ov_rank_create(&rank, 3));
OV_ASSERT_OK(ov_dimensions_create(&dims));
OV_ASSERT_OK(ov_dimensions_add(dims, 1));
OV_ASSERT_OK(ov_dimensions_add_dynamic(dims, -1, -1));
OV_ASSERT_OK(ov_dimensions_add(dims, 300));
OV_ASSERT_OK(ov_dimensions_add_dynamic(dims, 40, 100));
OV_EXPECT_NOT_OK(ov_partial_shape_create(&partial_shape, rank, dims));
ov_rank_free(rank);
ov_dimensions_free(dims);
ov_partial_shape_free(partial_shape);
}
TEST(ov_partial_shape, ov_partial_shape_to_shape) {
ov_partial_shape_t* partial_shape = nullptr;
ov_rank_t* rank = nullptr;
ov_dimensions_t* dims = nullptr;
OV_ASSERT_OK(ov_rank_create(&rank, 5));
OV_ASSERT_OK(ov_dimensions_create(&dims));
OV_ASSERT_OK(ov_dimensions_add(dims, 10));
OV_ASSERT_OK(ov_dimensions_add(dims, 20));
OV_ASSERT_OK(ov_dimensions_add(dims, 30));
OV_ASSERT_OK(ov_dimensions_add(dims, 40));
OV_ASSERT_OK(ov_dimensions_add(dims, 50));
OV_EXPECT_OK(ov_partial_shape_create(&partial_shape, rank, dims));
ov_shape_t shape;
OV_ASSERT_OK(ov_partial_shape_to_shape(partial_shape, &shape));
EXPECT_EQ(shape.rank, 5);
EXPECT_EQ(shape.dims[0], 10);
EXPECT_EQ(shape.dims[1], 20);
EXPECT_EQ(shape.dims[2], 30);
EXPECT_EQ(shape.dims[3], 40);
EXPECT_EQ(shape.dims[4], 50);
ov_shape_deinit(&shape);
ov_rank_free(rank);
ov_dimensions_free(dims);
ov_partial_shape_free(partial_shape);
}
TEST(ov_partial_shape, ov_partial_shape_to_shape_invalid) {
ov_partial_shape_t* partial_shape = nullptr;
ov_rank_t* rank = nullptr;
ov_dimensions_t* dims = nullptr;
OV_ASSERT_OK(ov_rank_create(&rank, 5));
OV_ASSERT_OK(ov_dimensions_create(&dims));
OV_ASSERT_OK(ov_dimensions_add(dims, 10));
OV_ASSERT_OK(ov_dimensions_add_dynamic(dims, -1, -1));
OV_ASSERT_OK(ov_dimensions_add(dims, 30));
OV_ASSERT_OK(ov_dimensions_add(dims, 40));
OV_ASSERT_OK(ov_dimensions_add(dims, 50));
OV_ASSERT_OK(ov_partial_shape_create(&partial_shape, rank, dims));
ov_shape_t shape;
shape.rank = 0;
// A partial shape containing a dynamic dimension cannot convert to a static shape.
OV_EXPECT_NOT_OK(ov_partial_shape_to_shape(partial_shape, &shape));
ov_rank_free(rank);
ov_dimensions_free(dims);
ov_partial_shape_free(partial_shape);
}
TEST(ov_partial_shape, ov_shape_to_partial_shape) {
const char* str = "{10,20,30,40,50}";
ov_shape_t shape;
OV_ASSERT_OK(ov_shape_init(&shape, 5));
shape.dims[0] = 10;
shape.dims[1] = 20;
shape.dims[2] = 30;
shape.dims[3] = 40;
shape.dims[4] = 50;
ov_partial_shape_t* partial_shape = nullptr;
OV_ASSERT_OK(ov_shape_to_partial_shape(&shape, &partial_shape));
auto tmp = ov_partial_shape_to_string(partial_shape);
EXPECT_STREQ(tmp, str);
ov_partial_shape_free(partial_shape);
ov_shape_deinit(&shape);
}


@@ -0,0 +1,706 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "ov_test.hpp"
TEST(ov_preprocess, ov_preprocess_prepostprocessor_create) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_prepostprocessor_input) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input(preprocess, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_prepostprocessor_input_by_name) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_name(preprocess, "data", &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_prepostprocessor_input_by_index) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_inputinfo_tensor) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputtensorinfo_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_inputinfo_preprocess) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_preprocesssteps_t* input_process = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_preprocess(input_info, &input_process));
ASSERT_NE(nullptr, input_process);
ov_preprocess_preprocesssteps_free(input_process);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_preprocesssteps_resize) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_preprocesssteps_t* input_process = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_preprocess(input_info, &input_process));
ASSERT_NE(nullptr, input_process);
OV_ASSERT_OK(ov_preprocess_preprocesssteps_resize(input_process, ov_preprocess_resizealgorithm_e::RESIZE_LINEAR));
ov_preprocess_preprocesssteps_free(input_process);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_inputtensorinfo_set_element_type) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputtensorinfo_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
OV_ASSERT_OK(ov_preprocess_inputtensorinfo_set_element_type(input_tensor_info, ov_element_type_e::F32));
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_inputtensorinfo_set_from) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputtensorinfo_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
ov_tensor_t* tensor = nullptr;
ov_shape_t shape;
OV_ASSERT_OK(ov_shape_init(&shape, 4));
shape.dims[0] = 1;
shape.dims[1] = 416;
shape.dims[2] = 416;
shape.dims[3] = 3;
OV_ASSERT_OK(ov_tensor_create(ov_element_type_e::F32, shape, &tensor));
OV_ASSERT_OK(ov_preprocess_inputtensorinfo_set_from(input_tensor_info, tensor));
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_tensor_free(tensor);
ov_shape_deinit(&shape);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_inputtensorinfo_set_layout) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputtensorinfo_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
ov_layout_t* layout = nullptr;
const char* input_layout_desc = "NCHW";
OV_ASSERT_OK(ov_layout_create(&layout, input_layout_desc));
OV_ASSERT_OK(ov_preprocess_inputtensorinfo_set_layout(input_tensor_info, layout));
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_layout_free(layout);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_inputtensorinfo_set_color_format) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputtensorinfo_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
OV_ASSERT_OK(
ov_preprocess_inputtensorinfo_set_color_format(input_tensor_info, ov_color_format_e::NV12_SINGLE_PLANE));
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_inputtensorinfo_set_spatial_static_shape) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputtensorinfo_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
size_t input_height = 500;
size_t input_width = 500;
OV_ASSERT_OK(ov_preprocess_inputtensorinfo_set_spatial_static_shape(input_tensor_info, input_height, input_width));
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_preprocesssteps_convert_element_type) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_preprocesssteps_t* input_process = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_preprocess(input_info, &input_process));
ASSERT_NE(nullptr, input_process);
ov_preprocess_inputtensorinfo_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
OV_ASSERT_OK(ov_preprocess_inputtensorinfo_set_element_type(input_tensor_info, ov_element_type_e::U8));
OV_ASSERT_OK(ov_preprocess_preprocesssteps_convert_element_type(input_process, ov_element_type_e::F32));
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_preprocess_preprocesssteps_free(input_process);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_preprocesssteps_convert_color) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_preprocesssteps_t* input_process = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_preprocess(input_info, &input_process));
ASSERT_NE(nullptr, input_process);
ov_preprocess_inputtensorinfo_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
OV_ASSERT_OK(
ov_preprocess_inputtensorinfo_set_color_format(input_tensor_info, ov_color_format_e::NV12_SINGLE_PLANE));
OV_ASSERT_OK(ov_preprocess_preprocesssteps_convert_color(input_process, ov_color_format_e::BGR));
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_preprocess_preprocesssteps_free(input_process);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_prepostprocessor_output) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_outputinfo_t* output_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_output(preprocess, &output_info));
ASSERT_NE(nullptr, output_info);
ov_preprocess_outputinfo_free(output_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_prepostprocessor_output_by_index) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_outputinfo_t* output_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_output_by_index(preprocess, 0, &output_info));
ASSERT_NE(nullptr, output_info);
ov_preprocess_outputinfo_free(output_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_prepostprocessor_output_by_name) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_outputinfo_t* output_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_output_by_name(preprocess, "fc_out", &output_info));
ASSERT_NE(nullptr, output_info);
ov_preprocess_outputinfo_free(output_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_outputinfo_tensor) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_outputinfo_t* output_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_output_by_index(preprocess, 0, &output_info));
ASSERT_NE(nullptr, output_info);
ov_preprocess_outputtensorinfo_t* output_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_outputinfo_tensor(output_info, &output_tensor_info));
ASSERT_NE(nullptr, output_tensor_info);
ov_preprocess_outputtensorinfo_free(output_tensor_info);
ov_preprocess_outputinfo_free(output_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_output_set_element_type) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_outputinfo_t* output_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_output_by_index(preprocess, 0, &output_info));
ASSERT_NE(nullptr, output_info);
ov_preprocess_outputtensorinfo_t* output_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_outputinfo_tensor(output_info, &output_tensor_info));
ASSERT_NE(nullptr, output_tensor_info);
OV_ASSERT_OK(ov_preprocess_output_set_element_type(output_tensor_info, ov_element_type_e::F32));
ov_preprocess_outputtensorinfo_free(output_tensor_info);
ov_preprocess_outputinfo_free(output_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_inputinfo_model) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputmodelinfo_t* input_model = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_model(input_info, &input_model));
ASSERT_NE(nullptr, input_model);
ov_preprocess_inputmodelinfo_free(input_model);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_inputmodelinfo_set_layout) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputmodelinfo_t* input_model = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_model(input_info, &input_model));
ASSERT_NE(nullptr, input_model);
ov_layout_t* layout = nullptr;
const char* layout_desc = "NCHW";
OV_ASSERT_OK(ov_layout_create(&layout, layout_desc));
OV_ASSERT_OK(ov_preprocess_inputmodelinfo_set_layout(input_model, layout));
ov_layout_free(layout);
ov_preprocess_inputmodelinfo_free(input_model);
ov_preprocess_inputinfo_free(input_info);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_prepostprocessor_build) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_model_t* new_model = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_build(preprocess, &new_model));
ASSERT_NE(nullptr, new_model);
ov_model_free(new_model);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}
TEST(ov_preprocess, ov_preprocess_prepostprocessor_build_apply) {
ov_core_t* core = nullptr;
OV_ASSERT_OK(ov_core_create(&core));
ASSERT_NE(nullptr, core);
ov_model_t* model = nullptr;
OV_ASSERT_OK(ov_core_read_model(core, xml, bin, &model));
ASSERT_NE(nullptr, model);
ov_preprocess_prepostprocessor_t* preprocess = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_create(model, &preprocess));
ASSERT_NE(nullptr, preprocess);
ov_preprocess_inputinfo_t* input_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_input_by_index(preprocess, 0, &input_info));
ASSERT_NE(nullptr, input_info);
ov_preprocess_inputtensorinfo_t* input_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_tensor(input_info, &input_tensor_info));
ASSERT_NE(nullptr, input_tensor_info);
ov_tensor_t* tensor = nullptr;
ov_shape_t shape;
OV_ASSERT_OK(ov_shape_init(&shape, 4));
shape.dims[0] = 1;
shape.dims[1] = 416;
shape.dims[2] = 416;
shape.dims[3] = 3;
OV_ASSERT_OK(ov_tensor_create(ov_element_type_e::U8, shape, &tensor));
OV_ASSERT_OK(ov_preprocess_inputtensorinfo_set_from(input_tensor_info, tensor));
const char* layout_desc = "NHWC";
ov_layout_t* layout = nullptr;
OV_ASSERT_OK(ov_layout_create(&layout, layout_desc));
OV_ASSERT_OK(ov_preprocess_inputtensorinfo_set_layout(input_tensor_info, layout));
ov_layout_free(layout);
ov_preprocess_preprocesssteps_t* input_process = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_preprocess(input_info, &input_process));
ASSERT_NE(nullptr, input_process);
OV_ASSERT_OK(ov_preprocess_preprocesssteps_resize(input_process, ov_preprocess_resizealgorithm_e::RESIZE_LINEAR));
ov_preprocess_inputmodelinfo_t* input_model = nullptr;
OV_ASSERT_OK(ov_preprocess_inputinfo_model(input_info, &input_model));
ASSERT_NE(nullptr, input_model);
const char* model_layout_desc = "NCHW";
ov_layout_t* model_layout = nullptr;
OV_ASSERT_OK(ov_layout_create(&model_layout, model_layout_desc));
OV_ASSERT_OK(ov_preprocess_inputmodelinfo_set_layout(input_model, model_layout));
ov_layout_free(model_layout);
ov_preprocess_outputinfo_t* output_info = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_output_by_index(preprocess, 0, &output_info));
ASSERT_NE(nullptr, output_info);
ov_preprocess_outputtensorinfo_t* output_tensor_info = nullptr;
OV_ASSERT_OK(ov_preprocess_outputinfo_tensor(output_info, &output_tensor_info));
ASSERT_NE(nullptr, output_tensor_info);
OV_ASSERT_OK(ov_preprocess_output_set_element_type(output_tensor_info, ov_element_type_e::F32));
ov_model_t* new_model = nullptr;
OV_ASSERT_OK(ov_preprocess_prepostprocessor_build(preprocess, &new_model));
ASSERT_NE(nullptr, new_model);
ov_preprocess_inputtensorinfo_free(input_tensor_info);
ov_tensor_free(tensor);
ov_shape_deinit(&shape);
ov_preprocess_preprocesssteps_free(input_process);
ov_preprocess_inputmodelinfo_free(input_model);
ov_preprocess_outputtensorinfo_free(output_tensor_info);
ov_preprocess_outputinfo_free(output_info);
ov_preprocess_inputinfo_free(input_info);
ov_model_free(new_model);
ov_preprocess_prepostprocessor_free(preprocess);
ov_model_free(model);
ov_core_free(core);
}


@@ -0,0 +1,163 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "ov_test.hpp"
void setup_4d_shape(ov_shape_t* shape, int64_t d0, int64_t d1, int64_t d2, int64_t d3) {
ov_shape_init(shape, 4);
shape->dims[0] = d0;
shape->dims[1] = d1;
shape->dims[2] = d2;
shape->dims[3] = d3;
}
TEST(ov_tensor, ov_tensor_create) {
ov_element_type_e type = ov_element_type_e::U8;
ov_shape_t shape;
setup_4d_shape(&shape, 10, 20, 30, 40);
ov_tensor_t* tensor = nullptr;
OV_EXPECT_OK(ov_tensor_create(type, shape, &tensor));
EXPECT_NE(nullptr, tensor);
ov_tensor_free(tensor);
ov_shape_deinit(&shape);
}
TEST(ov_tensor, ov_tensor_create_from_host_ptr) {
ov_element_type_e type = ov_element_type_e::U8;
ov_shape_t shape;
setup_4d_shape(&shape, 1, 3, 4, 4);
uint8_t host_ptr[1][3][4][4] = {0};
ov_tensor_t* tensor = nullptr;
OV_EXPECT_OK(ov_tensor_create_from_host_ptr(type, shape, &host_ptr, &tensor));
EXPECT_NE(nullptr, tensor);
ov_tensor_free(tensor);
ov_shape_deinit(&shape);
}
TEST(ov_tensor, ov_tensor_get_shape) {
ov_element_type_e type = ov_element_type_e::U8;
ov_shape_t shape;
setup_4d_shape(&shape, 10, 20, 30, 40);
ov_tensor_t* tensor = nullptr;
OV_EXPECT_OK(ov_tensor_create(type, shape, &tensor));
EXPECT_NE(nullptr, tensor);
ov_shape_t shape_res;
OV_EXPECT_OK(ov_tensor_get_shape(tensor, &shape_res));
EXPECT_EQ(shape.rank, shape_res.rank);
EXPECT_EQ(shape.dims[0], shape_res.dims[0]);
EXPECT_EQ(shape.dims[1], shape_res.dims[1]);
EXPECT_EQ(shape.dims[2], shape_res.dims[2]);
EXPECT_EQ(shape.dims[3], shape_res.dims[3]);
ov_shape_deinit(&shape);
ov_shape_deinit(&shape_res);
ov_tensor_free(tensor);
}
TEST(ov_tensor, ov_tensor_set_shape) {
ov_element_type_e type = ov_element_type_e::U8;
ov_shape_t shape;
setup_4d_shape(&shape, 1, 1, 1, 1);
ov_tensor_t* tensor = nullptr;
OV_EXPECT_OK(ov_tensor_create(type, shape, &tensor));
EXPECT_NE(nullptr, tensor);
ov_shape_t shape_update;
setup_4d_shape(&shape_update, 10, 20, 30, 40);
OV_EXPECT_OK(ov_tensor_set_shape(tensor, shape_update));
ov_shape_t shape_res;
OV_EXPECT_OK(ov_tensor_get_shape(tensor, &shape_res));
EXPECT_EQ(shape_update.rank, shape_res.rank);
EXPECT_EQ(shape_update.dims[0], shape_res.dims[0]);
EXPECT_EQ(shape_update.dims[1], shape_res.dims[1]);
EXPECT_EQ(shape_update.dims[2], shape_res.dims[2]);
EXPECT_EQ(shape_update.dims[3], shape_res.dims[3]);
ov_shape_deinit(&shape_update);
ov_shape_deinit(&shape_res);
ov_shape_deinit(&shape);
ov_tensor_free(tensor);
}
TEST(ov_tensor, ov_tensor_get_element_type) {
ov_element_type_e type = ov_element_type_e::U8;
ov_shape_t shape;
setup_4d_shape(&shape, 10, 20, 30, 40);
ov_tensor_t* tensor = nullptr;
OV_EXPECT_OK(ov_tensor_create(type, shape, &tensor));
EXPECT_NE(nullptr, tensor);
ov_element_type_e type_res;
OV_EXPECT_OK(ov_tensor_get_element_type(tensor, &type_res));
EXPECT_EQ(type, type_res);
ov_shape_deinit(&shape);
ov_tensor_free(tensor);
}
static size_t product(const std::vector<size_t>& dims) {
if (dims.empty())
return 0;
return std::accumulate(std::begin(dims), std::end(dims), (size_t)1, std::multiplies<size_t>());
}
size_t calculate_size(ov_shape_t shape) {
std::vector<size_t> tmp_shape;
std::copy_n(shape.dims, shape.rank, std::back_inserter(tmp_shape));
return product(tmp_shape);
}
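// Element sizes in element_type_size_map are expressed in bits; round the total
// up to whole bytes so sub-byte types such as U1 and I4 are counted correctly.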
size_t calculate_byteSize(ov_shape_t shape, ov_element_type_e type) {
return (calculate_size(shape) * GET_ELEMENT_TYPE_SIZE(type) + 7) >> 3;
}
TEST(ov_tensor, ov_tensor_get_size) {
ov_element_type_e type = ov_element_type_e::I16;
ov_shape_t shape;
setup_4d_shape(&shape, 1, 3, 4, 4);
ov_tensor_t* tensor = nullptr;
OV_EXPECT_OK(ov_tensor_create(type, shape, &tensor));
EXPECT_NE(nullptr, tensor);
size_t size = calculate_size(shape);
size_t size_res;
OV_EXPECT_OK(ov_tensor_get_size(tensor, &size_res));
EXPECT_EQ(size_res, size);
ov_shape_deinit(&shape);
ov_tensor_free(tensor);
}
TEST(ov_tensor, ov_tensor_get_byte_size) {
ov_element_type_e type = ov_element_type_e::I16;
ov_shape_t shape;
setup_4d_shape(&shape, 1, 3, 4, 4);
ov_tensor_t* tensor = nullptr;
OV_EXPECT_OK(ov_tensor_create(type, shape, &tensor));
EXPECT_NE(nullptr, tensor);
size_t size = calculate_byteSize(shape, type);
size_t size_res;
OV_EXPECT_OK(ov_tensor_get_byte_size(tensor, &size_res));
EXPECT_EQ(size_res, size);
ov_shape_deinit(&shape);
ov_tensor_free(tensor);
}
TEST(ov_tensor, ov_tensor_data) {
ov_element_type_e type = ov_element_type_e::U8;
ov_shape_t shape;
setup_4d_shape(&shape, 10, 20, 30, 40);
ov_tensor_t* tensor = nullptr;
OV_EXPECT_OK(ov_tensor_create(type, shape, &tensor));
EXPECT_NE(nullptr, tensor);
void* data = nullptr;
OV_EXPECT_OK(ov_tensor_data(tensor, &data));
EXPECT_NE(nullptr, data);
ov_shape_deinit(&shape);
ov_tensor_free(tensor);
}


@@ -0,0 +1,42 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "ov_test.hpp"
#include "test_model_repo.hpp"
std::string xml_std = TestDataHelpers::generate_model_path("test_model", "test_model_fp32.xml");
std::string bin_std = TestDataHelpers::generate_model_path("test_model", "test_model_fp32.bin");
const char* xml = xml_std.c_str();
const char* bin = bin_std.c_str();
#ifdef _WIN32
# ifdef __MINGW32__
std::string plugins_xml_std = TestDataHelpers::generate_ieclass_xml_path("plugins_mingw.xml");
# else
std::string plugins_xml_std = TestDataHelpers::generate_ieclass_xml_path("plugins_win.xml");
# endif
#elif defined __APPLE__
std::string plugins_xml_std = TestDataHelpers::generate_ieclass_xml_path("plugins_apple.xml");
#else
std::string plugins_xml_std = TestDataHelpers::generate_ieclass_xml_path("plugins.xml");
#endif
const char* plugins_xml = plugins_xml_std.c_str();
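// Bit width of each element type, consumed through GET_ELEMENT_TYPE_SIZE when
// tests compute expected tensor byte sizes.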
std::map<ov_element_type_e, size_t> element_type_size_map = {{ov_element_type_e::BOOLEAN, 8},
{ov_element_type_e::BF16, 16},
{ov_element_type_e::F16, 16},
{ov_element_type_e::F32, 32},
{ov_element_type_e::F64, 64},
{ov_element_type_e::I4, 4},
{ov_element_type_e::I8, 8},
{ov_element_type_e::I16, 16},
{ov_element_type_e::I32, 32},
{ov_element_type_e::I64, 64},
{ov_element_type_e::U1, 1},
{ov_element_type_e::U4, 4},
{ov_element_type_e::U8, 8},
{ov_element_type_e::U16, 16},
{ov_element_type_e::U32, 32},
{ov_element_type_e::U64, 64}};


@@ -0,0 +1,52 @@
// Copyright (C) 2018-2022 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <gtest/gtest.h>
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <condition_variable>
#include <cstring>
#include <fstream>
#include <map>
#include <vector>
#include "openvino/c/openvino.h"
#include "openvino/openvino.hpp"
extern const char* xml;
extern const char* bin;
extern const char* input_image;
extern const char* input_image_nv12;
extern const char* plugins_xml;
#define OV_EXPECT_OK(...) EXPECT_EQ(ov_status_e::OK, __VA_ARGS__)
#define OV_ASSERT_OK(...) ASSERT_EQ(ov_status_e::OK, __VA_ARGS__)
#define OV_EXPECT_NOT_OK(...) EXPECT_NE(ov_status_e::OK, __VA_ARGS__)
#define OV_EXPECT_ARREQ(arr1, arr2) EXPECT_TRUE(std::equal(std::begin(arr1), std::end(arr1), std::begin(arr2)))
extern std::map<ov_element_type_e, size_t> element_type_size_map;
#define GET_ELEMENT_TYPE_SIZE(a) element_type_size_map[a]
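// Returns the index of the first available device whose name contains
// device_name, or (size_t)-1 when no such device is present.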
inline size_t find_device(ov_available_devices_t avai_devices, const char* device_name) {
for (size_t i = 0; i < avai_devices.size; ++i) {
if (strstr(avai_devices.devices[i], device_name))
return i;
}
return -1;
}
inline static std::vector<uint8_t> content_from_file(const char* filename, bool is_binary) {
std::vector<uint8_t> result;
{
std::ifstream is(filename, is_binary ? std::ifstream::binary | std::ifstream::in : std::ifstream::in);
if (is) {
is.seekg(0, std::ifstream::end);
result.resize(is.tellg());
if (result.size() > 0) {
is.seekg(0, std::ifstream::beg);
is.read(reinterpret_cast<char*>(&result[0]), result.size());
}
}
}
return result;
}


@@ -6,12 +6,12 @@ namespace TestDataHelpers {
 static const char kPathSeparator =
 #if defined _WIN32 || defined __CYGWIN__
-'\\';
+'\\';
 #else
-'/';
+'/';
 #endif
-std::string getModelPathNonFatal() noexcept {
+inline std::string getModelPathNonFatal() noexcept {
 if (const auto envVar = std::getenv("MODELS_PATH")) {
 return envVar;
 }
@@ -23,11 +23,11 @@ std::string getModelPathNonFatal() noexcept {
 #endif
 }
-std::string get_models_path() {
+inline std::string get_models_path() {
 return getModelPathNonFatal() + kPathSeparator + std::string("models");
 };
-std::string get_data_path() {
+inline std::string get_data_path() {
 if (const auto envVar = std::getenv("DATA_PATH")) {
 return envVar;
 }
@@ -39,15 +39,15 @@ std::string get_data_path() {
 #endif
 }
-std::string generate_model_path(std::string dir, std::string filename) {
+inline std::string generate_model_path(std::string dir, std::string filename) {
 return get_models_path() + kPathSeparator + dir + kPathSeparator + filename;
 }
-std::string generate_image_path(std::string dir, std::string filename) {
+inline std::string generate_image_path(std::string dir, std::string filename) {
 return get_data_path() + kPathSeparator + "validation_set" + kPathSeparator + dir + kPathSeparator + filename;
 }
-std::string generate_ieclass_xml_path(std::string filename) {
+inline std::string generate_ieclass_xml_path(std::string filename) {
 return getModelPathNonFatal() + kPathSeparator + "ie_class" + kPathSeparator + filename;
 }
-} // namespace TestDataHelpers
+} // namespace TestDataHelpers


@@ -160,10 +160,8 @@ class SamplesCommonTestClass():
 executable_path = 'python ' + executable_path
 else:
 executable_path = 'python3 ' + executable_path
-elif 'c' in sample_type.lower() and not 'c++' in sample_type.lower() and not '2.0' in sample_type:
+elif 'c' in sample_type.lower() and not 'c++' in sample_type.lower():
 executable_path += '_c'
-elif '2.0' in sample_type:
-executable_path += '_ov_c'
 if is_windows and not 'python' in sample_type.lower():
 executable_path += '.exe'


@@ -28,13 +28,13 @@ log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=s
 test_data_fp32 = get_tests(cmd_params={'i': [os.path.join('227x227', 'dog.bmp')],
 'm': [os.path.join('squeezenet1.1', 'FP32', 'squeezenet1.1.xml')],
 'd': ['CPU'],
-'sample_type': ['C++', 'C', 'C2.0']},
+'sample_type': ['C++', 'C']},
 use_device=['d'])
 test_data_fp32_unicode = get_tests(cmd_params={'i': [os.path.join('227x227', 'dog.bmp')],
 'm': [os.path.join('squeezenet1.1', 'FP32', 'squeezenet1.1.xml')],
 'd': ['CPU'],
-'sample_type': ['C++', 'C', 'C2.0']},
+'sample_type': ['C++', 'C']},
 use_device=['d'])


@@ -23,7 +23,7 @@ log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=s
 test_data_fp32 = get_tests(cmd_params={'i': [os.path.join('224x224', 'dog6.yuv')],
 'm': [os.path.join('squeezenet1.1', 'FP32', 'squeezenet1.1.xml')],
 'size': ['224x224'],
-'sample_type': ['C++', 'C', 'C2.0'],
+'sample_type': ['C++', 'C'],
 'd': ['CPU']},
 use_device=['d']
 )