[C API DOC] Reconstruct the guide about integration with OpenVINO Runtime for C

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>
xuejun 2022-10-12 16:59:05 +08:00
parent eb0493ea43
commit 9552054e7e
3 changed files with 45 additions and 61 deletions


@@ -18,7 +18,38 @@ Following these steps, you can implement a typical OpenVINO™ Runtime inference
![ie_api_use_cpp]
## Step 1. Create OpenVINO™ Runtime Core
## Step 1. Create CMake Script
This step may differ for different projects. In this example, C and C++ applications are used, together with CMake for project configuration.
Create a structure for the project:

```
project/
├── CMakeLists.txt  - CMake file to build
├── ...             - Additional folders like includes/
└── src/            - source folder
    ├── main.c      - [Optional] For C sample
    └── main.cpp    - [Optional] For C++ sample
build/              - build directory
...
```
@sphinxtabset
@sphinxtab{C++}
@snippet snippets/CMakeLists.txt cmake:integration_example_cpp
@endsphinxtab
@sphinxtab{C}
@snippet snippets/CMakeLists.txt cmake:integration_example_c
@endsphinxtab
@endsphinxtabset
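The CMake snippets referenced above are not shown in this diff. For orientation only, a minimal `CMakeLists.txt` for the C sample might look like the sketch below; the project and target names are assumptions, while `find_package(OpenVINO)` and the `openvino::runtime::c` imported target are the standard way to consume an installed OpenVINO Runtime from CMake.

```cmake
cmake_minimum_required(VERSION 3.10)
project(ov_c_integration C)

# Locate an installed OpenVINO Runtime (its config files must be discoverable,
# e.g. via CMAKE_PREFIX_PATH or by sourcing the setupvars script)
find_package(OpenVINO REQUIRED)

add_executable(ov_c_app src/main.c)

# Link the application against the OpenVINO Runtime C API
target_link_libraries(ov_c_app PRIVATE openvino::runtime::c)
```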
## Step 2. Create OpenVINO™ Runtime Core
Include the following files to work with OpenVINO™ Runtime:
@@ -68,7 +99,7 @@ Use the following code to create OpenVINO™ Core to manage available devices an
@endsphinxtabset
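The C tab's content is elided in this diff. As a minimal sketch of this step with the C API (error-code checking omitted for brevity):

```c
#include <openvino/c/openvino.h>

// Create the OpenVINO Runtime Core, which manages the available devices;
// it must be released later with ov_core_free()
ov_core_t* core = NULL;
ov_core_create(&core);
```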
## Step 2. Compile the Model
## Step 3. Compile the Model
`ov::CompiledModel` class represents a device-specific compiled model. `ov::CompiledModel` allows you to get information about inputs or output ports by a tensor name or index. This approach is aligned with the majority of frameworks.
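In the C API, the corresponding objects are `ov_model_t` and `ov_compiled_model_t`. Mirroring the snippet modified later in this commit, the step looks roughly like this (the model file name and the `"CPU"` device are placeholders):

```c
// Read the model from disk; passing NULL for the weights path lets the
// matching .bin file be found automatically. Release with ov_model_free().
ov_model_t* model = NULL;
ov_core_read_model(core, "model.xml", NULL, &model);

// Compile the model for a specific device (no extra properties passed here);
// release it later with ov_compiled_model_free()
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model(core, model, "CPU", 0, &compiled_model);
```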
@@ -181,7 +212,7 @@ The code above creates a compiled model associated with a single hardware device
It is possible to create as many compiled models as needed and use them simultaneously (up to the limit of the available hardware resources).
To learn how to change the device configuration, read the [Query device properties](./supported_plugins/config_properties.md) article.
## Step 3. Create an Inference Request
## Step 4. Create an Inference Request
`ov::InferRequest` class provides methods for model inference in OpenVINO™ Runtime. Create an infer request using the following code (see [InferRequest detailed documentation](./ov_infer_request.md) for more details):
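The C counterpart, matching the snippet modified later in this commit:

```c
// Create an inference request from the compiled model;
// release it later with ov_infer_request_free()
ov_infer_request_t* infer_request = NULL;
ov_compiled_model_create_infer_request(compiled_model, &infer_request);
```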
@@ -207,7 +238,7 @@ To learn how to change the device configuration, read the [Query device properti
@endsphinxtabset
## Step 4. Set Inputs
## Step 5. Set Inputs
You can use external memory to create `ov::Tensor` and use the `ov::InferRequest::set_input_tensor` method to put this tensor on the device:
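With the C API the same idea looks roughly like the sketch below, drawn from the snippet modified later in this commit (`img_data` stands for application-owned image memory, and the `_by_index` setter is assumed to be available in this release):

```c
// Query the model's first input port and its shape
ov_output_port_t* input_port = NULL;
ov_model_input(model, &input_port);

ov_shape_t input_shape;
ov_port_get_shape(input_port, &input_shape);

// Wrap externally allocated memory (e.g. a decoded image) into a tensor
void* img_data = NULL;             // filled by the application
ov_element_type_e input_type = U8;
ov_tensor_t* tensor = NULL;
ov_tensor_create_from_host_ptr(input_type, input_shape, img_data, &tensor);

// Attach the tensor as the request's first input
ov_infer_request_set_input_tensor_by_index(infer_request, 0, tensor);
```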
@@ -233,7 +264,7 @@ You can use external memory to create `ov::Tensor` and use the `ov::InferRequest
@endsphinxtabset
## Step 5. Start Inference
## Step 6. Start Inference
OpenVINO™ Runtime supports inference in either synchronous or asynchronous mode. Using the Async API can improve an application's overall frame rate: instead of waiting for inference to complete, the app can keep working on the host while the accelerator is busy. You can use `ov::InferRequest::start_async` to start model inference in the asynchronous mode and call `ov::InferRequest::wait` to wait for the inference results:
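In the C API this maps to a pair of calls; synchronous inference would use `ov_infer_request_infer` instead:

```c
// Start inference asynchronously, then block until the results are ready
ov_infer_request_start_async(infer_request);
ov_infer_request_wait(infer_request);
```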
@@ -261,7 +292,7 @@ OpenVINO™ Runtime supports inference in either synchronous or asynchronous mod
This section demonstrates a simple pipeline. To get more information about other ways to perform inference, read the dedicated ["Run inference" section](./ov_infer_request.md).
## Step 6. Process the Inference Results
## Step 7. Process the Inference Results
Go over the output tensors and process the inference results.
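A minimal C sketch for this step (how the buffer is interpreted depends on the model's output layout):

```c
// Retrieve the first output tensor and access its raw data buffer
ov_tensor_t* output_tensor = NULL;
ov_infer_request_get_output_tensor_by_index(infer_request, 0, &output_tensor);

void* out_data = NULL;
ov_tensor_data(output_tensor, &out_data);
const float* results = (const float*)out_data;
// ... postprocess `results` according to the model's output semantics
```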
@@ -287,7 +318,7 @@ Go over the output tensors and process the inference results.
@endsphinxtabset
## Step 7. Release the allocated objects (only for C)
## Step 8. Release the allocated objects (only for C)
To avoid memory leaks, applications developed with the C API need to release the allocated objects in order.
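Based on the snippet modified later in this commit, the release sequence is roughly the reverse of the creation order:

```c
ov_output_port_free(input_port);
ov_tensor_free(output_tensor);
ov_tensor_free(tensor);
ov_infer_request_free(infer_request);
ov_compiled_model_free(compiled_model);
ov_model_free(model);
ov_core_free(core);
```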
@@ -301,33 +332,6 @@ To avoid memory leak, applications developed with C API need to release the allo
@endsphinxtabset
## Step 8. Link and Build Your Application with OpenVINO™ Runtime (example)
This step may differ for different projects. In this example, a C++ & C application is used, together with CMake for project configuration.
For details on additional CMake build options, refer to the [CMake page](https://cmake.org/cmake/help/latest/manual/cmake.1.html#manual:cmake(1)).
@sphinxtabset
@sphinxtab{C++}
@snippet docs/snippets/src/main.cpp part7
@snippet snippets/CMakeLists.txt cmake:integration_example_cpp
@endsphinxtab
@sphinxtab{C}
@snippet docs/snippets/src/main.c part7
@snippet snippets/CMakeLists.txt cmake:integration_example_c
@endsphinxtab
@endsphinxtabset
To build your project using CMake with the default build tools currently available on your machine, execute the following commands:
```sh


@@ -35,7 +35,7 @@ ov_core_compile_model_from_file(core, "model.pdmodel", device_name, 0, &compiled
//! [part2_4]
// Construct a model
ov_model_t* model = NULL; // need to free by ov_model_free(model)
ov_model_t* model = NULL;
ov_core_read_model(core, "model.xml", NULL, &model);
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model(core, model, device_name, 0, &compiled_model);
@@ -47,8 +47,12 @@ ov_infer_request_t* infer_request = NULL;
ov_compiled_model_create_infer_request(compiled_model, &infer_request);
//! [part3]
ov_shape_t input_shape = {0, NULL};
ov_output_port_t* input_port = NULL;
ov_model_input(model, &input_port);
ov_shape_t input_shape;
ov_port_get_shape(input_port, &input_shape);
void* img_data = NULL;
// read img ...
ov_element_type_e input_type = U8;
//! [part4]
ov_tensor_t* tensor = NULL;
@@ -69,6 +73,7 @@ ov_infer_request_get_output_tensor_by_index(infer_request, 0, &output_tensor);
//! [part6]
//! [part8]
ov_output_port_free(input_port);
ov_tensor_free(output_tensor);
ov_tensor_free(tensor);
ov_infer_request_free(infer_request);
@@ -79,16 +84,3 @@ ov_core_free(core);
return 0;
}
/*
//! [part7]
// Create a structure for the project:
project/
CMakeLists.txt - CMake file to build
... - Additional folders like includes/
src/ - source folder
main.c
build/ - build directory
...
//! [part7]
*/


@@ -67,16 +67,4 @@ const float *output_buffer = output.data<const float>();
//! [part6]
return 0;
}
/*
//! [part7]
// Create a structure for the project:
project/
CMakeLists.txt - CMake file to build
... - Additional folders like includes/
src/ - source folder
main.cpp
build/ - build directory
...
//! [part7]
*/