[C Sample][Readme] correct C sample read me (#14052)

* [C Sample][Readme] correct C sample read me

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

* [C API][Sample] modify readme about api list

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>

Signed-off-by: xuejun <Xuejun.Zhai@intel.com>
Xuejun Zhai 2022-11-21 12:03:53 +08:00 committed by GitHub
parent 9680617333
commit 05ed3e218b
2 changed files with 18 additions and 35 deletions

@@ -2,15 +2,16 @@
This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and input auto-resize feature.
Hello Classification C sample application demonstrates how to use the following Inference Engine C API in applications:
Hello Classification C sample application demonstrates how to use the OpenVINO C API in applications:
| Feature | API | Description |
|:--- |:--- |:---
| Basic Infer Flow | [ie_core_create], [ie_core_read_network], [ie_core_load_network], [ie_exec_network_create_infer_request], [ie_infer_request_set_blob], [ie_infer_request_get_blob] | Common API to do inference: configure input and output blobs, loading model, create infer request
| Synchronous Infer | [ie_infer_request_infer] | Do synchronous inference
| Network Operations | [ie_network_get_input_name], [ie_network_get_inputs_number], [ie_network_get_outputs_number], [ie_network_set_input_precision], [ie_network_get_output_name], [ie_network_get_output_precision] | Managing of network
| Blob Operations| [ie_blob_make_memory_from_preallocated], [ie_blob_get_dims], [ie_blob_get_cbuffer] | Work with memory container for storing inputs, outputs of the network, weights and biases of the layers
| Input auto-resize | [ie_network_set_input_resize_algorithm], [ie_network_set_input_layout] | Set image of the original size as input for a network with other input size. Resize and layout conversions will be performed automatically by the corresponding plugin just before inference
| Feature | API | Description |
| :--- | :--- | :--- |
| OpenVINO Runtime Version | `ov_get_openvino_version` | Get the OpenVINO API version |
| Basic Infer Flow | `ov_core_create`, `ov_core_read_model`, `ov_core_compile_model`, `ov_compiled_model_create_infer_request`, `ov_infer_request_set_input_tensor_by_index`, `ov_infer_request_get_output_tensor_by_index` | Common API to do inference: read and compile a model, create an infer request, configure input and output tensors |
| Synchronous Infer | `ov_infer_request_infer` | Do synchronous inference |
| Model Operations | `ov_model_const_input`, `ov_model_const_output` | Get inputs and outputs of a model |
| Tensor Operations | `ov_tensor_create_from_host_ptr` | Create a tensor from preallocated host memory |
| Preprocessing | `ov_preprocess_prepostprocessor_create`, `ov_preprocess_prepostprocessor_get_input_info_by_index`, `ov_preprocess_input_info_get_tensor_info`, `ov_preprocess_input_tensor_info_set_from`, `ov_preprocess_input_tensor_info_set_layout`, `ov_preprocess_input_info_get_preprocess_steps`, `ov_preprocess_preprocess_steps_resize`, `ov_preprocess_input_model_info_set_layout`, `ov_preprocess_output_set_element_type`, `ov_preprocess_prepostprocessor_build` | Set image of the original size as input for a model with other input size. Resize and layout conversions are performed automatically by the corresponding plugin just before inference. |
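For orientation, the calls above compose into the short flow sketched below. This is a minimal sketch rather than the sample itself: `model.xml`, the `CPU` device, the 224x224 NHWC u8 shape, and the stand-in `image_data` buffer are placeholder assumptions, and the `ov_status_e` return codes are not checked for brevity.

```c
#include <stdlib.h>

#include "openvino/c/openvino.h"

int main(void) {
    ov_core_t* core = NULL;
    ov_model_t* model = NULL;
    ov_compiled_model_t* compiled_model = NULL;
    ov_infer_request_t* infer_request = NULL;
    ov_tensor_t* input_tensor = NULL;
    ov_tensor_t* output_tensor = NULL;

    /* Read and compile the model; the path and device name are placeholders. */
    ov_core_create(&core);
    ov_core_read_model(core, "model.xml", NULL, &model);
    ov_core_compile_model(core, model, "CPU", 0, &compiled_model);
    ov_compiled_model_create_infer_request(compiled_model, &infer_request);

    /* Wrap preallocated host data in a tensor; the u8 NHWC 224x224 shape is illustrative only. */
    static unsigned char image_data[1 * 224 * 224 * 3];
    int64_t dims[4] = {1, 224, 224, 3};
    ov_shape_t input_shape;
    ov_shape_create(4, dims, &input_shape);
    ov_tensor_create_from_host_ptr(U8, input_shape, image_data, &input_tensor);
    ov_infer_request_set_input_tensor_by_index(infer_request, 0, input_tensor);

    /* Run synchronous inference and fetch the first output tensor. */
    ov_infer_request_infer(infer_request);
    ov_infer_request_get_output_tensor_by_index(infer_request, 0, &output_tensor);

    /* Every ov_*_create pairs with an ov_*_free; release in reverse order of creation. */
    ov_tensor_free(output_tensor);
    ov_tensor_free(input_tensor);
    ov_shape_free(&input_shape);
    ov_infer_request_free(infer_request);
    ov_compiled_model_free(compiled_model);
    ov_model_free(model);
    ov_core_free(core);
    return EXIT_SUCCESS;
}
```

The preprocessing calls in the last row follow the same pattern and are sketched in the NV12 sample section below.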
| Options | Values |
|:--- |:---
@@ -94,22 +95,4 @@ This sample is an API example, for any performance measurements please use the d
- [Using OpenVINO™ Samples](../../../docs/OV_Runtime_UG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[ie_core_create]:https://docs.openvino.ai/latest/ie_c_api/group__Core.html#gaab73c7ee3704c742eaac457636259541
[ie_core_read_network]:https://docs.openvino.ai/latest/ie_c_api/group__Core.html#gaa40803295255b3926a3d1b8924f26c29
[ie_network_get_input_name]:https://docs.openvino.ai/latest/ie_c_api/group__Network.html#ga36b0c28dfab6db2bfcc2941fd57fbf6d
[ie_network_set_input_precision]:https://docs.openvino.ai/latest/ie_c_api/group__Network.html#gadd99b7cc98b3c33daa2095b8a29f66d7
[ie_network_get_output_name]:https://docs.openvino.ai/latest/ie_c_api/group__Network.html#ga1feabc49576db24d9821a150b2b50a6c
[ie_network_get_output_precision]:https://docs.openvino.ai/latest/ie_c_api/group__Network.html#gaeaa7f1fb8f56956fc492cd9207235984
[ie_core_load_network]:https://docs.openvino.ai/latest/ie_c_api/group__Core.html#ga318d4b0214b8a3fd33f9e44170befcc5
[ie_exec_network_create_infer_request]:https://docs.openvino.ai/latest/ie_c_api/group__ExecutableNetwork.html#gae72247391c1429a18c367594a4b7db9f
[ie_blob_make_memory_from_preallocated]:https://docs.openvino.ai/latest/ie_c_api/group__Blob.html#ga7a874d46375e10fa1a7e8e3d7e1c9c9c
[ie_infer_request_set_blob]:https://docs.openvino.ai/latest/ie_c_api/group__InferRequest.html#ga891c2d475501bba761148a0c3faca196
[ie_infer_request_infer]:https://docs.openvino.ai/latest/ie_c_api/group__InferRequest.html#gac6c6fcb67ccb4d0ec9ad1c63a5bee7b6
[ie_infer_request_get_blob]:https://docs.openvino.ai/latest/ie_c_api/group__InferRequest.html#ga6cd04044ea95987260037bfe17ce1a2d
[ie_blob_get_dims]:https://docs.openvino.ai/latest/ie_c_api/group__Blob.html#ga25d93efd7ec1052a8896ac61cc14c30a
[ie_blob_get_cbuffer]:https://docs.openvino.ai/latest/ie_c_api/group__Blob.html#gaf6b4a110b4c5723dcbde135328b3620a
[ie_network_set_input_resize_algorithm]:https://docs.openvino.ai/latest/ie_c_api/group__Network.html#ga46ab3b3a06359f2b77f58bdd6e8a5492
[ie_network_set_input_layout]:https://docs.openvino.ai/latest/ie_c_api/group__Network.html#ga27ea9f92290e0b2cdedbe8a85feb4c01
[ie_network_get_inputs_number]:https://docs.openvino.ai/latest/ie_c_api/group__Network.html#ga6a3349bca66c4ba8b41a434061fccf52
[ie_network_get_outputs_number]:https://docs.openvino.ai/latest/ie_c_api/group__Network.html#ga869b8c309797f1e09f73ddffd1b57509
- [C API Reference](https://docs.openvino.ai/latest/api/api_reference.html)

@@ -2,12 +2,14 @@
This sample demonstrates how to execute an inference of image classification networks like AlexNet with images in NV12 color format using Synchronous Inference Request API.
Hello NV12 Input Classification C Sample demonstrates how to use the NV12 automatic input pre-processing API of the Inference Engine in your applications:
Hello NV12 Input Classification C Sample demonstrates how to use the NV12 automatic input pre-processing API in your applications:
| Feature | API | Description |
| :--- | :--- | :--- |
| Node Operations | `ov_port_get_any_name` | Get a layer name |
| Infer Request Operations | `ov_infer_request_set_tensor`, `ov_infer_request_get_output_tensor_by_index` | Operate with tensors |
| Preprocessing | `ov_preprocess_input_tensor_info_set_color_format`, `ov_preprocess_preprocess_steps_convert_element_type`, `ov_preprocess_preprocess_steps_convert_color` | Change the color format of the input data |
| Feature | API | Description |
|:--- |:--- |:---
| Blob Operations| [ie_blob_make_memory_nv12] | Create a NV12 blob
| Input in N12 color format |[ie_network_set_color_format]| Change the color format of the input data
Basic Inference Engine API is covered by [Hello Classification C sample](../hello_classification/README.md).
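The NV12 preprocessing calls listed above are applied to the model before it is compiled. A hedged sketch of that step is shown below; the single-plane NV12 source, the f32/BGR target format, and the use of `ov_preprocess_input_tensor_info_set_element_type` (part of the same preprocess API, though not listed in the table) are assumptions of this sketch, and status checks are again omitted.

```c
#include "openvino/c/openvino.h"

/* Declare the input as a u8 NV12 image and let the plugin convert it to the
   f32 BGR data the model expects, returning a new model with the steps baked in. */
ov_model_t* add_nv12_preprocessing(ov_model_t* model) {
    ov_preprocess_prepostprocessor_t* ppp = NULL;
    ov_preprocess_input_info_t* input_info = NULL;
    ov_preprocess_input_tensor_info_t* tensor_info = NULL;
    ov_preprocess_preprocess_steps_t* steps = NULL;
    ov_model_t* preprocessed_model = NULL;

    ov_preprocess_prepostprocessor_create(model, &ppp);
    ov_preprocess_prepostprocessor_get_input_info_by_index(ppp, 0, &input_info);

    /* Describe the tensor the application will actually provide. */
    ov_preprocess_input_info_get_tensor_info(input_info, &tensor_info);
    ov_preprocess_input_tensor_info_set_element_type(tensor_info, U8);
    ov_preprocess_input_tensor_info_set_color_format(tensor_info, NV12_SINGLE_PLANE);

    /* Conversion steps the plugin performs just before inference. */
    ov_preprocess_input_info_get_preprocess_steps(input_info, &steps);
    ov_preprocess_preprocess_steps_convert_element_type(steps, F32);
    ov_preprocess_preprocess_steps_convert_color(steps, BGR);

    /* Bake the steps into a new model and release the preprocess handles. */
    ov_preprocess_prepostprocessor_build(ppp, &preprocessed_model);
    ov_preprocess_preprocess_steps_free(steps);
    ov_preprocess_input_tensor_info_free(tensor_info);
    ov_preprocess_input_info_free(input_info);
    ov_preprocess_prepostprocessor_free(ppp);
    return preprocessed_model;
}
```

The model returned here is then compiled and inferred exactly as in the Hello Classification flow.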
| Options | Values |
@@ -109,6 +111,4 @@ This sample is an API example, for any performance measurements please use the d
- [Using OpenVINO™ Samples](../../../docs/OV_Runtime_UG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[ie_network_set_color_format]:https://docs.openvino.ai/latest/ie_c_api/group__Network.html#ga85f3251f1f7b08507c297e73baa58969
[ie_blob_make_memory_nv12]:https://docs.openvino.ai/latest/ie_c_api/group__Blob.html#ga0a2d97b0d40a53c01ead771f82ae7f4a
- [C API Reference](https://docs.openvino.ai/latest/api/api_reference.html)