Added onnx support for C samples (#2747)

* ngraph python sample

This sample demonstrates how to execute inference using an ngraph::Function to create a network
- added sample
- added readme
- added lenet weights

* Added onnx support for C samples

* Revert "ngraph python sample"

This reverts commit 8033292dc3.

* Added onnx support for C samples

Fixed a code style mistake

* Removed optional code

Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
Mikhail Ryzhov authored on 2020-10-23 21:47:01 +03:00, committed by GitHub
parent d846969a1c
commit dea5f43c9a
6 changed files with 11 additions and 10 deletions
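
The substantive code change is in the read call: the object detection sample now passes `NULL` as the weights argument to `ie_core_read_network` (as the other two C samples already did), so a single call covers both an IR model (.xml + .bin) and an ONNX model (.onnx). Below is a minimal standalone sketch of that pattern; it is not code from this commit, and the header path, model path, and cleanup flow are assumptions about a 2021-era Inference Engine C API build.

```c
/* Minimal sketch (not part of this commit): read a model with the
 * Inference Engine C API. Passing NULL as the weights path lets the
 * same call load either an IR (.xml with the .bin next to it) or an
 * ONNX file. Assumes <c_api/ie_c_api.h> is available. */
#include <c_api/ie_c_api.h>
#include <stdio.h>

int main(int argc, char **argv) {
    const char *input_model = argc > 1 ? argv[1] : "model.onnx"; /* hypothetical path */
    ie_core_t *core = NULL;
    ie_network_t *network = NULL;
    IEStatusCode status;

    status = ie_core_create("", &core); /* empty string: default plugin configuration */
    if (status != OK)
        goto err;

    /* Works for "model.xml" and "model.onnx" alike when the weights path is NULL */
    status = ie_core_read_network(core, input_model, NULL, &network);
    if (status != OK)
        goto err;

    printf("Network read from %s\n", input_model);

err:
    if (network)
        ie_network_free(&network);
    if (core)
        ie_core_free(&core);
    return status == OK ? 0 : 1;
}
```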

@@ -17,6 +17,8 @@ To properly demonstrate this API, it is required to run several networks in pipe
 To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
 > **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 You can do inference of an image using a trained AlexNet network on a GPU using the following command:

@@ -92,7 +92,7 @@ int main(int argc, char **argv) {
         goto err;
     // -----------------------------------------------------------------------------------------------------
-    // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+    // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
     status = ie_core_read_network(core, input_model, NULL, &network);
     if (status != OK)
         goto err;

@@ -40,6 +40,8 @@ or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
 > **NOTE**: Before running the sample with a trained model, make sure the model is converted to the
 > Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 You can perform inference on an NV12 image using a trained AlexNet network on CPU with the following command:
 ```sh

@@ -152,7 +152,7 @@ int main(int argc, char **argv) {
         goto err;
     // -----------------------------------------------------------------------------------------------------
-    // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+    // 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
     status = ie_core_read_network(core, input_model, NULL, &network);
     if (status != OK)
         goto err;

@@ -40,6 +40,8 @@ Running the application with the empty list of options yields the usage message
 To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
 > **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+>
+> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 For example, to do inference on a CPU with the OpenVINO&trade; toolkit person detection SSD models, run one of the following commands:

@@ -344,15 +344,10 @@ int main(int argc, char **argv) {
     }
     // -----------------------------------------------------------------------------------------------------
-    // --------------------------- 4. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
-    input_weight = (char *)calloc(strlen(input_model) + 1, sizeof(char));
-    memcpy(input_weight, input_model, strlen(input_model) - 4);
-    memcpy(input_weight + strlen(input_model) - 4, ".bin", strlen(".bin") + 1);
-    printf("%sLoading network files:\n", info);
+    // 4. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
+    printf("%sLoading network:\n", info);
     printf("\t%s\n", input_model);
-    printf("\t%s\n", input_weight);
-    status = ie_core_read_network(core, input_model, input_weight, &network);
+    status = ie_core_read_network(core, input_model, NULL, &network);
     if (status != OK)
         goto err;
     // -----------------------------------------------------------------------------------------------------
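
A note on why the weights-path derivation was removed rather than kept as optional code: the dropped lines assume a four-character `.xml` extension, so they only produce a sensible `.bin` path for IR models. A small self-contained sketch of that assumption (hypothetical paths, not code from the repository):

```c
/* Sketch of the removed extension swap (hypothetical inputs, not repository code). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *derive_bin_path(const char *input_model) {
    /* Copy everything except the last 4 characters, then append ".bin". */
    char *input_weight = (char *)calloc(strlen(input_model) + 1, sizeof(char));
    memcpy(input_weight, input_model, strlen(input_model) - 4);
    memcpy(input_weight + strlen(input_model) - 4, ".bin", strlen(".bin") + 1);
    return input_weight;
}

int main(void) {
    char *ir_weights = derive_bin_path("model.xml");    /* "model.bin"  -- correct for IR */
    char *onnx_weights = derive_bin_path("model.onnx"); /* "model..bin" -- wrong for ONNX */
    printf("%s\n%s\n", ir_weights, onnx_weights);
    free(ir_weights);
    free(onnx_weights);
    return 0;
}
```

With `NULL` as the weights argument, the engine is expected to find the `.bin` next to an IR's `.xml` on its own, and an ONNX model carries its weights in the same file, which is what lets the one `ie_core_read_network` call serve both formats.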