This sample demonstrates how to execute a synchronous inference using the [nGraph function feature](../../../docs/nGraph_DG/build_function.md) to create a network. The network uses weights from the LeNet classification network, which is known to work well on digit classification tasks.
You do not need an XML file to create the network: the `ngraph::Function` API allows you to create a network on the fly from the source code.
The nGraph Function Creation C++ Sample demonstrates the following Inference Engine API:
| Feature | API | Description |
| :--- | :--- | :--- |
| Inference Engine Version | `InferenceEngine::GetInferenceEngineVersion` | Get the Inference Engine API version |
| Available Devices | `InferenceEngine::Core::GetAvailableDevices` | Get the list of devices available for inference |
| Network Operations | `InferenceEngine::CNNNetwork::setBatchSize`, `InferenceEngine::CNNNetwork::getBatchSize` | Manage the network: get and set its batch size. The sample sets the batch size to the number of input images |
| nGraph Functions | `ngraph::Function`, `ngraph::op`, `ngraph::Node`, `ngraph::Shape::Shape`, `ngraph::Strides::Strides`, `ngraph::CoordinateDiff::CoordinateDiff`, `ngraph::Node::set_friendly_name`, `ngraph::shape_size`, `ngraph::ParameterVector::vector` | Construct an nGraph function on the fly |
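
For illustration, below is a minimal sketch of building an `ngraph::Function` from raw weights. It builds a single convolution rather than the sample's full LeNet topology; the shapes, names, and the `createTinyFunction` helper are illustrative assumptions, not the sample's actual code.

```cpp
#include <memory>
#include <ngraph/ngraph.hpp>

// Illustrative sketch: a one-convolution ngraph::Function (not the full LeNet).
// `weights` must point to at least ngraph::shape_size(kernelShape) floats.
std::shared_ptr<ngraph::Function> createTinyFunction(const float* weights) {
    // Network input: batch 1, 1 channel, 28x28 image (MNIST-like)
    auto input = std::make_shared<ngraph::op::Parameter>(
        ngraph::element::f32, ngraph::Shape{1, 1, 28, 28});
    input->set_friendly_name("input");

    // Convolution weights: 6 output channels, 1 input channel, 5x5 kernel
    const ngraph::Shape kernelShape{6, 1, 5, 5};
    auto kernel = std::make_shared<ngraph::op::Constant>(
        ngraph::element::f32, kernelShape, weights);

    auto conv = std::make_shared<ngraph::op::v1::Convolution>(
        input, kernel,
        ngraph::Strides{1, 1},         // strides
        ngraph::CoordinateDiff{0, 0},  // pads_begin
        ngraph::CoordinateDiff{0, 0},  // pads_end
        ngraph::Strides{1, 1});        // dilations

    auto result = std::make_shared<ngraph::op::Result>(conv);
    return std::make_shared<ngraph::Function>(ngraph::ResultVector{result},
                                              ngraph::ParameterVector{input},
                                              "tiny-lenet");
}
```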
At startup, the sample application reads command-line parameters, prepares input data, creates a network using the [nGraph function feature](../../../docs/nGraph_DG/build_function.md) and the passed weights file, loads the network and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream. You can place a `.labels` file with class labels next to the model to get human-readable output.
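
Putting the pieces together, a hedged sketch of this flow might look like the following. Here `createFunction` is a hypothetical helper that builds the LeNet `ngraph::Function` from the weights file (in the spirit of the sketch above), and error handling and image loading are omitted.

```cpp
#include <inference_engine.hpp>
#include <ngraph/ngraph.hpp>
#include <iostream>
#include <memory>
#include <string>

// Hypothetical helper (defined elsewhere): builds the LeNet ngraph::Function
// from a .bin weights file.
std::shared_ptr<ngraph::Function> createFunction(const std::string& weightsPath);

int main() {
    using namespace InferenceEngine;

    // Report the Inference Engine API version
    const Version* version = GetInferenceEngineVersion();
    std::cout << version->description << " " << version->buildNumber << std::endl;

    Core ie;
    for (const std::string& device : ie.GetAvailableDevices())
        std::cout << "Available device: " << device << std::endl;

    // Create the network on the fly, without an XML file
    CNNNetwork network(createFunction("lenet.bin"));
    network.setBatchSize(1);  // e.g. a single input image

    ExecutableNetwork executable = ie.LoadNetwork(network, "CPU");
    InferRequest request = executable.CreateInferRequest();

    // ... fill the input blob with image data here ...
    request.Infer();  // synchronous inference
    // ... read and post-process the output blobs ...
    return 0;
}
```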
For an explicit description of each sample step, see the [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of the "Integrate the Inference Engine with Your Application" guide.
To build the sample, use the instructions available in the [Build the Sample Applications](../../../docs/IE_DG/Samples_Overview.md) section of the Inference Engine Samples guide.
- you can use the LeNet model weights from the sample folder: `lenet.bin`, a file with FP32 weights
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - The `lenet.bin` file with FP32 weights was generated by the [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
>
> The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
-m "<path>" Required. Path to a .bin file with weights for the trained model.
-i "<path>" Required. Path to a folder with images or path to image files. Support ubyte files only.
-d "<device>" Optional. Specify the target device to infer on (the list of available devices is shown below). Default value is CPU. Use "-d HETERO:<comma_separated_devices_list>" format to specify HETERO plugin. Sample will look for a suitable plugin for device specified.
*Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.*
*Therefore, ONNX RT Execution Provider for nGraph will be deprecated starting June 1, 2020 and will be completely removed on December 1, 2020. Users are recommended to migrate to the ONNX RT Execution Provider for OpenVINO™ toolkit as the unified solution for all AI inferencing on Intel® hardware.*