# nGraph Function Creation Python* Sample
This sample demonstrates how to run inference using the nGraph function feature to create a network that uses weights from the LeNet classification network, which is known to work well on digit classification tasks. You do not need an XML file: the model is created from the source code on the fly.
In addition to regular grayscale images with a digit, the sample also supports single-channel ubyte images as an input.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
|---|---|---|
| Network Operations | IENetwork, IENetwork.batch_size | Manage the network |
| nGraph Functions | ngraph.impl.Function, ngraph.parameter, ngraph.constant, ngraph.convolution, ngraph.add, ngraph.max_pool, ngraph.reshape, ngraph.matmul, ngraph.relu, ngraph.softmax, ngraph.result, ngraph.impl.Function.to_capsule | Description of a network using nGraph Python API |
Basic Inference Engine API is covered by Hello Classification Python* Sample.
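As a rough illustration of how the nGraph calls listed above fit together, the sketch below describes a tiny network with the nGraph Python API and wraps it into an `IENetwork`. The layer sizes and zero-filled weights are placeholders for illustration only; the sample itself builds the full LeNet topology from the provided `.bin` file.

```python
import numpy as np
import ngraph as ng
from ngraph.impl import Function
from openvino.inference_engine import IENetwork

# Network input: 1 grayscale image of 28x28 pixels (placeholder shape)
param = ng.parameter([1, 1, 28, 28], np.float32, name='data')

# A toy fully connected layer; the sample reads real LeNet weights
# from the .bin file instead of this zero-filled placeholder.
weights = ng.constant(np.zeros((784, 10), dtype=np.float32))
shape = ng.constant(np.array([1, 784], dtype=np.int64))
flat = ng.reshape(param, shape, special_zero=False)
logits = ng.matmul(flat, weights, False, False)
probs = ng.softmax(ng.relu(logits), 1)

# Wrap the graph into an nGraph Function and convert it to an IENetwork
func = Function([ng.result(probs)], [param], 'toy_net')
net = IENetwork(Function.to_capsule(func))
```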
| Options | Values |
|---|---|
| Validated Models | LeNet |
| Model Format | Network weights file (*.bin) |
| Validated images | The sample uses OpenCV* to read input grayscale image (*.bmp, *.png) or single-channel ubyte image |
| Supported devices | All |
| Other language realization | C++ |
## How It Works
At startup, the sample application reads command-line parameters, prepares input data, creates a network using the nGraph function feature and the provided weights file, loads the network and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream. A condensed sketch of this flow follows below.
You can see the explicit description of each sample step in the Integration Steps section of the "Integrate the Inference Engine with Your Application" guide.
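Condensed into code, the flow might look like the following sketch. It assumes the `net` object from the earlier snippet, a CPU target, and a hypothetical input file `3.png`:

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
exec_net = ie.load_network(network=net, device_name='CPU')

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# Read a grayscale image and shape it into NCHW form for the network
image = cv2.imread('3.png', cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (28, 28))
image = image.reshape(1, 1, 28, 28).astype(np.float32)

# Synchronous inference: infer() blocks until the result is ready
result = exec_net.infer(inputs={input_blob: image})
probs = result[output_blob]
```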
## Running
Run the application with the -h option to see the usage message:
```sh
python <path_to_sample>/ngraph_function_creation_sample.py -h
```
Usage message:
```
usage: ngraph_function_creation_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
                                          [-d DEVICE] [--labels LABELS]
                                          [-nt NUMBER_TOP]

Options:
  -h, --help            Show this help message and exit.
  -m MODEL, --model MODEL
                        Required. Path to a file with network weights.
  -i INPUT [INPUT ...], --input INPUT [INPUT ...]
                        Required. Path to an image file.
  -d DEVICE, --device DEVICE
                        Optional. Specify the target device to infer on; CPU,
                        GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
                        will look for a suitable plugin for device specified.
                        Default value is CPU.
  --labels LABELS       Optional. Path to a labels mapping file.
  -nt NUMBER_TOP, --number_top NUMBER_TOP
                        Optional. Number of top results.
```
To run the sample, you need to specify the model weights and an image. You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTE**:
>
> - This sample supports models with FP32 weights only.
> - The `lenet.bin` weights file was generated by the Model Optimizer tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified. The original model is available in the Caffe* repository on GitHub*.
> - Images are automatically inverted in color to white over black for better predictions.
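For instance, the inversion mentioned in the note could be implemented along these lines; the mean-intensity threshold below is an assumption for illustration, not necessarily the sample's exact heuristic:

```python
import cv2
import numpy as np

def read_digit_image(path, size=(28, 28)):
    """Load a grayscale image, invert it to white-over-black if needed,
    and resize it to the network input size."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Assumption: a mostly bright image holds a dark digit on a light
    # background, so invert it to the white-over-black style LeNet expects.
    if np.mean(image) > 127:
        image = 255 - image
    if image.shape != size:
        image = cv2.resize(image, size)
    return image
```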
For example, you can run inference on the image `3.png` using the pre-trained model on a GPU:
```sh
python <path_to_sample>/ngraph_function_creation_sample.py -m <path_to_sample>/lenet.bin -i <path_to_image>/3.png -d GPU
```
## Sample Output
The sample application logs each step in a standard output stream and outputs the top-10 inference results.
```
[ INFO ] Creating Inference Engine
[ INFO ] Loading the network using ngraph function with weights from c:\openvino\samples\python\ngraph_function_creation_sample\lenet.bin
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image c:\images\3.png is inverted to white over black
[ WARNING ] Image c:\images\3.png is resized from (351, 353) to (28, 28)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: c:\images\3.png
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 3       1.0000000
[ INFO ] 9       0.0000000
[ INFO ] 8       0.0000000
[ INFO ] 7       0.0000000
[ INFO ] 6       0.0000000
[ INFO ] 5       0.0000000
[ INFO ] 4       0.0000000
[ INFO ] 2       0.0000000
[ INFO ] 1       0.0000000
[ INFO ] 0       0.0000000
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
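For reference, the top-N ranking shown in the log can be reproduced with a few lines of NumPy. This is a sketch assuming `probs` is the softmax output array from the inference snippet above:

```python
import numpy as np

def top_n(probs, n=10):
    """Return (class_id, probability) pairs for the n highest scores."""
    probs = np.squeeze(probs)
    top_ids = np.argsort(probs)[-n:][::-1]
    return [(int(i), float(probs[i])) for i in top_ids]

# Usage with the inference result from the earlier sketch:
# for class_id, prob in top_n(result[output_blob]):
#     print(f'{class_id:7d} {prob:.7f}')
```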
## See Also
- Integrate the Inference Engine with Your Application
- Using Inference Engine Samples
- [Model Downloader](@ref omz_tools_downloader)
- Model Optimizer