# Hello Classification Python* Sample
This sample demonstrates how to do inference on image classification networks using the Synchronous Inference Request API.
Only models with one input and one output are supported.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
|---|---|---|
| Basic Infer Flow | IECore, IECore.read_network, IECore.load_network | Common API to do inference |
| Synchronous Infer | ExecutableNetwork.infer | Do synchronous inference |
| Network Operations | IENetwork.input_info, IENetwork.outputs, InputInfoPtr.precision, DataPtr.precision, InputInfoPtr.input_data.shape | Managing the network: configuring input and output blobs |
| Options | Values |
|---|---|
| Validated Models | alexnet |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | All |
| Other language realization | C++, C |
## How It Works
At startup, the sample application reads command-line parameters, prepares input data, loads the specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in the standard output stream.
For an explicit description of each sample step, see the Integration Steps section of the "Integrate the Inference Engine with Your Application" guide.
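For orientation, the core of this flow can be sketched with the Inference Engine Python API listed in the table above. This is a simplified illustration rather than the full sample; the model and image paths and the `CPU` device are placeholders:

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

# Placeholder paths; replace with your own model and image
model_path = 'alexnet.xml'
image_path = 'cat.bmp'

ie = IECore()                                # create the Inference Engine core object
net = ie.read_network(model=model_path)      # read the IR (.xml + .bin) or ONNX model
input_blob = next(iter(net.input_info))      # the sample expects a single input ...
output_blob = next(iter(net.outputs))        # ... and a single output

# Resize the image to the network input shape (N, C, H, W) and reorder HWC -> CHW
n, c, h, w = net.input_info[input_blob].input_data.shape
image = cv2.resize(cv2.imread(image_path), (w, h)).transpose((2, 0, 1))
image = np.expand_dims(image, 0)

exec_net = ie.load_network(network=net, device_name='CPU')   # load the model to the device
result = exec_net.infer(inputs={input_blob: image})          # synchronous inference
probs = result[output_blob]                                  # raw per-class scores
```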
## Running
Run the application with the -h option to see the usage message:
```sh
python hello_classification.py -h
```
Usage message:
```
usage: hello_classification.py [-h] -m MODEL -i INPUT [-d DEVICE]
                               [--labels LABELS] [-nt NUMBER_TOP]

Options:
  -h, --help            Show this help message and exit.
  -m MODEL, --model MODEL
                        Required. Path to an .xml or .onnx file with a trained
                        model.
  -i INPUT, --input INPUT
                        Required. Path to an image file.
  -d DEVICE, --device DEVICE
                        Optional. Specify the target device to infer on; CPU,
                        GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
                        will look for a suitable plugin for device specified.
                        Default value is CPU.
  --labels LABELS       Optional. Path to a labels mapping file.
  -nt NUMBER_TOP, --number_top NUMBER_TOP
                        Optional. Number of top results.
```
To run the sample, you need to specify a model and an image:
- You can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README); an example download command is shown after this list.
- You can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
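For example, assuming the Open Model Zoo tools are installed, the `alexnet` model used below can be downloaded and converted to the IR format with the Model Downloader scripts (the exact invocation may differ depending on how the tools were installed):

```sh
python downloader.py --name alexnet
python converter.py --name alexnet
```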
**NOTES**:

- By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified (a sketch of the manual rearrangement is shown after these notes). For more information about the argument, refer to the **When to Reverse Input Channels** section of Converting a Model Using General Conversion Parameters.
- Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
- The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
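As an illustration of the first note above, a BGR image loaded with OpenCV can be rearranged to RGB in the application itself. This is a minimal sketch and not part of the shipped sample; the file name is a placeholder:

```python
import cv2

# OpenCV reads images in BGR channel order. If the model was trained on RGB
# input and was not reconverted with --reverse_input_channels, swap the
# channels before feeding the image to the network.
image_bgr = cv2.imread('cat.bmp')                        # placeholder path
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)   # BGR -> RGB
```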
For example, to perform inference of an image using a pre-trained model on a GPU, run the following command:
```sh
python hello_classification.py -m <path_to_model>/alexnet.xml -i <path_to_image>/cat.bmp -d GPU
```
## Sample Output
The sample application logs each step in the standard output stream and outputs the top-10 inference results.
```
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\alexnet.xml
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image images\cat.bmp is resized from (300, 300) to (227, 227)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: images\cat.bmp
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 435 0.0996890
[ INFO ] 876 0.0900242
[ INFO ] 999 0.0691449
[ INFO ] 587 0.0390189
[ INFO ] 666 0.0360393
[ INFO ] 419 0.0308307
[ INFO ] 285 0.0306287
[ INFO ] 700 0.0293009
[ INFO ] 696 0.0202707
[ INFO ] 631 0.0199126
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
```
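The top-N selection shown above can be reproduced from the raw output of the network. A minimal sketch, assuming `probs` is the output array obtained from `ExecutableNetwork.infer` (as in the flow sketch earlier) and `number_top` corresponds to the `-nt` option:

```python
import numpy as np

number_top = 10
probs = np.squeeze(probs)                          # e.g. shape (1000,) for alexnet
top_ids = np.argsort(probs)[-number_top:][::-1]    # indices of the N highest scores

print('classid probability')
print('-------------------')
for class_id in top_ids:
    print(f'{class_id:7d} {probs[class_id]:.7f}')
```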
## See Also
- Integrate the Inference Engine with Your Application
- Using Inference Engine Samples
- [Model Downloader](@ref omz_tools_downloader_README)
- Model Optimizer