# Image Classification Python* Sample Async

This sample demonstrates how to run the Image Classification sample application with inference executed in the asynchronous mode.

The sample demonstrates how to use the new Infer Request API of the Inference Engine in applications. Refer to Integrate the Inference Engine New Request API with Your Application for details. Using classification networks as an example, the sample builds and executes an inference request 10 times in asynchronous mode. Asynchronous mode can increase the throughput of image processing.

Batch mode is an attribute independent of asynchronous mode: asynchronous mode works efficiently with any batch size.

## How It Works

Upon start-up, the sample application reads command-line parameters and loads the specified network and input images (or a folder with images) into the Inference Engine plugin. The batch size of the network is set according to the number of images read.
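For illustration, a minimal sketch of that batch-size step with the Inference Engine Python API; the model paths and image list are placeholders, not the sample's actual variable names:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='model.xml', weights='model.bin')  # placeholder paths
images = ['cat.bmp', 'dog.bmp']  # stands in for the paths parsed from -i
net.batch_size = len(images)     # one batch slot per input image
```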

Then, the sample creates an inference request object and assigns a completion callback to it. Within the completion callback handler, the inference request is executed again.

After that, the application starts inference for the first infer request and waits until the 10th inference request execution has completed.

When inference is done, the application outputs data to the standard output stream.
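The flow above can be sketched with the Inference Engine Python API roughly as follows. This is a minimal illustration rather than the sample itself: the model paths, device, and zero-filled input are placeholder assumptions.

```python
import threading

import numpy as np
from openvino.inference_engine import IECore

NUM_RUNS = 10
state = {'remaining': NUM_RUNS}
done = threading.Event()

ie = IECore()
net = ie.read_network(model='model.xml', weights='model.bin')  # placeholder paths
input_blob = next(iter(net.input_info))
exec_net = ie.load_network(network=net, device_name='CPU', num_requests=1)
request = exec_net.requests[0]

def completion_callback(status, request):
    # Runs when an inference finishes; re-submit until NUM_RUNS runs are done.
    state['remaining'] -= 1
    if state['remaining'] > 0:
        request.async_infer()  # reuse the input blobs already set on the request
    else:
        done.set()

request.set_completion_callback(completion_callback, py_data=request)

# Dummy input of the expected shape; the real sample reads and batches images.
shape = net.input_info[input_blob].input_data.shape
request.async_infer({input_blob: np.zeros(shape, dtype=np.float32)})

done.wait()  # block until the 10th run has completed
results = request.output_blobs  # outputs of the last run
```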

> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
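For example, a Caffe model could be reconverted with that flag as follows (a sketch; `mo.py` is the Model Optimizer entry point and the path is a placeholder):

```sh
python3 mo.py --input_model <path_to_model>/alexnet.caffemodel --reverse_input_channels
```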

## Running

Run the application with the `-h` option to see the usage message:

```sh
python3 classification_sample_async.py -h
```

The command yields the following usage message:

```
usage: classification_sample_async.py [-h] -m MODEL -i INPUT [INPUT ...]
                                      [-l CPU_EXTENSION]
                                      [-d DEVICE] [--labels LABELS]
                                      [-nt NUMBER_TOP]

Options:
  -h, --help            Show this help message and exit.
  -m MODEL, --model MODEL
                        Required. Path to an .xml file with a trained model.
  -i INPUT [INPUT ...], --input INPUT [INPUT ...]
                        Required. Path to a folder with images or path to an
                        image files
  -l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
                        Optional. Required for CPU custom layers. Absolute
                        path to a shared library with the kernels
                        implementations.
  -d DEVICE, --device DEVICE
                        Optional. Specify the target device to infer on; CPU,
                        GPU, FPGA, HDDL or MYRIAD is acceptable. The sample
                        will look for a suitable plugin for device specified.
                        Default value is CPU
  --labels LABELS       Optional. Labels mapping file
  -nt NUMBER_TOP, --number_top NUMBER_TOP
                        Optional. Number of top results
```

Running the application with an empty list of options yields the usage message given above and an error message.

To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models using the [Model Downloader](@ref omz_tools_downloader_README).
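For example, assuming the command is run from the Model Downloader tools directory, AlexNet can be fetched like this:

```sh
python3 downloader.py --name alexnet
```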

> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the Model Optimizer tool.
>
> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
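If the model was fetched with the Model Downloader, its companion converter script can produce the IR files (a sketch, assuming the Open Model Zoo tools layout):

```sh
python3 converter.py --name alexnet
```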

You can perform inference on an image using a trained AlexNet network on FPGA with a fallback to CPU using the following command:

```sh
python3 classification_sample_async.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d HETERO:FPGA,CPU
```

## Sample Output

By default, the application outputs the top-10 inference results for each infer request. It also reports the throughput value measured in frames per second.

## See Also