[IE Python Sample] Update docs (#9807)

* update hello_classification readme

* update classification_async readme

* update hello_query_device readme

* Fix hello_classification launch line

* Update hello_reshape_ssd readme

* update speech sample docs

* update ngraph sample docs

* fix launch command

* refactor py ngraph imports

* Replace `network` with `model`

* update example section with openvino-dev

* Update samples/python/classification_sample_async/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/classification_sample_async/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/hello_classification/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/hello_classification/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/hello_reshape_ssd/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/ngraph_function_creation_sample/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/ngraph_function_creation_sample/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/ngraph_function_creation_sample/README.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Update samples/python/ngraph_function_creation_sample/README.md

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>

* Replace `Inference Engine` with `OpenVINO`

* fix ngraph ref

* Replace `Inference Engine` by `OpenVINO™ Runtime`

* Fix IR mentions

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Author: Dmitry Pigasin
Date: 2022-02-14 19:03:45 +03:00
Committed by: GitHub
Parent: 310eb81403
Commit: 3a5d821219

12 changed files with 567 additions and 548 deletions

samples/python/hello_reshape_ssd/README.md:

@@ -1,67 +1,35 @@
 # Hello Reshape SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README}
-This sample demonstrates how to do synchronous inference of object detection networks using [Shape Inference feature](../../../docs/OV_Runtime_UG/ShapeInference.md).
+This sample demonstrates how to do synchronous inference of object detection models using [Shape Inference feature](../../../docs/OV_Runtime_UG/ShapeInference.md).
 Models with only 1 input and output are supported.
-The following Inference Engine Python API is used in the application:
+The following Python API is used in the application:
-| Feature | API | Description |
-| :----------------------- | :-------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- |
-| Network Operations | [IENetwork.reshape] | Managing of network: configure input and output blobs |
-| Custom Extension Kernels | [IECore.add_extension], [IECore.set_config] | Load extension library and config to the device |
+| Feature | API | Description |
+| :--------------- | :---------------------------------------------------------------------------------------------------------------------------------------- | :---------------- |
+| Model Operations | [openvino.runtime.Model.reshape], [openvino.runtime.Model.input], [openvino.runtime.Output.get_any_name], [openvino.runtime.PartialShape] | Managing of model |
-Basic Inference Engine API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
+Basic OpenVINO™ Runtime API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).
-| Options | Values |
-| :------------------------- | :-------------------------------------------------------------------------------------------------------------------------- |
-| Validated Models | [mobilenet-ssd](@ref omz_models_model_mobilenet_ssd) |
-| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
-| Supported devices | [All](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../samples/cpp/hello_reshape_ssd/README.md) |
+| Options | Values |
+| :------------------------- | :----------------------------------------------------------------------- |
+| Validated Models | [ssdlite_mobilenet_v2](@ref omz_models_model_ssdlite_mobilenet_v2) |
+| Model Format | OpenVINO™ toolkit Intermediate Representation (.xml + .bin), ONNX (.onnx) |
+| Supported devices | [All](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../samples/cpp/hello_reshape_ssd/README.md) |
 ## How It Works
-At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data.
+At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the OpenVINO™ Runtime plugin, performs synchronous inference, and processes output data.
 As a result, the program creates an output image, logging each step in a standard output stream.
 You can see the explicit description of
-each sample step at [Integration Steps](../../../docs/OV_Runtime_UG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/OV_Runtime_UG/Integrate_with_customer_application_new_API.md) section of "Integrate the OpenVINO™ Runtime with Your Application" guide.
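
For orientation, here is a minimal sketch of the flow the updated section describes, written against the 2022.1 `openvino.runtime` API this commit migrates to. The file paths and the `CPU` device name are placeholders, and the actual sample additionally configures preprocessing (layout and precision) before compiling the model, which is omitted here:

```python
import cv2
import numpy as np
from openvino.runtime import Core, PartialShape

core = Core()                            # create the OpenVINO Runtime Core
model = core.read_model('model.xml')     # read an IR (.xml + .bin) or ONNX model

image = cv2.imread('image.jpg')          # prepare input data
input_tensor = np.expand_dims(image, 0)  # add the N dimension (NHWC)

# reshape the model input to the height and width of the input image
n, h, w, c = input_tensor.shape
model.reshape({model.input().get_any_name(): PartialShape((n, c, h, w))})

compiled_model = core.compile_model(model, 'CPU')              # load the model to the device
results = compiled_model.infer_new_request({0: input_tensor})  # synchronous inference
```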
 ## Running
-Run the application with the `-h` option to see the usage message:
 ```
-python <path_to_sample>/hello_reshape_ssd.py -h
-```
-Usage message:
-```
-usage: hello_reshape_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION] [-c CONFIG]
-                            [-d DEVICE] [--labels LABELS]
-Options:
-  -h, --help            Show this help message and exit.
-  -m MODEL, --model MODEL
-                        Required. Path to an .xml or .onnx file with a trained
-                        model.
-  -i INPUT, --input INPUT
-                        Required. Path to an image file.
-  -l EXTENSION, --extension EXTENSION
-                        Optional. Required by the CPU Plugin for executing the
-                        custom operation on a CPU. Absolute path to a shared
-                        library with the kernels implementations.
-  -c CONFIG, --config CONFIG
-                        Optional. Required by GPU or VPU Plugins for the
-                        custom operation kernel. Absolute path to operation
-                        description file (.xml).
-  -d DEVICE, --device DEVICE
-                        Optional. Specify the target device to infer on; CPU,
-                        GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
-                        will look for a suitable plugin for device specified.
-                        Default value is CPU.
-  --labels LABELS       Optional. Path to a labels mapping file.
+python hello_reshape_ssd.py <path_to_model> <path_to_image> <device_name>
 ```
 To run the sample, you need specify a model and image:
@@ -70,28 +38,35 @@ To run the sample, you need specify a model and image:
 > **NOTES**:
 >
-> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model.md).
+> - By default, OpenVINO™ Toolkit Samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model.md).
 >
-> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
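
As an aside to the first note above: the "manually rearrange the default channels order" option amounts to a one-line change on an image loaded with OpenCV. A generic illustration, not part of this commit:

```python
import cv2

image = cv2.imread('input.jpg')  # OpenCV loads images in BGR channel order
image_rgb = image[:, :, ::-1]    # reverse the channel axis: BGR -> RGB
```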
 ### Example
-1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):
-   ```
-   python <path_to_omz_tools>/downloader.py --name mobilenet-ssd
-   ```
-2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
+1. Install the `openvino-dev` Python package to use Open Model Zoo Tools:
-   ```
-   python <path_to_omz_tools>/converter.py --name mobilenet-ssd
+   ```
+   python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
-   ```
-3. Perform inference of `car.bmp` using `mobilenet-ssd` model on a `GPU`, for example:
+   ```
+2. Download a pre-trained model:
+   ```
+   omz_downloader --name ssdlite_mobilenet_v2
+   ```
+3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:
-   ```
-   python <path_to_sample>/hello_reshape_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/car.bmp -d GPU
+   ```
+   omz_converter --name ssdlite_mobilenet_v2
-   ```
+   ```
+4. Perform inference of `banana.jpg` using `ssdlite_mobilenet_v2` model on a `GPU`, for example:
+   ```
+   python hello_reshape_ssd.py ssdlite_mobilenet_v2.xml banana.jpg GPU
+   ```
 ## Sample Output
@@ -99,34 +74,24 @@ python <path_to_sample>/hello_reshape_ssd.py -m <path_to_model>/mobilenet-ssd.xm
 The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with an over 50% confidence.
 ```
-[ INFO ] Creating Inference Engine
-[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\mobilenet-ssd\FP32\mobilenet-ssd.xml
-[ INFO ] Configuring input and output blobs
-[ INFO ] Reshaping the network to the height and width of the input image
-[ INFO ] Input shape before reshape: [1, 3, 300, 300]
-[ INFO ] Input shape after reshape: [1, 3, 637, 749]
+[ INFO ] Creating OpenVINO Runtime Core
+[ INFO ] Reading the model: C:/test_data/models/ssdlite_mobilenet_v2.xml
+[ INFO ] Reshaping the model to the height and width of the input image
 [ INFO ] Loading the model to the plugin
 [ INFO ] Starting inference in synchronous mode
-[ INFO ] Found: label = 7, confidence = 0.99, coords = (283, 166), (541, 472)
+[ INFO ] Found: class_id = 52, confidence = 0.98, coords = (21, 98), (276, 210)
 [ INFO ] Image out.bmp was created!
 [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
 ```
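
For context on the `Found: class_id = ..., coords = ...` lines in this log: they come from decoding the detection output. A sketch of that decoding, assuming the common SSD `DetectionOutput` layout `[image_id, class_id, confidence, x_min, y_min, x_max, y_max]` with coordinates normalized to [0, 1]; `image`, `detections`, `h`, and `w` are assumed to come from the earlier steps:

```python
import cv2

for detection in detections:  # detections: model output reshaped to [N, 7]
    _, class_id, confidence, xmin, ymin, xmax, ymax = detection
    if confidence > 0.5:  # the sample only reports results with over 50% confidence
        x1, y1 = int(xmin * w), int(ymin * h)
        x2, y2 = int(xmax * w), int(ymax * h)
        print(f'Found: class_id = {int(class_id)}, confidence = {confidence:.2f}, '
              f'coords = ({x1}, {y1}), ({x2}, {y2})')
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite('out.bmp', image)  # the sample saves the annotated image as out.bmp
```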
 ## See Also
-- [Integrate the Inference Engine with Your Application](../../../docs/OV_Runtime_UG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../docs/OV_Runtime_UG/Samples_Overview.md)
+- [Integrate the OpenVINO™ Runtime with Your Application](../../../docs/OV_Runtime_UG/Integrate_with_customer_application_new_API.md)
+- [Using OpenVINO™ Toolkit Samples](../../../docs/OV_Runtime_UG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
 - [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
-[IECore]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1IECore.html
-[IECore.add_extension]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
-[IECore.set_config]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1IECore.html#a2c738cee90fca27146e629825c039a05
-[IECore.read_network]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f
-[IENetwork.input_info]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
-[IENetwork.outputs]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1IENetwork.html#data_fields
-[InputInfoPtr.precision]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
-[DataPtr.precision]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
-[IECore.load_network]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
-[IENetwork.reshape]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1IENetwork.html#a6683f0291db25f908f8d6720ab2f221a
-[ExecutableNetwork.infer]:https://docs.openvino.ai/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
+<!-- [openvino.runtime.Model.reshape]:
+[openvino.runtime.Model.input]:
+[openvino.runtime.Output.get_any_name]:
+[openvino.runtime.PartialShape]: -->

samples/python/hello_reshape_ssd/hello_reshape_ssd.py:

@@ -18,7 +18,7 @@ def main():
     # Parsing and validation of input arguments
     if len(sys.argv) != 4:
-        log.info('Usage: <path_to_model> <path_to_image> <device_name>')
+        log.info(f'Usage: {sys.argv[0]} <path_to_model> <path_to_image> <device_name>')
         return 1
     model_path = sys.argv[1]
@@ -30,7 +30,7 @@ def main():
     core = Core()
     # --------------------------- Step 2. Read a model --------------------------------------------------------------------
-    log.info(f'Reading the network: {model_path}')
+    log.info(f'Reading the model: {model_path}')
     # (.xml and .bin files) or (.onnx file)
     model = core.read_model(model_path)
@@ -48,7 +48,7 @@ def main():
     # Add N dimension
     input_tensor = np.expand_dims(image, 0)
-    log.info('Reshaping the network to the height and width of the input image')
+    log.info('Reshaping the model to the height and width of the input image')
     n, h, w, c = input_tensor.shape
     model.reshape({model.input().get_any_name(): PartialShape((n, c, h, w))})
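
The last hunk ends right after the reshape call. A quick way to confirm the reshape took effect, sketched with the names from the hunk above (`Output.get_partial_shape` is a standard `openvino.runtime` call; this snippet is illustrative and not part of the commit):

```python
# sketch: inspect the model input shape around the reshape
print('Input shape before reshape:', model.input().get_partial_shape())
model.reshape({model.input().get_any_name(): PartialShape((n, c, h, w))})
print('Input shape after reshape:', model.input().get_partial_shape())
```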