diff --git a/inference-engine/ie_bridges/c/samples/hello_classification/README.md b/inference-engine/ie_bridges/c/samples/hello_classification/README.md index 26babb9c20c..2b0ca163ac0 100644 --- a/inference-engine/ie_bridges/c/samples/hello_classification/README.md +++ b/inference-engine/ie_bridges/c/samples/hello_classification/README.md @@ -47,33 +47,45 @@ To run the sample, you need specify a model and image: > > - The sample accepts models in ONNX format (\*.onnx) that do not require preprocessing. -You can do inference of an image using a trained AlexNet network on a GPU using the following command: +### Example +1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README): +``` +python /downloader.py --name alexnet +``` -```sh -./hello_classification_c /alexnet_fp32.xml /cat.png GPU +2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script: + +``` +python /converter.py --name alexnet +``` + +3. Perform inference of `car.bmp` using `alexnet` model on a `GPU`, for example: + +``` +/hello_classification_c /alexnet.xml /car.bmp GPU ``` ## Sample Output The application outputs top-10 inference results. 
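For readers porting these samples, the top-10 selection behind the listing that follows is just a descending sort over the raw score vector. A minimal sketch in plain Python (the helper and the synthetic scores are illustrative, not the sample's actual code or output):

```python
def top_n(probabilities, n=10):
    """Return (class_id, probability) pairs sorted by descending probability."""
    indexed = list(enumerate(probabilities))
    indexed.sort(key=lambda pair: pair[1], reverse=True)
    return indexed[:n]

# Synthetic scores for a 5-class model; a real AlexNet head outputs 1000 values.
scores = [0.05, 0.60, 0.10, 0.20, 0.05]
for class_id, prob in top_n(scores, n=3):
    print(f"{class_id}\t{prob:.7f}")
```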
-```sh +``` Top 10 results: -Image /opt/intel/openvino/deployment_tools/demo/car.png +Image C:\images\car.bmp classid probability ------- ----------- -479 0.7562205 -511 0.0760381 -436 0.0724111 -817 0.0462140 -656 0.0301231 -661 0.0056171 -581 0.0031622 -468 0.0029917 -717 0.0023081 -627 0.0016193 +656 0.666479 +654 0.112940 +581 0.068487 +874 0.033385 +436 0.026132 +817 0.016731 +675 0.010980 +511 0.010592 +569 0.008178 +717 0.006336 This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool ``` diff --git a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md index 1456c87a6db..0479ae90278 100644 --- a/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md +++ b/inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md @@ -62,17 +62,29 @@ ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv > > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. -You can perform inference on an NV12 image using a trained AlexNet network on a CPU with the following command: +### Example +1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README): +``` +python /downloader.py --name alexnet +``` -```sh -./hello_nv12_input_classification_c /alexnet_fp32.xml /cat.yuv 300x300 CPU +2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script: + +``` +python /converter.py --name alexnet +``` + +3. Perform inference of an NV12 image using `alexnet` model on a `CPU`, for example: + +``` +/hello_nv12_input_classification_c /alexnet.xml /cat.yuv 300x300 CPU ``` ## Sample Output The application outputs top-10 inference results.
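Note that the `300x300` size argument in the command above must describe the raw `.yuv` file: an NV12 frame holds a full-resolution Y plane plus a 2x2-subsampled interleaved UV plane, so a frame of W x H pixels occupies `W * H * 3 / 2` bytes. A quick sanity check (the helper name is hypothetical):

```python
def nv12_frame_size(width, height):
    # Y plane: width*height bytes; interleaved UV plane: width*height/2 bytes
    # (chroma is 2x2 subsampled, two bytes per 2x2 block).
    return width * height * 3 // 2

print(nv12_frame_size(300, 300))  # → 135000
```

If the file size does not match this value for the dimensions you pass, the sample will read a garbled frame.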
-```sh +``` Top 10 results: Image ./cat.yuv diff --git a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md index 1a046ebe32c..f253da02ec2 100644 --- a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md +++ b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md @@ -45,10 +45,10 @@ To run the sample, you need specify a model and image: - you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README). - you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data. -Running the application with the -h option yields the following usage message: +Running the application with the `-h` option yields the following usage message: -```sh -./object_detection_sample_ssd_c -h +``` +/object_detection_sample_ssd_c -h [ INFO ] InferenceEngine: [ INFO ] Parsing input parameters @@ -76,24 +76,36 @@ Options: > > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. -For example, to perform inference on a CPU with the OpenVINO™ toolkit person detection SSD models, run one of the following commands: +### Example +1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README): +``` +python /downloader.py --name person-detection-retail-0013 +``` + +2. The `person-detection-retail-0013` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script: + +``` +python /converter.py --name +``` + +3. 
For example, to perform inference on a CPU with the OpenVINO™ toolkit person detection SSD models, run one of the following commands: - with one image and [person-detection-retail-0013](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) model -```sh -./object_detection_sample_ssd_c -i /inputImage.bmp -m /person-detection-retail-0013.xml -d CPU +``` +/object_detection_sample_ssd_c -i /inputImage.bmp -m /person-detection-retail-0013.xml -d CPU ``` - with some images and [person-detection-retail-0013](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) model -```sh -./object_detection_sample_ssd_c -i /inputImage1.bmp /inputImage2.bmp ... -m /person-detection-retail-0013.xml -d CPU +``` +/object_detection_sample_ssd_c -i /inputImage1.bmp /inputImage2.bmp ... -m /person-detection-retail-0013.xml -d CPU ``` - with [person-detection-retail-0002](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0002_description_person_detection_retail_0002.html) model -```sh -./object_detection_sample_ssd_c -i -m /person-detection-retail-0002.xml -d CPU +``` +/object_detection_sample_ssd_c -i -m /person-detection-retail-0002.xml -d CPU ``` ## Sample Output @@ -101,8 +113,8 @@ For example, to perform inference on a CPU with the OpenVINO™ toolkit pers The application outputs several images (`out_0.bmp`, `out_1.bmp`, ... ) with detected objects enclosed in rectangles. It outputs the list of classes of the detected objects along with the respective confidence values and the coordinates of the rectangles to the standard output stream. 
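Each record an SSD-style network emits is a 7-element detection `[image_id, label, confidence, x_min, y_min, x_max, y_max]` with corner coordinates normalized to [0, 1]; the sample keeps detections above a confidence threshold and scales the corners to pixels before drawing. A hedged sketch of that post-processing (plain Python with synthetic records, not the sample's actual implementation):

```python
def filter_detections(detections, image_w, image_h, threshold=0.5):
    """Keep records above the threshold, scaling normalized corners to pixels."""
    results = []
    for image_id, label, conf, x_min, y_min, x_max, y_max in detections:
        if conf > threshold:
            results.append((int(label), conf,
                            (int(x_min * image_w), int(y_min * image_h)),
                            (int(x_max * image_w), int(y_max * image_h))))
    return results

# Two synthetic detections; only the first clears the 50% threshold.
raw = [(0, 15, 0.93, 0.1, 0.2, 0.5, 0.8),
       (0, 15, 0.21, 0.0, 0.0, 0.3, 0.3)]
print(filter_detections(raw, 300, 300))  # → [(15, 0.93, (30, 60), (150, 240))]
```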
-```sh -object_detection_sample_ssd_c -m person-detection-retail-0013.xml -i image_1.png image_2.jpg +``` +/object_detection_sample_ssd_c -m person-detection-retail-0013.xml -i image_1.png image_2.jpg [ INFO ] InferenceEngine: diff --git a/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md b/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md index c757df57d44..67a8282beaa 100644 --- a/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md +++ b/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md @@ -28,15 +28,15 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with ## Running -Run the application with the -h option to see the usage message: +Run the application with the `-h` option to see the usage message: -```sh -python classification_sample_async.py -h +``` +python /classification_sample_async.py -h ``` Usage message: -```sh +``` usage: classification_sample_async.py [-h] -m MODEL -i INPUT [INPUT ...] [-l EXTENSION] [-c CONFIG] [-d DEVICE] [--labels LABELS] [-nt NUMBER_TOP] @@ -79,55 +79,67 @@ To run the sample, you need specify a model and image: > > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. -You can do inference of an image using a pre-trained model on a GPU using the following command: +### Example +1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README): +``` +python /downloader.py --name alexnet +``` -```sh -python classification_sample_async.py -m /alexnet.xml -i /cat.bmp /car.bmp -d GPU +2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script: + +``` +python /converter.py --name alexnet +``` + +3. 
Perform inference of `car.bmp` and `cat.jpg` using `alexnet` model on a `GPU`, for example: + +``` +python /classification_sample_async.py -m /alexnet.xml -i /car.bmp /cat.jpg -d GPU ``` ## Sample Output The sample application logs each step in a standard output stream and outputs top-10 inference results. -```sh +``` [ INFO ] Creating Inference Engine -[ INFO ] Reading the network: models\alexnet.xml +[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\alexnet\FP32\alexnet.xml [ INFO ] Configuring input and output blobs [ INFO ] Loading the model to the plugin -[ WARNING ] Image images\cat.bmp is resized from (300, 300) to (227, 227) -[ WARNING ] Image images\car.bmp is resized from (259, 787) to (227, 227) +[ WARNING ] Image c:\images\car.bmp is resized from (637, 749) to (227, 227) +[ WARNING ] Image c:\images\cat.jpg is resized from (300, 300) to (227, 227) [ INFO ] Starting inference in asynchronous mode [ INFO ] Infer request 0 returned 0 -[ INFO ] Image path: images\cat.bmp +[ INFO ] Image path: c:\images\car.bmp [ INFO ] Top 10 results: [ INFO ] classid probability [ INFO ] ------------------- -[ INFO ] 435 0.0996898 -[ INFO ] 876 0.0900239 -[ INFO ] 999 0.0691452 -[ INFO ] 587 0.0390186 -[ INFO ] 666 0.0360390 -[ INFO ] 419 0.0308306 -[ INFO ] 285 0.0306287 -[ INFO ] 700 0.0293007 -[ INFO ] 696 0.0202707 -[ INFO ] 631 0.0199126 +[ INFO ] 656 0.6645315 +[ INFO ] 654 0.1121185 +[ INFO ] 581 0.0698451 +[ INFO ] 874 0.0334973 +[ INFO ] 436 0.0259718 +[ INFO ] 817 0.0173190 +[ INFO ] 675 0.0109321 +[ INFO ] 511 0.0109075 +[ INFO ] 569 0.0083093 +[ INFO ] 717 0.0063173 [ INFO ] [ INFO ] Infer request 1 returned 0 -[ INFO ] Image path: images\car.bmp +[ INFO ] Image path: c:\images\cat.jpg [ INFO ] Top 10 results: [ INFO ] classid probability [ INFO ] ------------------- -[ INFO ] 479 0.7561803 -[ INFO ] 511 0.0755696 -[ INFO ] 436 0.0730265 -[ INFO ] 817 0.0460268 -[ INFO ] 656 0.0303792 -[ INFO ] 661 0.0055282 -[ 
INFO ] 581 0.0031296 -[ INFO ] 468 0.0029875 -[ INFO ] 717 0.0022792 -[ INFO ] 627 0.0016297 +[ INFO ] 876 0.1320105 +[ INFO ] 435 0.1210389 +[ INFO ] 285 0.0712640 +[ INFO ] 282 0.0570528 +[ INFO ] 281 0.0319335 +[ INFO ] 999 0.0285931 +[ INFO ] 94 0.0270323 +[ INFO ] 36 0.0240510 +[ INFO ] 335 0.0198461 +[ INFO ] 186 0.0183939 [ INFO ] [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool ``` diff --git a/inference-engine/ie_bridges/python/sample/hello_classification/README.md b/inference-engine/ie_bridges/python/sample/hello_classification/README.md index f5ba3e1ff5a..19bfcacddb0 100644 --- a/inference-engine/ie_bridges/python/sample/hello_classification/README.md +++ b/inference-engine/ie_bridges/python/sample/hello_classification/README.md @@ -13,7 +13,7 @@ The following Inference Engine Python API is used in the application: | Options | Values | | :------------------------- | :-------------------------------------------------------------------------------------------------------- | -| Validated Models | [alexnet](@ref omz_models_model_alexnet) | +| Validated Models | [alexnet](@ref omz_models_model_alexnet), [googlenet-v1](@ref omz_models_model_googlenet_v1) | | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) | | Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) | | Other language realization | [C++](../../../../samples/hello_classification/README.md), [C](../../../c/samples/hello_classification/README.md) | @@ -29,13 +29,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with Run the application with the `-h` option to see the usage message: -```sh -python hello_classification.py -h +``` +python /hello_classification.py -h ``` Usage message: -```sh +``` usage: hello_classification.py [-h] -m MODEL -i INPUT [-d DEVICE] [--labels LABELS] [-nt NUMBER_TOP] @@ -68,37 +68,49 @@ To run the 
sample, you need specify a model and image: > > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. -For example, to perform inference of an image using a pre-trained model on a GPU, run the following command: +### Example +1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README): +``` +python /downloader.py --name alexnet +``` -```sh -python hello_classification.py -m /alexnet.xml -i /cat.bmp -d GPU +2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script: + +``` +python /converter.py --name alexnet +``` + +3. Perform inference of `car.bmp` using `alexnet` model on a `GPU`, for example: + +``` +python /hello_classification.py -m /alexnet.xml -i /car.bmp -d GPU ``` ## Sample Output The sample application logs each step in a standard output stream and outputs top-10 inference results. -```sh +``` [ INFO ] Creating Inference Engine -[ INFO ] Reading the network: models\alexnet.xml +[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\alexnet\FP32\alexnet.xml [ INFO ] Configuring input and output blobs [ INFO ] Loading the model to the plugin -[ WARNING ] Image images\cat.bmp is resized from (300, 300) to (227, 227) +[ WARNING ] Image c:\images\car.bmp is resized from (637, 749) to (227, 227) [ INFO ] Starting inference in synchronous mode -[ INFO ] Image path: images\cat.bmp -[ INFO ] Top 10 results: +[ INFO ] Image path: c:\images\car.bmp +[ INFO ] Top 10 results: [ INFO ] classid probability [ INFO ] ------------------- -[ INFO ] 435 0.0996890 -[ INFO ] 876 0.0900242 -[ INFO ] 999 0.0691449 -[ INFO ] 587 0.0390189 -[ INFO ] 666 0.0360393 -[ INFO ] 419 0.0308307 -[ INFO ] 285 0.0306287 -[ INFO ] 700 0.0293009 -[ INFO ] 696 0.0202707 -[ INFO ] 631 0.0199126 +[ INFO ] 656 0.6645315 +[ INFO ] 654 0.1121185 +[ INFO ] 581 0.0698451 +[ INFO ] 874 0.0334973 +[ INFO ] 436 0.0259718 +[ 
INFO ] 817 0.0173190 +[ INFO ] 675 0.0109321 +[ INFO ] 511 0.0109075 +[ INFO ] 569 0.0083093 +[ INFO ] 717 0.0063173 [ INFO ] [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool ``` diff --git a/inference-engine/ie_bridges/python/sample/hello_query_device/README.md b/inference-engine/ie_bridges/python/sample/hello_query_device/README.md index 41ef7451a94..52bf3e32114 100644 --- a/inference-engine/ie_bridges/python/sample/hello_query_device/README.md +++ b/inference-engine/ie_bridges/python/sample/hello_query_device/README.md @@ -22,15 +22,15 @@ The sample queries all available Inference Engine devices and prints their suppo The sample has no command-line parameters. To see the report, run the following command: -```sh -python hello_query_device.py +``` +python /hello_query_device.py ``` ## Sample Output The application prints all available devices with their supported metrics and default values for configuration parameters. (Some lines are not shown due to length.) 
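The report layout can be sketched without the Inference Engine runtime: given a mapping of device names to their metrics (a hypothetical stand-in for what `IECore` would return), the printing is a straightforward nested loop. A sketch under that assumption:

```python
def format_device_report(devices):
    """Render {device: {metric: value}} in the sample's indented log layout."""
    lines = ["[ INFO ] Available devices:"]
    for name, metrics in devices.items():
        lines.append(f"[ INFO ] {name} :")
        for metric, value in metrics.items():
            lines.append(f"[ INFO ]         {metric}: {value}")
    return "\n".join(lines)

print(format_device_report({"CPU": {"RANGE_FOR_ASYNC_INFER_REQUESTS": "1, 1, 1"}}))
```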
For example: -```sh +``` [ INFO ] Creating Inference Engine [ INFO ] Available devices: [ INFO ] CPU : diff --git a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md index e35804abdc2..ec8903c442c 100644 --- a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md +++ b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md @@ -29,15 +29,15 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with ## Running -Run the application with the -h option to see the usage message: +Run the application with the `-h` option to see the usage message: -```sh -python hello_reshape_ssd.py -h +``` +python /hello_reshape_ssd.py -h ``` Usage message: -```sh +``` usage: hello_reshape_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION] [-c CONFIG] [-d DEVICE] [--labels LABELS] @@ -76,26 +76,38 @@ To run the sample, you need specify a model and image: > > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. -You can do inference of an image using a pre-trained model on a GPU using the following command: +### Example +1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README): +``` +python /downloader.py --name mobilenet-ssd +``` -```sh -python hello_reshape_ssd.py -m /mobilenet-ssd.xml -i /cat.bmp -d GPU +2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script: + +``` +python /converter.py --name mobilenet-ssd +``` + +3. Perform inference of `car.bmp` using `mobilenet-ssd` model on a `GPU`, for example: + +``` +python /hello_reshape_ssd.py -m /mobilenet-ssd.xml -i /car.bmp -d GPU ``` ## Sample Output The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with an over 50% confidence. 
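The defining step of `hello_reshape_ssd` swaps the network's spatial dimensions for the input image's, keeping batch and channel counts. The shape arithmetic is simple; a sketch assuming NCHW layout (helper name is illustrative):

```python
def reshape_to_image(input_shape, image_h, image_w):
    """Replace H and W of an NCHW input shape with the image's dimensions."""
    n, c, _, _ = input_shape
    return [n, c, image_h, image_w]

print(reshape_to_image([1, 3, 300, 300], 637, 749))  # → [1, 3, 637, 749]
```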
-```sh +``` [ INFO ] Creating Inference Engine -[ INFO ] Reading the network: models\mobilenet-ssd.xml +[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\mobilenet-ssd\FP32\mobilenet-ssd.xml [ INFO ] Configuring input and output blobs [ INFO ] Reshaping the network to the height and width of the input image [ INFO ] Input shape before reshape: [1, 3, 300, 300] -[ INFO ] Input shape after reshape: [1, 3, 300, 300] +[ INFO ] Input shape after reshape: [1, 3, 637, 749] [ INFO ] Loading the model to the plugin [ INFO ] Starting inference in synchronous mode -[ INFO ] Found: label = 8, confidence = 1.00, coords = (115, 64), (189, 182) +[ INFO ] Found: label = 7, confidence = 0.99, coords = (283, 166), (541, 472) [ INFO ] Image out.bmp was created! [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool ``` diff --git a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md index 254b9f864ef..097da5ac35f 100644 --- a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md +++ b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md @@ -30,15 +30,15 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with ## Running -Run the application with the -h option to see the usage message: +Run the application with the `-h` option to see the usage message: -```sh -python ngraph_function_creation_sample.py -h +``` +python /ngraph_function_creation_sample.py -h ``` Usage message: -```sh +``` usage: ngraph_function_creation_sample.py [-h] -m MODEL -i INPUT [INPUT ...] 
[-d DEVICE] [--labels LABELS] [-nt NUMBER_TOP] @@ -73,25 +73,25 @@ To run the sample, you need specify a model weights and image: > > - The white over black images will be automatically inverted in color for a better predictions. -You can do inference of an image using a pre-trained model on a GPU using the following command: +For example, you can do inference of `3.png` using the pre-trained model on a `GPU`: -```sh -python ngraph_function_creation_sample.py -m /lenet.bin -i /3.png -d GPU +``` +python /ngraph_function_creation_sample.py -m /lenet.bin -i /3.png -d GPU ``` ## Sample Output The sample application logs each step in a standard output stream and outputs top-10 inference results. -```sh +``` [ INFO ] Creating Inference Engine -[ INFO ] Loading the network using ngraph function with weights from /lenet.bin +[ INFO ] Loading the network using ngraph function with weights from c:\openvino\deployment_tools\inference_engine\samples\python\ngraph_function_creation_sample\lenet.bin [ INFO ] Configuring input and output blobs [ INFO ] Loading the model to the plugin -[ WARNING ] /3.png is inverted to white over black -[ WARNING ] /3.png is is resized from (351, 353) to (28, 28) +[ WARNING ] Image c:\images\3.png is inverted to white over black +[ WARNING ] Image c:\images\3.png is resized from (351, 353) to (28, 28) [ INFO ] Starting inference in synchronous mode -[ INFO ] Image path: /3.png +[ INFO ] Image path: c:\images\3.png [ INFO ] Top 10 results: [ INFO ] classid probability [ INFO ] ------------------- diff --git a/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md b/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md index 85f8f0d7932..020fe3869f9 100644 --- a/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md +++ b/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md @@ -29,15 +29,15 @@ each sample step at [Integration 
Steps](../../../../../docs/IE_DG/Integrate_with ## Running -Run the application with the -h option to see the usage message: +Run the application with the `-h` option to see the usage message: -```sh -python object_detection_sample_ssd.py -h +``` +python /object_detection_sample_ssd.py -h ``` Usage message: -```sh +``` usage: object_detection_sample_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION] [-c CONFIG] [-d DEVICE] [--labels LABELS] @@ -78,23 +78,37 @@ To run the sample, you need specify a model and image: > > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. -You can do inference of an image using a pre-trained model on a GPU using the following command: +### Example +1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README): +``` +python /downloader.py --name mobilenet-ssd +``` -```sh -python object_detection_sample_ssd.py -m /mobilenet-ssd.xml -i /cat.bmp -d GPU +2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script: + +``` +python /converter.py --name mobilenet-ssd +``` + +3. Perform inference of `car.bmp` using `mobilenet-ssd` model on a `GPU`, for example: + +``` +python /object_detection_sample_ssd.py -m /mobilenet-ssd.xml -i /car.bmp -d GPU ``` ## Sample Output The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with an over 50% confidence. 
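Detected corners can land outside the original frame (the network works on a resized input), so drawing code typically clamps each box to the image rectangle before rendering. A small sketch (a hypothetical helper, not part of the sample):

```python
def clamp_box(x_min, y_min, x_max, y_max, image_w, image_h):
    """Clip box corners to the image rectangle [0, w] x [0, h]."""
    def clip(value, upper):
        return max(0, min(value, upper))
    return (clip(x_min, image_w), clip(y_min, image_h),
            clip(x_max, image_w), clip(y_max, image_h))

print(clamp_box(637, 233, 743, 608, image_w=637, image_h=608))  # → (637, 233, 637, 608)
```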
-```sh +``` [ INFO ] Creating Inference Engine -[ INFO ] Reading the network: models\mobilenet-ssd.xml +[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\mobilenet-ssd\FP32\mobilenet-ssd.xml [ INFO ] Configuring input and output blobs [ INFO ] Loading the model to the plugin +[ WARNING ] Image c:\images\car.bmp is resized from (637, 749) to (300, 300) [ INFO ] Starting inference in synchronous mode -[ INFO ] Found: label = 8, confidence = 1.00, coords = (115, 64), (189, 182) +[ INFO ] Found: label = 7, confidence = 1.00, coords = (228, 120), (502, 460) +[ INFO ] Found: label = 7, confidence = 0.95, coords = (637, 233), (743, 608) [ INFO ] Image out.bmp created! [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool ``` diff --git a/inference-engine/ie_bridges/python/sample/speech_sample/README.md b/inference-engine/ie_bridges/python/sample/speech_sample/README.md index 16a918adaae..0d7289145f1 100644 --- a/inference-engine/ie_bridges/python/sample/speech_sample/README.md +++ b/inference-engine/ie_bridges/python/sample/speech_sample/README.md @@ -68,15 +68,15 @@ In addition to performing inference directly from a GNA model file, this option ## Running -Run the application with the -h option to see the usage message: +Run the application with the `-h` option to see the usage message: -```sh -python speech_sample.py -h +``` +python /speech_sample.py -h ``` Usage message: -```sh +``` usage: speech_sample.py [-h] (-m MODEL | -rg IMPORT_GNA_MODEL) -i INPUT [-o OUTPUT] [-r REFERENCE] [-d DEVICE] [-bs BATCH_SIZE] [-qb QUANTIZATION_BITS] @@ -131,8 +131,8 @@ Options: You can use the following model optimizer command to convert a Kaldi nnet1 or nnet2 neural network to Inference Engine Intermediate Representation format: -```sh -python mo.py --framework kaldi --input_model wsj_dnn5b.nnet --counts wsj_dnn5b.counts --remove_output_softmax --output_dir +``` +python /mo.py 
--framework kaldi --input_model wsj_dnn5b.nnet --counts wsj_dnn5b.counts --remove_output_softmax --output_dir ``` The following pre-trained models are available: @@ -147,8 +147,8 @@ All of them can be downloaded from [https://storage.openvinotoolkit.org/models_c You can do inference on IntelĀ® Processors with the GNA co-processor (or emulation library): -```sh -python speech_sample.py -d GNA_AUTO -m wsj_dnn5b.xml -i dev93_10.ark -r dev93_scores_10.ark -o result.npz +``` +python /speech_sample.py -m /wsj_dnn5b.xml -i /dev93_10.ark -r /dev93_scores_10.ark -d GNA_AUTO -o result.npz ``` > **NOTES**: @@ -161,7 +161,7 @@ python speech_sample.py -d GNA_AUTO -m wsj_dnn5b.xml -i dev93_10.ark -r dev93_sc The sample application logs each step in a standard output stream. -```sh +``` [ INFO ] Creating Inference Engine [ INFO ] Reading the network: wsj_dnn5b.xml [ INFO ] Configuring input and output blobs diff --git a/inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md b/inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md index d6ca9f8ba49..9689acacba7 100644 --- a/inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md +++ b/inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md @@ -30,15 +30,15 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with ## Running -Run the application with the -h option to see the usage message: +Run the application with the `-h` option to see the usage message: -```sh -python style_transfer_sample.py -h +``` +python /style_transfer_sample.py -h ``` Usage message: -```sh +``` usage: style_transfer_sample.py [-h] -m MODEL -i INPUT [INPUT ...] [-l EXTENSION] [-c CONFIG] [-d DEVICE] [--original_size] [--mean_val_r MEAN_VAL_R] @@ -90,23 +90,35 @@ To run the sample, you need specify a model and image: > > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. 
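The `--mean_val_r/g/b` options exist because a style-transfer network trained with mean subtraction produces output with those means still removed; restoring a displayable image means adding them back per channel and clipping to the 0-255 byte range. A per-pixel sketch of that step (plain Python; the values are synthetic):

```python
def restore_pixel(channels, mean_values):
    """Add per-channel means back and clip to the 0-255 byte range."""
    return [max(0, min(255, int(round(value + mean))))
            for value, mean in zip(channels, mean_values)]

print(restore_pixel([-10.0, 100.0, 300.0], [5.0, 5.0, 5.0]))  # → [0, 105, 255]
```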
-You can do inference of an image using a pre-trained model on a GPU using the following command: +### Example +1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README): +``` +python /downloader.py --name fast-neural-style-mosaic-onnx +``` -```sh -python style_transfer_sample.py -m /fast-neural-style-mosaic-onnx.onnx -i /car.png /cat.jpg -d GPU +2. The `fast-neural-style-mosaic-onnx` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script: + +``` +python /converter.py --name +``` + +3. Perform inference of `car.bmp` and `cat.jpg` using `fast-neural-style-mosaic-onnx` model on a `GPU`, for example: + +``` +python /style_transfer_sample.py -m /fast-neural-style-mosaic-onnx.onnx -i /car.bmp /cat.jpg -d GPU ``` ## Sample Output The sample application logs each step in a standard output stream and creates an output image (`out_0.bmp`) or a sequence of images (`out_0.bmp`, .., `out_.bmp`) that are redrawn in the style of the style transfer model used. -```sh +``` [ INFO ] Creating Inference Engine -[ INFO ] Reading the network: models\fast-neural-style-mosaic-onnx.onnx +[ INFO ] Reading the network: c:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\fast-neural-style-mosaic-onnx\fast-neural-style-mosaic-onnx.onnx [ INFO ] Configuring input and output blobs [ INFO ] Loading the model to the plugin -[ WARNING ] Image images\car.bmp is resized from (259, 787) to (224, 224) -[ WARNING ] Image images\cat.bmp is resized from (300, 300) to (224, 224) +[ WARNING ] Image c:\images\car.bmp is resized from (637, 749) to (224, 224) +[ WARNING ] Image c:\images\cat.jpg is resized from (300, 300) to (224, 224) [ INFO ] Starting inference in synchronous mode [ INFO ] Image out_0.bmp created! [ INFO ] Image out_1.bmp created!
diff --git a/inference-engine/samples/classification_sample_async/README.md b/inference-engine/samples/classification_sample_async/README.md
index 2504daa1316..4f93e6dd51f 100644
--- a/inference-engine/samples/classification_sample_async/README.md
+++ b/inference-engine/samples/classification_sample_async/README.md
@@ -60,8 +60,8 @@ To run the sample, you need to specify a model and image:
 Running the application with the `-h` option yields the following usage message:
-```sh
-./classification_sample_async -h
+```
+/classification_sample_async -h
 InferenceEngine:
     API version ............
     Build ..................
@@ -85,33 +85,43 @@ Options:
 Running the application with the empty list of options yields the usage message given above and an error message.
-You can do inference of an image using a trained [AlexNet network](https://docs.openvinotoolkit.org/latest/omz_models_model_alexnet.html) on GPU using the following command:
+### Example
+1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README):
+```
+python /downloader.py --name alexnet
+```
-```sh
-./classification_sample_async -m /alexnet_fp32.xml -i /cat.bmp -d GPU
+2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
+
+```
+python /converter.py --name alexnet
+```
+
+3. Perform inference of `car.bmp` using the `alexnet` model on a `GPU`, for example:
+
+```
+/classification_sample_async -m /alexnet.xml -i /car.bmp -d GPU
 ```
 ## Sample Output
 By default the application outputs top-10 inference results for each infer request.
-```sh
-classification_sample_async -m alexnet_fp32/alexnet.xml -i car_1.bmp -d GPU
+```
 [ INFO ] InferenceEngine:
-    API version ............
-    Build ..................
-    Description ....... API
-[ INFO ] Parsing input parameters
+    IE version ......... 2021.4.0
+    Build ........... 2021.4.0-3839-cd81789d294-releases/2021/4
+[ INFO ] Parsing input parameters
 [ INFO ] Files were added: 1
-[ INFO ] car_1.bmp
+[ INFO ] C:\images\car.bmp
 [ INFO ] Loading Inference Engine
 [ INFO ] Device info:
         GPU
-        clDNNPlugin version .........
-        Build ...........
+        clDNNPlugin version ......... 2021.4.0
+        Build ........... 2021.4.0-3839-cd81789d294-releases/2021/4
+
 [ INFO ] Loading network files:
-        alexnet_fp32/alexnet.xml
+[ INFO ] C:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\alexnet\FP32\alexnet.xml
 [ INFO ] Preparing input blobs
 [ WARNING ] Image is resized from (749, 637) to (227, 227)
 [ INFO ] Batch size is 1
@@ -132,20 +142,20 @@ classification_sample_async -m alexnet_fp32/alexnet.xml -i car_1.bmp -d GPU
 Top 10 results:
-Image car_1.bmp
+Image C:\images\car.bmp
 classid probability
 ------- -----------
-656 0.5491584
-874 0.1101241
-654 0.0559816
-436 0.0488046
-581 0.0330480
-705 0.0307707
-734 0.0185521
-627 0.0162536
-675 0.0145008
-757 0.0125437
+656 0.6645315
+654 0.1121185
+581 0.0698451
+874 0.0334973
+436 0.0259718
+817 0.0173190
+675 0.0109321
+511 0.0109075
+569 0.0083093
+717 0.0063173
 [ INFO ] Execution successful
diff --git a/inference-engine/samples/hello_classification/README.md b/inference-engine/samples/hello_classification/README.md
index ccd81725135..2ee81703812 100644
--- a/inference-engine/samples/hello_classification/README.md
+++ b/inference-engine/samples/hello_classification/README.md
@@ -47,33 +47,45 @@ To run the sample, you need to specify a model and image:
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
-You can do inference of an image using a trained AlexNet network on a GPU using the following command:
+### Example
+1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README):
+```
+python /downloader.py --name alexnet
+```
-```sh
-./hello_classification /alexnet_fp32.xml /car.png GPU
+2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
+
+```
+python /converter.py --name alexnet
+```
+
+3. Perform inference of `car.bmp` using the `alexnet` model on a `GPU`, for example:
+
+```
+/hello_classification /alexnet.xml /car.bmp GPU
 ```
 ## Sample Output
 The application outputs top-10 inference results.
-```sh
+```
 Top 10 results:
-Image /opt/intel/openvino/deployment_tools/demo/car.png
+Image C:\images\car.bmp
 classid probability
 ------- -----------
-479 0.7562194
-511 0.0760387
-436 0.0724114
-817 0.0462140
-656 0.0301230
-661 0.0056171
-581 0.0031623
-468 0.0029917
-717 0.0023081
-627 0.0016193
+656 0.6664789
+654 0.1129405
+581 0.0684867
+874 0.0333845
+436 0.0261321
+817 0.0167310
+675 0.0109796
+511 0.0105919
+569 0.0081782
+717 0.0063356
 This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
 ```
diff --git a/inference-engine/samples/hello_nv12_input_classification/README.md b/inference-engine/samples/hello_nv12_input_classification/README.md
index 06a8d6781e5..a643fcf0d5d 100644
--- a/inference-engine/samples/hello_nv12_input_classification/README.md
+++ b/inference-engine/samples/hello_nv12_input_classification/README.md
@@ -64,17 +64,29 @@ ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
-You can perform inference on an NV12 image using a trained AlexNet network on CPU with the following command:
+### Example
+1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README):
+```
+python /downloader.py --name alexnet
+```
-```sh
-./hello_nv12_input_classification /alexnet_fp32.xml /cat.yuv 300x300 CPU
+2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:
+
+```
+python /converter.py --name alexnet
+```
+
+3. Perform inference of an NV12 image using the `alexnet` model on a `CPU`, for example:
+
+```
+/hello_nv12_input_classification /alexnet.xml /cat.yuv 300x300 CPU
 ```
 ## Sample Output
 The application outputs top-10 inference results.
-```sh
+```
 [ INFO ] Files were added: 1
 [ INFO ] ./cat.yuv
 Batch size is 1
diff --git a/inference-engine/samples/hello_query_device/README.md b/inference-engine/samples/hello_query_device/README.md
index 059077c48ad..06a3d087845 100644
--- a/inference-engine/samples/hello_query_device/README.md
+++ b/inference-engine/samples/hello_query_device/README.md
@@ -27,8 +27,8 @@ To build the sample, please use instructions available at [Build the Sample Appl
 To see the required information, run the following:
-```sh
-./hello_query_device -h
+```
+/hello_query_device -h
 Usage : hello_query_device
 ```
@@ -36,8 +36,7 @@ Usage : hello_query_device
 The application prints all available devices with their supported metrics and default values for configuration parameters:
-```sh
-./hello_query_device
+```
 Available devices:
     Device: CPU
     Metrics:
diff --git a/inference-engine/samples/hello_reshape_ssd/README.md b/inference-engine/samples/hello_reshape_ssd/README.md
index 69727c31e2d..4d382f2e590 100644
--- a/inference-engine/samples/hello_reshape_ssd/README.md
+++ b/inference-engine/samples/hello_reshape_ssd/README.md
@@ -51,14 +51,26 @@ To run the sample, you need to specify a model and image:
 You can use the following command to do inference on CPU of an image using a trained SSD network:
-```sh
-hello_reshape_ssd
+```
+/hello_reshape_ssd
 ```
-with one image and [person-detection-retail-0013](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) model
+### Example
+1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README):
+```
+python /downloader.py --name person-detection-retail-0013
+```
-```sh
-hello_reshape_ssd /person-detection-retail-0013.xml /inputImage.bmp CPU 1
+2. The `person-detection-retail-0013` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script:
+
+```
+python /converter.py --name
+```
+
+3. Perform inference of `person_detection.png` using the `person-detection-retail-0013` model on a `GPU`, for example:
+
+```
+/hello_reshape_ssd /person-detection-retail-0013.xml /person_detection.png GPU 1
 ```
 ## Sample Output
@@ -67,13 +79,11 @@ The application renders an image with detected objects enclosed in rectangles. I
 of the detected objects along with the respective confidence values and the coordinates of the
 rectangles to the standard output stream.
-```sh
-hello_reshape_ssd person-detection-retail-0013/FP16/person-detection-retail-0013.xml person_detection.png CPU 1
-
+```
 Resizing network to the image size = [960x1699] with batch = 1
 Resulting input shape = [1,3,960,1699]
 Resulting output shape = [1,1,200,7]
-[0,1] element, prob = 0.721457, bbox = (852.37,187.54)-(983.326,520.672), batch id = 0
+[0,1] element, prob = 0.722292, bbox = (852.382,187.756)-(983.352,520.733), batch id = 0
 The resulting image was saved in the file: hello_reshape_ssd_output.jpg
 This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
diff --git a/inference-engine/samples/ngraph_function_creation_sample/README.md b/inference-engine/samples/ngraph_function_creation_sample/README.md
index 72516cc274d..ad10a9d5f82 100644
--- a/inference-engine/samples/ngraph_function_creation_sample/README.md
+++ b/inference-engine/samples/ngraph_function_creation_sample/README.md
@@ -50,7 +50,7 @@ To run the sample, you need to specify model weights and a ubyte image:
 Running the application with the `-h` option yields the following usage message:
-```sh
+```
 ngraph_function_creation_sample -h
 [ INFO ] InferenceEngine:
     API version ............
@@ -75,8 +75,8 @@ Running the application with the empty list of options yields the usage message
 You can do inference of an image using a pre-trained model on a GPU using the following command:
-```sh
-./ngraph_function_creation_sample -m /lenet.bin -i -d GPU
+```
+/ngraph_function_creation_sample -m /lenet.bin -i -d GPU
 ```
 ## Sample Output
diff --git a/inference-engine/samples/object_detection_sample_ssd/README.md b/inference-engine/samples/object_detection_sample_ssd/README.md
index e011acd5eea..4c466d42a34 100644
--- a/inference-engine/samples/object_detection_sample_ssd/README.md
+++ b/inference-engine/samples/object_detection_sample_ssd/README.md
@@ -41,9 +41,9 @@ To run the sample, you need to specify a model and image:
 - you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
 - you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
-Running the application with the -h option yields the following usage message:
+Running the application with the `-h` option yields the following usage message:
-```sh
+```
 ./object_detection_sample_ssd -h
 InferenceEngine:
     API version ............
@@ -75,18 +75,30 @@ Running the application with the empty list of options yields the usage message
 >
 > - The sample accepts models in ONNX format (\*.onnx) that do not require preprocessing.
-For example, to do inference on a CPU with the OpenVINO™ toolkit person detection SSD models, run one of the following commands:
+### Example
+1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README):
+```
+python /downloader.py --name person-detection-retail-0013
+```
+
+2. The `person-detection-retail-0013` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script:
+
+```
+python /converter.py --name
+```
+
+3. For example, to do inference on a CPU with the OpenVINO™ toolkit person detection SSD models, run one of the following commands:
 - with one image and [person-detection-retail-0013](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) model
-```sh
-./object_detection_sample_ssd -m /person-detection-retail-0013.xml -i /inputImage.bmp -d CPU
+```
+/object_detection_sample_ssd -m /person-detection-retail-0013.xml -i /person_detection.png -d CPU
 ```
 - with one image and [person-detection-retail-0002](https://docs.openvinotoolkit.org/latest/omz_models_intel_person_detection_retail_0002_description_person_detection_retail_0002.html) model
-```sh
-./object_detection_sample_ssd -m /person-detection-retail-0002.xml -i /inputImage.jpg -d GPU
+```
+/object_detection_sample_ssd -m /person-detection-retail-0002.xml -i /person_detection.png -d GPU
 ```
 ## Sample Output
@@ -95,7 +107,7 @@ The application outputs an image (`out_0.bmp`) with detected objects enclosed in
 of the detected objects along with the respective confidence values and the coordinates of the
 rectangles to the standard output stream.
-```sh
+```
 object_detection_sample_ssd -m person-detection-retail-0013\FP16\person-detection-retail-0013.xml -i person_detection.png
 [ INFO ] InferenceEngine:
     API version ............
diff --git a/inference-engine/samples/speech_sample/README.md b/inference-engine/samples/speech_sample/README.md
index caa5b829d70..392b4e1403e 100644
--- a/inference-engine/samples/speech_sample/README.md
+++ b/inference-engine/samples/speech_sample/README.md
@@ -87,8 +87,7 @@ To build the sample, please use instructions available at [Build the Sample Appl
 Running the application with the `-h` option yields the following usage message:
-```sh
-./speech_sample -h
+```
 [ INFO ] InferenceEngine:
     API version ............
     Build ..................
@@ -132,8 +131,8 @@ Running the application with the empty list of options yields the usage message
 You can use the following model optimizer command to convert a Kaldi nnet1 or nnet2 neural network to Inference Engine Intermediate Representation format:
-```sh
-python mo.py --framework kaldi --input_model wsj_dnn5b.nnet --counts wsj_dnn5b.counts --remove_output_softmax --output_dir
+```
+python /mo.py --framework kaldi --input_model wsj_dnn5b.nnet --counts wsj_dnn5b.counts --remove_output_softmax --output_dir
 ```
 Assuming that the model optimizer (`mo.py`), Kaldi-trained neural network, `wsj_dnn5b.nnet`, and Kaldi class counts file, `wsj_dnn5b.counts`, are in the working directory this produces
@@ -151,8 +150,8 @@ All of them can be downloaded from [https://storage.openvinotoolkit.org/models_c
 Once the IR is created, you can use the following command to do inference on Intel® Processors with the GNA co-processor (or emulation library):
-```sh
-./speech_sample -d GNA_AUTO -bs 2 -i dev93_10.ark -m wsj_dnn5b.xml -o scores.ark -r dev93_scores_10.ark
+```
+/speech_sample -d GNA_AUTO -bs 2 -i /dev93_10.ark -m /wsj_dnn5b.xml -o scores.ark -r /dev93_scores_10.ark
 ```
 Here, the floating point Kaldi-generated reference neural network scores (`dev93_scores_10.ark`) corresponding to the input feature file (`dev93_10.ark`) are assumed to be available
@@ -170,7 +169,7 @@ All of them can be downloaded from [https://storage.openvinotoolkit.org/models_c
 The acoustic log likelihood sequences for all utterances are stored in the file. Example `scores.ark` or `scores.npz`. If the `-r` option is used, a report on the statistical score error is generated for each utterance such as the following:
-```sh
+```
 ./speech_sample -d GNA_AUTO -bs 2 -i dev93_10.ark -m wsj_dnn5b.xml -o scores.ark -r dev93_scores_10.ark
 [ INFO ] InferenceEngine:
     API version ............
diff --git a/inference-engine/samples/style_transfer_sample/README.md b/inference-engine/samples/style_transfer_sample/README.md
index 630e7e3d861..dcb178bedda 100644
--- a/inference-engine/samples/style_transfer_sample/README.md
+++ b/inference-engine/samples/style_transfer_sample/README.md
@@ -40,10 +40,9 @@ To run the sample, you need to specify a model and image:
 - you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
 - you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
-Running the application with the -h option yields the following usage message:
+Running the application with the `-h` option yields the following usage message:
-```sh
-./style_transfer_sample -h
+```
 [ INFO ] InferenceEngine:
     API version ............
     Build ..................
@@ -65,7 +64,6 @@ Options:
     -mean_val_b Mean values. Required if the model needs mean values for preprocessing and postprocessing.
 Available target devices:
-
 ```
 Running the application with the empty list of options yields the usage message given above and an error message.
@@ -78,37 +76,47 @@ Running the application with the empty list of options yields the usage message
 >
 > - The sample accepts models in ONNX format (\*.onnx) that do not require preprocessing.
-To perform inference of an image using a trained model of fast-neural-style-mosaic-onnx network on Intel® CPUs, use the following command:
+### Example
+1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader_README):
+```
+python /downloader.py --name fast-neural-style-mosaic-onnx
+```
-```sh
-./style_transfer_sample -i /cat.bmp -m /fast-neural-style-mosaic-onnx.onnx
+2. The `fast-neural-style-mosaic-onnx` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script:
+
+```
+python /converter.py --name
+```
+
+3. Perform inference of `car.bmp` and `cat.jpg` using the `fast-neural-style-mosaic-onnx` model on a `GPU`, for example:
+
+```
+/style_transfer_sample -m /fast-neural-style-mosaic-onnx.onnx -i /car.bmp /cat.jpg -d GPU
 ```
 ## Sample Output
 The sample application logs each step in a standard output stream and creates an image (`out1.bmp`) or a sequence of images (`out1.bmp`, ..., `out.bmp`) which are redrawn in the style of the style transfer model used for the sample.
-```sh
-style_transfer_sample -m fast-neural-style-mosaic-onnx.onnx -i car.png car_1.bmp
+```
 [ INFO ] InferenceEngine:
-    API version ............
-    Build ..................
-    Description ....... API
+    IE version ......... 2021.4.0
+    Build ........... 2021.4.0-3839-cd81789d294-releases/2021/4
 [ INFO ] Parsing input parameters
 [ INFO ] Files were added: 2
-[ INFO ] car.png
-[ INFO ] car_1.bmp
-[ INFO
+[ INFO ] C:\images\car.bmp
+[ INFO ] C:\images\cat.jpg
+[ INFO ] Loading Inference Engine
 [ INFO ] Device info:
-        CPU
-        MKLDNNPlugin version .........
-        Build ...........
+        GPU
+        clDNNPlugin version ......... 2021.4.0
+        Build ........... 2021.4.0-3839-cd81789d294-releases/2021/4
-[ INFO ] Loading network files
-[ INFO ] fast-neural-style-mosaic-onnx.onnx
+[ INFO ] Loading network files:
+[ INFO ] C:\openvino\deployment_tools\open_model_zoo\tools\downloader\public\fast-neural-style-mosaic-onnx\fast-neural-style-mosaic-onnx.onnx
 [ INFO ] Preparing input blobs
-[ WARNING ] Image is resized from (787, 259) to (224, 224)
 [ WARNING ] Image is resized from (749, 637) to (224, 224)
+[ WARNING ] Image is resized from (300, 300) to (224, 224)
 [ INFO ] Batch size is 2
 [ INFO ] Preparing output blobs
 [ INFO ] Loading model to the device