From 2063f173914e55448f34be2e2bc4b0c872fec5cc Mon Sep 17 00:00:00 2001
From: Kate Generalova
Date: Fri, 23 Apr 2021 16:40:08 +0300
Subject: [PATCH] fix: doxygen links for samples (#5365)

---
 docs/IE_DG/Samples_Overview.md                | 35 ++++++++++---------
 docs/doxygen/openvino_docs.xml                | 22 ++++++------
 .../object_detection_sample_ssd/README.md     |  4 +--
 .../classification_sample_async/README.md     | 29 +++++++--------
 .../sample/hello_classification/README.md     | 26 +++++++-------
 .../sample/hello_query_device/README.md       |  6 ++--
 .../python/sample/hello_reshape_ssd/README.md | 26 +++++++-------
 .../ngraph_function_creation_sample/README.md | 35 +++++++++----------
 .../object_detection_sample_ssd/README.md     | 27 +++++++-------
 .../sample/style_transfer_sample/README.md    | 26 +++++++-------
 .../classification_sample_async/README.md     |  4 +--
 .../object_detection_sample_ssd/README.md     |  4 +--
 12 files changed, 123 insertions(+), 121 deletions(-)

diff --git a/docs/IE_DG/Samples_Overview.md b/docs/IE_DG/Samples_Overview.md
index 8243fc7f7d6..11bfb1d51c3 100644
--- a/docs/IE_DG/Samples_Overview.md
+++ b/docs/IE_DG/Samples_Overview.md
@@ -1,22 +1,23 @@
 # Inference Engine Samples {#openvino_docs_IE_DG_Samples_Overview}
 
-The Inference Engine sample applications are simple console applications that show how to utilize specific Inference Engine capabilities within an application, assist developers in executing specific tasks such as loading a model, running inference, querying specific device capabilities and etc. 
+The Inference Engine sample applications are simple console applications that show how to utilize specific Inference Engine capabilities within an application and assist developers in executing specific tasks such as loading a model, running inference, and querying specific device capabilities.
 
 After installation of Intel® Distribution of OpenVINO™ toolkit, C, C++ and Python* sample applications are available in the following directories, respectively:
 * `<INSTALL_DIR>/inference_engine/samples/c`
 * `<INSTALL_DIR>/inference_engine/samples/cpp`
-* `<INSTALL_DIR>/inference_engine/samples/python` 
+* `<INSTALL_DIR>/inference_engine/samples/python`
 
 Inference Engine sample applications include the following:
+
 - **[Automatic Speech Recognition C++ Sample](../../inference-engine/samples/speech_sample/README.md)** – Acoustic model inference based on Kaldi neural networks and speech feature vectors.
 - **Benchmark Application** – Estimates deep learning inference performance on supported devices for synchronous and asynchronous modes.
-  - [Benchmark C++ Application](../../inference-engine/samples/benchmark_app/README.md)
-  - [Benchmark Python Application](../../inference-engine/tools/benchmark_tool/README.md)
+  - [Benchmark C++ Tool](../../inference-engine/samples/benchmark_app/README.md)
+  - [Benchmark Python Tool](../../inference-engine/tools/benchmark_tool/README.md)
 - **Hello Classification Sample** – Inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API. Input of any size and layout can be set to an infer request which will be pre-processed automatically during inference (the sample supports only images as inputs and supports Unicode paths).
  - [Hello Classification C++ Sample](../../inference-engine/samples/hello_classification/README.md)
  - [Hello Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_classification/README.md)
  - [Hello Classification Python Sample](../../inference-engine/ie_bridges/python/sample/hello_classification/README.md)
-- **Hello NV12 Input Classification Sample** – Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-process it automatically during inference. The sample supports only images as inputs. 
+- **Hello NV12 Input Classification Sample** – Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-processes it automatically during inference. The sample supports only images as inputs.
  - [Hello NV12 Input Classification C++ Sample](../../inference-engine/samples/hello_nv12_input_classification/README.md)
  - [Hello NV12 Input Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md)
 - **Hello Query Device Sample** – Query of available Inference Engine devices and their metrics and configuration values.
  - [Hello Query Device C++ Sample](../../inference-engine/samples/hello_query_device/README.md)
  - [Hello Query Device Python* Sample](../../inference-engine/ie_bridges/python/sample/hello_query_device/README.md)
 - **Hello Reshape SSD Sample** – Inference of SSD networks resized by ShapeInfer API according to an input size.
  - [Hello Reshape SSD C++ Sample](../../inference-engine/samples/hello_reshape_ssd/README.md)
  - [Hello Reshape SSD Python Sample](../../inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md)
-- **Image Classification Sample Async** – Inference of image classification networks like AlexNet and GoogLeNet using Asynchronous Inference Request API (the sample supports only images as inputs). 
-  - [Image Classification C++ Sample Async](../../inference-engine/samples/classification_sample_async/README.md)
-  - [Image Classification Python* Sample Async](../../inference-engine/ie_bridges/python/sample/classification_sample_async/README.md)
-- **Neural Style Transfer Sample** – Style Transfer sample (the sample supports only images as inputs).
-  - [Neural Style Transfer C++ Sample](../../inference-engine/samples/style_transfer_sample/README.md)
-  - [Neural Style Transfer Python* Sample](../../inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md)
+- **Image Classification Sample Async** – Inference of image classification networks like AlexNet and GoogLeNet using Asynchronous Inference Request API (the sample supports only images as inputs).
+  - [Image Classification Async C++ Sample](../../inference-engine/samples/classification_sample_async/README.md)
+  - [Image Classification Async Python* Sample](../../inference-engine/ie_bridges/python/sample/classification_sample_async/README.md)
+- **Style Transfer Sample** – Style transfer sample (the sample supports only images as inputs).
+  - [Style Transfer C++ Sample](../../inference-engine/samples/style_transfer_sample/README.md)
+  - [Style Transfer Python* Sample](../../inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md)
 - **nGraph Function Creation Sample** – Construction of the LeNet network using the nGraph function creation sample.
  - [nGraph Function Creation C++ Sample](../../inference-engine/samples/ngraph_function_creation_sample/README.md)
  - [nGraph Function Creation Python Sample](../../inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md)
-- **Object Detection for SSD Sample** – Inference of object detection networks based on the SSD, this sample is simplified version that supports only images as inputs. 
-  - [Object Detection for SSD C++ Sample](../../inference-engine/samples/object_detection_sample_ssd/README.md)
-  - [Object Detection for SSD C Sample](../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md)
-  - [Object Detection for SSD Python* Sample](../../inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md)
+- **Object Detection for SSD Sample** – Inference of object detection networks based on SSD; this sample is a simplified version that supports only images as inputs.
+  - [Object Detection SSD C++ Sample](../../inference-engine/samples/object_detection_sample_ssd/README.md)
+  - [Object Detection SSD C Sample](../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md)
+  - [Object Detection SSD Python* Sample](../../inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md)
 
-> **NOTE**: All samples support input paths containing only ASCII characters, except the Hello Classification Sample, that supports Unicode.
+> **NOTE**: All C++ samples support input paths containing only ASCII characters, except the Hello Classification Sample, which supports Unicode.
 
 ## Media Files Available for Samples
 
@@ -55,7 +56,7 @@ To run the sample, you can use [public](@ref omz_models_group_public) or [Intel'
 
 The officially supported Linux* build environment is the following:
 
-* Ubuntu* 18.04 LTS 64-bit or CentOS* 7.6 64-bit
+* Ubuntu* 18.04 LTS 64-bit or CentOS* 7 64-bit
 * GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 4.8.5 (for CentOS* 7.6)
 * CMake* version 3.10 or higher

diff --git a/docs/doxygen/openvino_docs.xml b/docs/doxygen/openvino_docs.xml
index 92238645a05..34539a41246 100644
--- a/docs/doxygen/openvino_docs.xml
+++ b/docs/doxygen/openvino_docs.xml
@@ -163,25 +163,25 @@ limitations under the License.
 <!-- <tab> entries for the sample documentation pages (titles and links updated to the new sample names) -->
diff --git a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
index a50649d254c..7370b6ab61f 100644
--- a/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
+++ b/inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md
@@ -1,8 +1,8 @@
-# Object Detection C Sample SSD {#openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README}
+# Object Detection SSD C Sample {#openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README}
 
 This sample demonstrates how to execute an inference of object detection networks like SSD-VGG using Asynchronous Inference Request API and [input reshape feature](../../../../../docs/IE_DG/ShapeInference.md).
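For orientation, a minimal sketch of the same two ideas this sample combines, an asynchronous request plus input reshape, shown with the Inference Engine Python API rather than the C API; the model path and the 512x512 input shape are illustrative assumptions, not part of the C sample:

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='model.xml', weights='model.bin')  # placeholder paths
input_name = next(iter(net.input_info))

# Reshape the network to the resolution we intend to feed it (the ShapeInference feature).
net.reshape({input_name: (1, 3, 512, 512)})

exec_net = ie.load_network(network=net, device_name='CPU', num_requests=1)

# Start the request asynchronously, then block until the result is ready.
request = exec_net.requests[0]
request.async_infer({input_name: np.zeros((1, 3, 512, 512), dtype=np.float32)})
request.wait()
print(list(request.output_blobs.keys()))
```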
-Object Detection C sample SSD application demonstrates how to use the following Inference Engine C API in applications:
+Object Detection SSD C sample application demonstrates how to use the following Inference Engine C API in applications:
 
 | Feature | API | Description |
 |:--- |:--- |:---
diff --git a/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md b/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md
index 368699665e5..a689438cb8b 100644
--- a/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md
+++ b/inference-engine/ie_bridges/python/sample/classification_sample_async/README.md
@@ -1,4 +1,4 @@
-# Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README}
+# Image Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README}
 
 This sample demonstrates how to do inference of image classification networks using Asynchronous Inference Request API. Models with only 1 input and output are supported.
@@ -30,13 +30,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
 
 Run the application with the -h option to see the usage message:
 
-```
+```sh
 python classification_sample_async.py -h
 ```
 
 Usage message:
 
-```
+```sh
 usage: classification_sample_async.py [-h] -m MODEL -i INPUT [INPUT ...]
                                       [-l EXTENSION] [-c CONFIG] [-d DEVICE]
                                       [--labels LABELS] [-nt NUMBER_TOP]
 
 Options:
 ...
 ```
 
 To run the sample, you need to specify a model and image:
 
- - you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- - you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
+
+- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
+- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
 
 > **NOTES**:
 >
 > - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
 > - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
 You can do inference of an image using a pre-trained model on a GPU using the following command:
 
-```
+```sh
 python classification_sample_async.py -m <path_to_model>/alexnet.xml -i <path_to_image>/cat.bmp <path_to_image>/car.bmp -d GPU
 ```
 
@@ -88,7 +89,7 @@ python classification_sample_async.py -m <path_to_model>/alexnet.xml -i 
 
 The sample application logs each step in a standard output stream and outputs top-10 inference results.
 
-```
+```sh
 [ INFO ] Creating Inference Engine
 
diff --git a/inference-engine/ie_bridges/python/sample/hello_classification/README.md b/inference-engine/ie_bridges/python/sample/hello_classification/README.md
--- a/inference-engine/ie_bridges/python/sample/hello_classification/README.md
+++ b/inference-engine/ie_bridges/python/sample/hello_classification/README.md
@@ -26,13 +26,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
 
 Run the application with the -h option to see the usage message:
 
-```
+```sh
 python hello_classification.py -h
 ```
 
 Usage message:
 
-```
+```sh
 usage: hello_classification.py [-h] -m MODEL -i INPUT [-d DEVICE]
                                [--labels LABELS] [-nt NUMBER_TOP]
 
 Options:
 ...
 ```
 
@@ -57,20 +57,20 @@ Options:
 
 To run the sample, you need to specify a model and image:
 
- - you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- - you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
+- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
+- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
 
 > **NOTES**:
 >
 > - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
 > - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). > -> * The sample accepts models in ONNX format (.onnx) that do not require preprocessing. +> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. You can do inference of an image using a pre-trained model on a GPU using the following command: -``` +```sh python hello_classification.py -m /alexnet.xml -i /cat.bmp -d GPU ``` @@ -78,7 +78,7 @@ python hello_classification.py -m /alexnet.xml -i The sample application logs each step in a standard output stream and outputs top-10 inference results. -``` +```sh [ INFO ] Creating Inference Engine [ INFO ] Reading the network: models\alexnet.xml [ INFO ] Configuring input and output blobs @@ -105,10 +105,10 @@ The sample application logs each step in a standard output stream and outputs to ## See Also -* [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) -* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md) -* [Model Downloader](@ref omz_tools_downloader_README) -* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) +- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) +- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md) +- [Model Downloader](@ref omz_tools_downloader_README) +- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html [IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f diff --git a/inference-engine/ie_bridges/python/sample/hello_query_device/README.md b/inference-engine/ie_bridges/python/sample/hello_query_device/README.md index 15b1b7e1a45..35e84bc23ed 100644 --- a/inference-engine/ie_bridges/python/sample/hello_query_device/README.md +++ b/inference-engine/ie_bridges/python/sample/hello_query_device/README.md @@ -22,7 +22,7 @@ The sample queries all available Inference Engine devices and prints their suppo The sample has no command-line parameters. 
To see the report, run the following command:
 
-```
+```sh
 python hello_query_device.py
 ```
 
@@ -30,7 +30,7 @@ python hello_query_device.py
 
 For example:
 
-```
+```sh
 [ INFO ] Creating Inference Engine
 [ INFO ] Available devices:
 [ INFO ]     CPU :
 ...
 ```
 
@@ -104,7 +104,7 @@ For example:
 
 ## See Also
 
-* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
 
 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IECore.get_metric]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#af1cdf2ecbea6399c556957c2c2fdf8eb
diff --git a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md
index 2c5dac57b23..956f219e1b0 100644
--- a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md
+++ b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md
@@ -31,13 +31,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
 
 Run the application with the -h option to see the usage message:
 
-```
+```sh
 python hello_reshape_ssd.py -h
 ```
 
 Usage message:
 
-```
+```sh
 usage: hello_reshape_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION] [-c CONFIG]
                             [-d DEVICE] [--labels LABELS]
 
 Options:
 ...
 ```
 
@@ -65,20 +65,20 @@ Options:
 
 To run the sample, you need to specify a model and image:
 
- - you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- - you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
+- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
+- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
 
 > **NOTES**:
 >
 > - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
 > - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
 You can do inference of an image using a pre-trained model on a GPU using the following command:
 
-```
+```sh
 python hello_reshape_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/cat.bmp -d GPU
 ```
 
@@ -86,7 +86,7 @@ python hello_reshape_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i 
 
 The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with over 50% confidence.
 
-```
+```sh
 [ INFO ] Creating Inference Engine
 
diff --git a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
--- a/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
+++ b/inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
@@ -32,13 +31,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
 
 Run the application with the -h option to see the usage message:
 
-```
+```sh
 python ngraph_function_creation_sample.py -h
 ```
 
 Usage message:
 
-```
+```sh
 usage: ngraph_function_creation_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
                                           [-d DEVICE] [--labels LABELS]
                                           [-nt NUMBER_TOP]
 
 Options:
 ...
 ```
 
@@ -57,21 +56,22 @@ Options:
 
 To run the sample, you need to specify model weights and an image:
 
- - you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
-> **NOTE**:
->
-> * This sample supports models with FP32 weights only.
+- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
+
+> **NOTE**:
 >
-> * The `lenet.bin` weights file was generated by the [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
+> - This sample supports models with FP32 weights only.
 >
-> * The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
+> - The `lenet.bin` weights file was generated by the [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
 >
-> * The white over black images will be automatically inverted in color for a better predictions.
+> - The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
+>
+> - White-over-black images will be automatically inverted in color for better predictions.
 
 You can do inference of an image using a pre-trained model on a GPU using the following command:
 
-```
+```sh
 python ngraph_function_creation_sample.py -m <path_to_model>/lenet.bin -i <path_to_image>/3.bmp -d GPU
 ```
 
@@ -79,7 +79,7 @@ python ngraph_function_creation_sample.py -m <path_to_model>/lenet.bin -i 
 
 The sample application logs each step in a standard output stream and outputs top-10 inference results.
 
-```
+```sh
 [ INFO ] Creating Inference Engine
 [ INFO ] Loading weights from: <path_to_model>/lenet.bin
 [ INFO ] Configuring input and output blobs
 
@@ -116,7 +116,7 @@ The sample application logs each step in a standard output stream and outputs to
 
    Removal Date       December 1, 2020
 
-
+
  *Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit.
As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.*
 
 ## See Also
 
-* [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
-* [Model Downloader](@ref omz_tools_downloader_README)
-* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
-
+- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Model Downloader](@ref omz_tools_downloader_README)
+- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
 
 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IENetwork]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html
diff --git a/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md b/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md
index a6c77d9e308..17a5640ccf3 100644
--- a/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md
+++ b/inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md
@@ -31,13 +31,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
 
 Run the application with the -h option to see the usage message:
 
-```
+```sh
 python object_detection_sample_ssd.py -h
 ```
 
 Usage message:
 
-```
+```sh
 usage: object_detection_sample_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION]
                                       [-c CONFIG] [-d DEVICE]
                                       [--labels LABELS]
 
 Options:
 ...
 ```
 
@@ -66,20 +66,21 @@ Options:
 
 To run the sample, you need to specify a model and image:
 
- - you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- - you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
+
+- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
+- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
 
 > **NOTES**:
 >
 > - By default, Inference Engine samples and demos expect input with BGR channels order.
If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
 > - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
 You can do inference of an image using a pre-trained model on a GPU using the following command:
 
-```
+```sh
 python object_detection_sample_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i 

-h option to see the usage message:
 
-```
+```sh
 python style_transfer_sample.py -h
 ```
 
 Usage message:
 
-```
+```sh
 usage: style_transfer_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
                                 [-l EXTENSION] [-c CONFIG] [-d DEVICE]
                                 [--original_size] [--mean_val_r MEAN_VAL_R]
 
 Options:
 ...
 ```
 
@@ -79,20 +79,20 @@ Options:
 
 To run the sample, you need to specify a model and image:
 
- - you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- - you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
+- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
+- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
 
 > **NOTES**:
 >
 > - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
 > - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
 You can do inference of an image using a pre-trained model on a GPU using the following command:
 
-```
+```sh
 python style_transfer_sample.py -m <path_to_model>/fast-neural-style-mosaic-onnx.onnx -i <path_to_image>/car.png <path_to_image>/cat.jpg -d GPU
 ```
 
@@ -100,7 +100,7 @@ python style_transfer_sample.py -m <path_to_model>/fast-neural-style-mosaic-onnx
 
 The sample application logs each step in a standard output stream and creates an output image (`out_0.bmp`) or a sequence of images (`out_0.bmp`, .., `out_.bmp`) that are redrawn in the style of the style transfer model used.
-```
+```sh
 [ INFO ] Creating Inference Engine
 [ INFO ] Reading the network: models\fast-neural-style-mosaic-onnx.onnx
 [ INFO ] Configuring input and output blobs
 [ INFO ] Loading the model to the plugin
 
 ## See Also
 
-* [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
-* [Model Downloader](@ref omz_tools_downloader_README)
-* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Model Downloader](@ref omz_tools_downloader_README)
+- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
 
 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
diff --git a/inference-engine/samples/classification_sample_async/README.md b/inference-engine/samples/classification_sample_async/README.md
index 48f27baf910..9a007fdebb4 100644
--- a/inference-engine/samples/classification_sample_async/README.md
+++ b/inference-engine/samples/classification_sample_async/README.md
@@ -1,10 +1,10 @@
-# Image Classification C++ Sample Async {#openvino_inference_engine_samples_classification_sample_async_README}
+# Image Classification Async C++ Sample {#openvino_inference_engine_samples_classification_sample_async_README}
 
 This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Asynchronous Inference Request API.
 
 In addition to regular images, the sample also supports single-channel `ubyte` images as an input for the LeNet model.
 
-Image Classification C++ sample application demonstrates how to use the following Inference Engine C++ API in applications:
+Image Classification Async C++ sample application demonstrates how to use the following Inference Engine C++ API in applications:
 
 | Feature | API | Description |
 |:--- |:--- |:---
diff --git a/inference-engine/samples/object_detection_sample_ssd/README.md b/inference-engine/samples/object_detection_sample_ssd/README.md
index 7dce55e3bea..a52b7d3fcbb 100644
--- a/inference-engine/samples/object_detection_sample_ssd/README.md
+++ b/inference-engine/samples/object_detection_sample_ssd/README.md
@@ -1,8 +1,8 @@
-# Object Detection C++ Sample SSD {#openvino_inference_engine_samples_object_detection_sample_ssd_README}
+# Object Detection SSD C++ Sample {#openvino_inference_engine_samples_object_detection_sample_ssd_README}
 
 This sample demonstrates how to execute an inference of object detection networks like SSD-VGG using Synchronous Inference Request API.
 
-Object Detection C++ sample SSD application demonstrates how to use the following Inference Engine C++ API in applications:
+Object Detection SSD C++ sample application demonstrates how to use the following Inference Engine C++ API in applications:
 
 | Feature | API | Description |
 |:--- |:--- |:---
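For comparison with the asynchronous samples above, a minimal sketch of the synchronous request pattern this sample relies on, shown with the Inference Engine Python API for brevity; the paths and the SSD-style 300x300 input shape are illustrative placeholders, not part of the C++ sample:

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='ssd.xml', weights='ssd.bin')  # placeholder paths
input_name = next(iter(net.input_info))
exec_net = ie.load_network(network=net, device_name='CPU')

# infer() submits the request and blocks until results are available.
results = exec_net.infer({input_name: np.zeros((1, 3, 300, 300), dtype=np.float32)})
print(list(results.keys()))
```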