fix: doxygen links for samples (#5365)

Kate Generalova 2021-04-23 16:40:08 +03:00, committed by GitHub
parent 9a569805c2
commit 2063f17391
12 changed files with 123 additions and 121 deletions


@@ -1,22 +1,23 @@
# Inference Engine Samples {#openvino_docs_IE_DG_Samples_Overview}
The Inference Engine sample applications are simple console applications that show how to use specific Inference Engine capabilities within an application and assist developers in executing tasks such as loading a model, running inference, and querying specific device capabilities.

After installation of the Intel® Distribution of OpenVINO™ toolkit, C, C++, and Python* sample applications are available in the following directories, respectively:
* `<INSTALL_DIR>/inference_engine/samples/c`
* `<INSTALL_DIR>/inference_engine/samples/cpp`
* `<INSTALL_DIR>/inference_engine/samples/python`
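As a quick orientation, the snippet below is a minimal sketch of setting up the environment and building the C++ samples on Linux; the script names and output location are assumptions based on a default installation layout and may differ by release.

```sh
# Sketch: prepare the environment, then build the C++ samples (Linux)
source <INSTALL_DIR>/bin/setupvars.sh
cd <INSTALL_DIR>/inference_engine/samples/cpp
./build_samples.sh   # compiled binaries typically land in a build folder under your home directory
```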
Inference Engine sample applications include the following:
- **[Automatic Speech Recognition C++ Sample](../../inference-engine/samples/speech_sample/README.md)** Acoustic model inference based on Kaldi neural networks and speech feature vectors.
- **Benchmark Application** Estimates deep learning inference performance on supported devices for synchronous and asynchronous modes.
  - [Benchmark C++ Tool](../../inference-engine/samples/benchmark_app/README.md)
  - [Benchmark Python Tool](../../inference-engine/tools/benchmark_tool/README.md)
- **Hello Classification Sample** Inference of image classification networks like AlexNet and GoogLeNet using the Synchronous Inference Request API. Input of any size and layout can be set to an infer request and is pre-processed automatically during inference (the sample supports only images as inputs and supports Unicode paths).
  - [Hello Classification C++ Sample](../../inference-engine/samples/hello_classification/README.md)
  - [Hello Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_classification/README.md)
  - [Hello Classification Python Sample](../../inference-engine/ie_bridges/python/sample/hello_classification/README.md)
- **Hello NV12 Input Classification Sample** Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-processes it automatically during inference. The sample supports only images as inputs.
  - [Hello NV12 Input Classification C++ Sample](../../inference-engine/samples/hello_nv12_input_classification/README.md)
  - [Hello NV12 Input Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md)
- **Hello Query Device Sample** Query of available Inference Engine devices and their metrics and configuration values.
@@ -25,21 +26,21 @@ Inference Engine sample applications include the following:
- **Hello Reshape SSD Sample** Inference of SSD networks resized by the ShapeInfer API according to an input size.
  - [Hello Reshape SSD C++ Sample](../../inference-engine/samples/hello_reshape_ssd/README.md)
  - [Hello Reshape SSD Python Sample](../../inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md)
- **Image Classification Async Sample** Inference of image classification networks like AlexNet and GoogLeNet using the Asynchronous Inference Request API (the sample supports only images as inputs).
  - [Image Classification Async C++ Sample](../../inference-engine/samples/classification_sample_async/README.md)
  - [Image Classification Async Python* Sample](../../inference-engine/ie_bridges/python/sample/classification_sample_async/README.md)
- **Style Transfer Sample** Style transfer sample (the sample supports only images as inputs).
  - [Style Transfer C++ Sample](../../inference-engine/samples/style_transfer_sample/README.md)
  - [Style Transfer Python* Sample](../../inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md)
- **nGraph Function Creation Sample** Construction of the LeNet network using the nGraph function creation API.
  - [nGraph Function Creation C++ Sample](../../inference-engine/samples/ngraph_function_creation_sample/README.md)
  - [nGraph Function Creation Python Sample](../../inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md)
- **Object Detection SSD Sample** Inference of object detection networks based on SSD; this sample is a simplified version that supports only images as inputs.
  - [Object Detection SSD C++ Sample](../../inference-engine/samples/object_detection_sample_ssd/README.md)
  - [Object Detection SSD C Sample](../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md)
  - [Object Detection SSD Python* Sample](../../inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md)
> **NOTE**: All C++ samples support input paths containing only ASCII characters, except for the Hello Classification Sample, which supports Unicode.
## Media Files Available for Samples
@@ -55,7 +56,7 @@ To run the sample, you can use [public](@ref omz_models_group_public) or [Intel'
The officially supported Linux* build environment is the following:
* Ubuntu* 18.04 LTS 64-bit or CentOS* 7 64-bit
* GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 4.8.5 (for CentOS* 7.6)
* CMake* version 3.10 or higher
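For reference, a manual out-of-source CMake build of the samples might look like the following sketch; the build type, samples path, and use of `make` are assumptions that may differ on your system.

```sh
# Sketch of a manual CMake build of the C++ samples
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release <INSTALL_DIR>/inference_engine/samples/cpp
make -j"$(nproc)"
```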


@@ -163,25 +163,25 @@ limitations under the License.
</xi:include>
<!-- IE Code Samples -->
<tab type="usergroup" title="Inference Engine Code Samples" url="@ref openvino_docs_IE_DG_Samples_Overview">
<tab type="user" title="Image Classification C++ Sample Async" url="@ref openvino_inference_engine_samples_classification_sample_async_README"/>
<tab type="user" title="Image Classification Python* Sample Async" url="@ref openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README"/>
<tab type="user" title="Image Classification Async C++ Sample" url="@ref openvino_inference_engine_samples_classification_sample_async_README"/>
<tab type="user" title="Image Classification Async Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README"/>
<tab type="user" title="Hello Classification C++ Sample" url="@ref openvino_inference_engine_samples_hello_classification_README"/>
<tab type="user" title="Hello Classification C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_hello_classification_README"/>
<tab type="user" title="Image Classification Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_classification_sample_README"/>
<tab type="user" title="Hello Classification Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_hello_classification_README"/>
<tab type="user" title="Hello Reshape SSD C++ Sample" url="@ref openvino_inference_engine_samples_hello_reshape_ssd_README"/>
<tab type="user" title="Hello Reshape SSD Python Sample" url="@ref openvino_inference_engine_samples_python_hello_reshape_ssd_README"/>
<tab type="user" title="Hello Reshape SSD Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README"/>
<tab type="user" title="Hello NV12 Input Classification C++ Sample" url="@ref openvino_inference_engine_samples_hello_nv12_input_classification_README"/>
<tab type="user" title="Hello NV12 Input Classification C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README"/>
<tab type="user" title="Hello Query Device C++ Sample" url="@ref openvino_inference_engine_samples_hello_query_device_README"/>
<tab type="user" title="Hello Query Device Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README"/>
<tab type="user" title="nGraph Function C++ Sample" url="@ref openvino_inference_engine_samples_ngraph_function_creation_sample_README"/>
<tab type="user" title="nGraph Function Python Sample" url="@ref openvino_inference_engine_ie_bridges_python_samples_ngraph_function_creation_sample_README"/>
<tab type="user" title="Object Detection C++ Sample SSD" url="@ref openvino_inference_engine_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection Python* Sample SSD" url="@ref openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection C Sample SSD" url="@ref openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="nGraph Function Creation C++ Sample" url="@ref openvino_inference_engine_samples_ngraph_function_creation_sample_README"/>
<tab type="user" title="nGraph Function Creation Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_ngraph_function_creation_sample_README"/>
<tab type="user" title="Object Detection SSD C++ Sample" url="@ref openvino_inference_engine_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection SSD Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection SSD C Sample" url="@ref openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Automatic Speech Recognition C++ Sample" url="@ref openvino_inference_engine_samples_speech_sample_README"/>
<tab type="user" title="Neural Style Transfer C++ Sample" url="@ref openvino_inference_engine_samples_style_transfer_sample_README"/>
<tab type="user" title="Neural Style Transfer Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_style_transfer_sample_README"/>
<tab type="user" title="Style Transfer C++ Sample" url="@ref openvino_inference_engine_samples_style_transfer_sample_README"/>
<tab type="user" title="Style Transfer Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_style_transfer_sample_README"/>
<tab type="user" title="Benchmark C++ Tool" url="@ref openvino_inference_engine_samples_benchmark_app_README"/>
<tab type="user" title="Benchmark Python* Tool" url="@ref openvino_inference_engine_tools_benchmark_tool_README"/>
</tab>


@@ -1,8 +1,8 @@
# Object Detection SSD C Sample {#openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README}
This sample demonstrates how to execute an inference of object detection networks like SSD-VGG using the Asynchronous Inference Request API and the [input reshape feature](../../../../../docs/IE_DG/ShapeInference.md).

The Object Detection SSD C sample application demonstrates how to use the following Inference Engine C API in applications:
| Feature | API | Description |
|:--- |:--- |:--- |
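Purely as an illustration, a run of the built C sample might look like the sketch below; the binary name (with the `_c` suffix) and the flags are assumptions modeled on the other samples and may differ in your build.

```sh
# Hypothetical invocation of the built C sample; binary name and flags may vary
./object_detection_sample_ssd_c -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/cat.bmp -d CPU
```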


@@ -1,4 +1,4 @@
# Image Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README}
This sample demonstrates how to do inference of image classification networks using Asynchronous Inference Request API.
Models with only 1 input and output are supported.
@@ -30,13 +30,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
Run the application with the <code>-h</code> option to see the usage message:
```sh
python classification_sample_async.py -h
```
Usage message:
```sh
usage: classification_sample_async.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l EXTENSION] [-c CONFIG] [-d DEVICE]
[--labels LABELS] [-nt NUMBER_TOP]
@@ -67,20 +67,21 @@ Options:
```
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
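For instance, fetching and converting the AlexNet model with the Open Model Zoo tools might look like the sketch below; the working directory (the Model Downloader tools folder) and the output directory are assumptions.

```sh
# Sketch: download a public model and convert it to IR with the Open Model Zoo tools
# (run from the Model Downloader directory; paths are assumptions)
python3 downloader.py --name alexnet -o <models_dir>
python3 converter.py --name alexnet -d <models_dir>
```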
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified (see the sketch after these notes). For more information about the argument, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
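As mentioned in the first note, reconversion with reversed input channels might look like the following sketch; the `mo.py` location and the source model file name are assumptions.

```sh
# Sketch: reconvert a model so it accepts RGB input (mo.py path varies by install)
python3 mo.py --input_model <path_to_model>/alexnet.caffemodel --reverse_input_channels
```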
You can do inference of an image using a pre-trained model on a GPU using the following command:
```sh
python classification_sample_async.py -m <path_to_model>/alexnet.xml -i <path_to_image>/cat.bmp <path_to_image>/car.bmp -d GPU
```
@@ -88,7 +89,7 @@ python classification_sample_async.py -m <path_to_model>/alexnet.xml -i <path_to
The sample application logs each step in a standard output stream and outputs top-10 inference results.
```sh
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\alexnet.xml
[ INFO ] Configuring input and output blobs
@@ -133,10 +134,10 @@ The sample application logs each step in a standard output stream and outputs to
## See Also
- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967


@@ -29,13 +29,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
Run the application with the <code>-h</code> option to see the usage message:
```sh
python hello_classification.py -h
```
Usage message:
```sh
usage: hello_classification.py [-h] -m MODEL -i INPUT [-d DEVICE]
[--labels LABELS] [-nt NUMBER_TOP]
@@ -57,20 +57,20 @@ Options:
```
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing; see the example run below.
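Per the last note, an ONNX model could be passed to the sample directly; the model file name in this sketch is only an assumption.

```sh
# Hypothetical run on an ONNX model, with no Model Optimizer conversion step
python hello_classification.py -m <path_to_model>/alexnet.onnx -i <path_to_image>/cat.bmp -d CPU
```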
You can do inference of an image using a pre-trained model on a GPU using the following command:
```sh
python hello_classification.py -m <path_to_model>/alexnet.xml -i <path_to_image>/cat.bmp -d GPU
```
@@ -78,7 +78,7 @@ python hello_classification.py -m <path_to_model>/alexnet.xml -i <path_to_image>
The sample application logs each step in a standard output stream and outputs top-10 inference results.
```sh
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\alexnet.xml
[ INFO ] Configuring input and output blobs
@@ -105,10 +105,10 @@ The sample application logs each step in a standard output stream and outputs to
## See Also
- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f


@@ -22,7 +22,7 @@ The sample queries all available Inference Engine devices and prints their suppo
The sample has no command-line parameters. To see the report, run the following command:
```sh
python hello_query_device.py
```
@@ -30,7 +30,7 @@ python hello_query_device.py
For example:
```sh
[ INFO ] Creating Inference Engine
[ INFO ] Available devices:
[ INFO ] CPU :
@@ -104,7 +104,7 @@ For example:
## See Also
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.get_metric]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#af1cdf2ecbea6399c556957c2c2fdf8eb


@@ -31,13 +31,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
Run the application with the <code>-h</code> option to see the usage message:
```sh
python hello_reshape_ssd.py -h
```
Usage message:
```sh
usage: hello_reshape_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION] [-c CONFIG]
[-d DEVICE] [--labels LABELS]
@@ -65,20 +65,20 @@ Options:
```
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
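As a purely illustrative sketch of using the collection above, an image might be fetched as follows; the exact file layout under the storage root is an assumption.

```sh
# Hypothetical download of a test image; the path under test_data is an assumption
wget https://storage.openvinotoolkit.org/data/test_data/images/cat.bmp -O cat.bmp
```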
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
You can do inference of an image using a pre-trained model on a GPU using the following command:
```sh
python hello_reshape_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/cat.bmp -d GPU
```
@@ -86,7 +86,7 @@ python hello_reshape_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_ima
The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with an over 50% confidence.
```sh
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\mobilenet-ssd.xml
[ INFO ] Configuring input and output blobs
@@ -102,10 +102,10 @@ The sample application logs each step in a standard output stream and creates an
## See Also
- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967


@@ -3,7 +3,6 @@
This sample demonstrates how to execute an inference using the [nGraph function feature](../../../../../docs/nGraph_DG/build_function.md) to create a network that uses weights from the LeNet classification network, so you don't need an XML file; the model is created from the source code on the fly.
In addition to regular images, the sample also supports single-channel ubyte images as an input.
The following Inference Engine Python API is used in the application:
| Feature | API | Description |
@@ -29,13 +28,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
Run the application with the <code>-h</code> option to see the usage message:
```sh
python ngraph_function_creation_sample.py -h
```
Usage message:
```sh
usage: ngraph_function_creation_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-d DEVICE] [--labels LABELS]
[-nt NUMBER_TOP]
@@ -57,21 +56,22 @@ Options:
```
To run the sample, you need to specify model weights and an image:
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

> **NOTE**:
>
> - This sample supports models with FP32 weights only.
>
> - The `lenet.bin` weights file was generated by the [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified (see the sketch below).
>
> - The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
>
> - White over black images are automatically inverted in color for better predictions.
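As the note states, the weights file was produced by the Model Optimizer; a sketch of such a conversion follows, where the `mo.py` location and the source model file name are assumptions.

```sh
# Sketch: generate LeNet IR/weights with the batch-64 input shape from the note
python3 mo.py --input_model <path_to_model>/lenet.caffemodel --input_shape [64,1,28,28]
```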
You can do inference of an image using a pre-trained model on a GPU using the following command:
```sh
python ngraph_function_creation_sample.py -m <path_to_model>/lenet.bin -i <path_to_image>/3.bmp -d GPU
```
@@ -79,7 +79,7 @@ python ngraph_function_creation_sample.py -m <path_to_model>/lenet.bin -i <path_
The sample application logs each step in a standard output stream and outputs top-10 inference results.
```sh
[ INFO ] Creating Inference Engine
[ INFO ] Loading the network using ngraph function with weights from <path_to_model>/lenet.bin
[ INFO ] Configuring input and output blobs
@@ -116,7 +116,7 @@ The sample application logs each step in a standard output stream and outputs to
<td><strong>Removal Date</strong></td>
<td>December 1, 2020</td>
</tr>
</table>
*Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.*
@@ -124,11 +124,10 @@ The sample application logs each step in a standard output stream and outputs to
## See Also
- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IENetwork]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html


@@ -31,13 +31,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
Run the application with the <code>-h</code> option to see the usage message:
```sh
python object_detection_sample_ssd.py -h
```
Usage message:
```sh
usage: object_detection_sample_ssd.py [-h] -m MODEL -i INPUT [-l EXTENSION]
[-c CONFIG] [-d DEVICE]
[--labels LABELS]
@@ -66,20 +66,21 @@ Options:
```
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
You can do inference of an image using a pre-trained model on a GPU using the following command:
```sh
python object_detection_sample_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/cat.bmp -d GPU
```
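A variant run that also supplies a labels file (listed in the usage message above) might look like the sketch below; the labels file name is an assumption.

```sh
# Hypothetical run with a labels file for readable class names
python object_detection_sample_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/cat.bmp --labels <path_to_labels>/labels.txt -d CPU
```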
@@ -87,7 +88,7 @@ python object_detection_sample_ssd.py -m <path_to_model>/mobilenet-ssd.xml -i <p
The sample application logs each step in a standard output stream and creates an output image, drawing bounding boxes for inference results with an over 50% confidence.
```sh
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\mobilenet-ssd.xml
[ INFO ] Configuring input and output blobs
@@ -100,10 +101,10 @@ The sample application logs each step in a standard output stream and creates an
## See Also
- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967


@@ -32,13 +32,13 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
Run the application with the <code>-h</code> option to see the usage message:
```sh
python style_transfer_sample.py -h
```
Usage message:
```sh
usage: style_transfer_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l EXTENSION] [-c CONFIG] [-d DEVICE]
[--original_size] [--mean_val_r MEAN_VAL_R]
@@ -79,20 +79,20 @@ Options:
```
To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.
> **NOTES**:
>
> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
>
> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
You can do inference of an image using a pre-trained model on a GPU using the following command:
```sh
python style_transfer_sample.py -m <path_to_model>/fast-neural-style-mosaic-onnx.onnx -i <path_to_image>/car.png <path_to_image>/cat.jpg -d GPU
```
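Using the `--original_size` flag from the usage message above, a run that keeps the output at the input image resolution might look like this sketch:

```sh
# Sketch: keep the stylized output at the original image resolution
python style_transfer_sample.py -m <path_to_model>/fast-neural-style-mosaic-onnx.onnx -i <path_to_image>/car.png --original_size -d CPU
```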
@@ -100,7 +100,7 @@ python style_transfer_sample.py -m <path_to_model>/fast-neural-style-mosaic-onnx
The sample application logs each step in a standard output stream and creates an output image (`out_0.bmp`) or a sequence of images (`out_0.bmp`, .., `out_<n>.bmp`) that are redrawn in the style of the style transfer model used.
```sh
[ INFO ] Creating Inference Engine
[ INFO ] Reading the network: models\fast-neural-style-mosaic-onnx.onnx
[ INFO ] Configuring input and output blobs
@@ -115,10 +115,10 @@ The sample application logs each step in a standard output stream and creates an
## See Also
- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
[IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967


@@ -1,10 +1,10 @@
# Image Classification Async C++ Sample {#openvino_inference_engine_samples_classification_sample_async_README}
This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Asynchronous Inference Request API.
In addition to regular images, the sample also supports single-channel `ubyte` images as an input for LeNet model.
The Image Classification Async C++ sample application demonstrates how to use the following Inference Engine C++ API in applications:
| Feature | API | Description |
|:--- |:--- |:--- |
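For illustration, running the built binary might look like the sketch below; the flags mirror the Python variant of this sample and are assumptions for this build.

```sh
# Hypothetical invocation after building the C++ samples
./classification_sample_async -m <path_to_model>/alexnet.xml -i <path_to_image>/cat.bmp -d CPU
```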


@@ -1,8 +1,8 @@
# Object Detection SSD C++ Sample {#openvino_inference_engine_samples_object_detection_sample_ssd_README}
This sample demonstrates how to execute an inference of object detection networks like SSD-VGG using the Synchronous Inference Request API.

The Object Detection SSD C++ sample application demonstrates how to use the following Inference Engine C++ API in applications:
| Feature | API | Description |
|:--- |:--- |:--- |
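Similarly, a run of this built C++ sample might look like the following sketch; the model and image paths are placeholders and the flags are assumptions modeled on the other samples.

```sh
# Hypothetical invocation of the built C++ sample
./object_detection_sample_ssd -m <path_to_model>/mobilenet-ssd.xml -i <path_to_image>/cat.bmp -d CPU
```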