[Python API] Move samples and docs to the new directory (#7851)

* [Python API] Move samples and docs to the new directory

* move samples to the new directory

* try to fix build and pychecks

* fix links

* fix pychecks

* fix cmake

* fix cpack installation

* Update inference-engine/ie_bridges/python/CMakeLists.txt

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>

Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
Anastasia Kuporosova authored 2021-10-14 14:49:35 +03:00, committed by GitHub
parent eb838d5699 · commit 799be77e33
39 changed files with 126 additions and 91 deletions


@@ -5,9 +5,11 @@ on:
   push:
     paths:
       - 'inference-engine/ie_bridges/python/**'
+      - 'samples/python/**'
   pull_request:
     paths:
       - 'inference-engine/ie_bridges/python/**'
+      - 'samples/python/**'
 jobs:
   linters:
     runs-on: ubuntu-18.04
@@ -23,14 +25,14 @@ jobs:
       - name: Install dependencies
         run: python -m pip install -r inference-engine/ie_bridges/python/requirements_dev.txt
       - name: Run Flake on samples
-        run: python -m flake8 ./ --config=../setup.cfg
-        working-directory: inference-engine/ie_bridges/python/sample
+        run: python -m flake8 ./ --config=setup.cfg
+        working-directory: samples/python
       - name: Create code style diff for samples
         if: failure()
         run: |
           python -m black -l 160 -S ./
           git diff > samples_diff.diff
-        working-directory: inference-engine/ie_bridges/python/sample
+        working-directory: samples/python
       - uses: actions/upload-artifact@v2
         if: failure()
         with:
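For reference, the relocated lint step is easy to reproduce before pushing; a minimal sketch in Python, assuming `flake8` and `black` are installed and the commands run from the repository root:

```python
# Sketch: reproduce the "Run Flake on samples" CI step locally.
import subprocess

# flake8 now runs inside samples/python and picks up the setup.cfg
# that this commit adds next to the samples.
subprocess.run(
    ["python", "-m", "flake8", "./", "--config=setup.cfg"],
    cwd="samples/python",
    check=True,  # raise CalledProcessError on lint findings, like the CI job
)

# Failure path of the workflow: reformat with black, then inspect the diff.
subprocess.run(["python", "-m", "black", "-l", "160", "-S", "./"], cwd="samples/python")
subprocess.run(["git", "diff"], cwd="samples/python")
```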


@@ -92,6 +92,7 @@ add_subdirectory(inference-engine)
 openvino_developer_export_targets(COMPONENT ngraph TARGETS ngraph_backend interpreter_backend)
 include(cmake/extra_modules.cmake)
+add_subdirectory(samples)
 add_subdirectory(model-optimizer)
 add_subdirectory(docs)
 add_subdirectory(tools)


@@ -216,6 +216,14 @@ function(build_docs)
"${OpenVINO_SOURCE_DIR}/inference-engine/*.png"
"${OpenVINO_SOURCE_DIR}/inference-engine/*.gif"
"${OpenVINO_SOURCE_DIR}/inference-engine/*.jpg"
"${OpenVINO_SOURCE_DIR}/runtime/*.md"
"${OpenVINO_SOURCE_DIR}/runtime/*.png"
"${OpenVINO_SOURCE_DIR}/runtime/*.gif"
"${OpenVINO_SOURCE_DIR}/runtime/*.jpg"
"${OpenVINO_SOURCE_DIR}/samples/*.md"
"${OpenVINO_SOURCE_DIR}/samples/*.png"
"${OpenVINO_SOURCE_DIR}/samples/*.gif"
"${OpenVINO_SOURCE_DIR}/samples/*.jpg"
"${OpenVINO_SOURCE_DIR}/tools/*.md"
"${OpenVINO_SOURCE_DIR}/tools/*.png"
"${OpenVINO_SOURCE_DIR}/tools/*.gif"


@@ -7,7 +7,7 @@ The OpenVINO™ Python\* package available in the `<INSTALL_DIR>/python/python3.
 The OpenVINO™ Python\* package includes the following sub-packages:
-- [openvino.inference_engine](../../inference-engine/ie_bridges/python/docs/api_overview.md) - Python\* wrapper on OpenVINO™ Inference Engine.
+- [openvino.inference_engine](../../runtime/bindings/python/docs/api_overview.md) - Python\* wrapper on OpenVINO™ Inference Engine.
 - `openvino.tools.accuracy_checker` - Measure accuracy.
 - `openvino.tools.benchmark` - Measure latency and throughput.


@@ -11,36 +11,36 @@ Inference Engine sample applications include the following:
 - **Speech Sample** - Acoustic model inference based on Kaldi neural networks and speech feature vectors.
   - [Automatic Speech Recognition C++ Sample](../../inference-engine/samples/speech_sample/README.md)
-  - [Automatic Speech Recognition Python Sample](../../inference-engine/ie_bridges/python/sample/speech_sample/README.md)
+  - [Automatic Speech Recognition Python Sample](../../samples/python/speech_sample/README.md)
 - **Benchmark Application** Estimates deep learning inference performance on supported devices for synchronous and asynchronous modes.
   - [Benchmark C++ Tool](../../inference-engine/samples/benchmark_app/README.md)
   - [Benchmark Python Tool](../../tools/benchmark_tool/README.md)
 - **Hello Classification Sample** Inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API. Input of any size and layout can be set to an infer request which will be pre-processed automatically during inference (the sample supports only images as inputs and supports Unicode paths).
   - [Hello Classification C++ Sample](../../inference-engine/samples/hello_classification/README.md)
   - [Hello Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_classification/README.md)
-  - [Hello Classification Python Sample](../../inference-engine/ie_bridges/python/sample/hello_classification/README.md)
+  - [Hello Classification Python Sample](../../samples/python/hello_classification/README.md)
 - **Hello NV12 Input Classification Sample** Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-process it automatically during inference. The sample supports only images as inputs.
   - [Hello NV12 Input Classification C++ Sample](../../inference-engine/samples/hello_nv12_input_classification/README.md)
   - [Hello NV12 Input Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md)
 - **Hello Query Device Sample** Query of available Inference Engine devices and their metrics, configuration values.
   - [Hello Query Device C++ Sample](../../inference-engine/samples/hello_query_device/README.md)
-  - [Hello Query Device Python* Sample](../../inference-engine/ie_bridges/python/sample/hello_query_device/README.md)
+  - [Hello Query Device Python* Sample](../../samples/python/hello_query_device/README.md)
 - **Hello Reshape SSD Sample** Inference of SSD networks resized by ShapeInfer API according to an input size.
   - [Hello Reshape SSD C++ Sample**](../../inference-engine/samples/hello_reshape_ssd/README.md)
-  - [Hello Reshape SSD Python Sample**](../../inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md)
+  - [Hello Reshape SSD Python Sample**](../../samples/python/hello_reshape_ssd/README.md)
 - **Image Classification Sample Async** Inference of image classification networks like AlexNet and GoogLeNet using Asynchronous Inference Request API (the sample supports only images as inputs).
   - [Image Classification Async C++ Sample](../../inference-engine/samples/classification_sample_async/README.md)
-  - [Image Classification Async Python* Sample](../../inference-engine/ie_bridges/python/sample/classification_sample_async/README.md)
+  - [Image Classification Async Python* Sample](../../samples/python/classification_sample_async/README.md)
 - **Style Transfer Sample** Style Transfer sample (the sample supports only images as inputs).
   - [Style Transfer C++ Sample](../../inference-engine/samples/style_transfer_sample/README.md)
-  - [Style Transfer Python* Sample](../../inference-engine/ie_bridges/python/sample/style_transfer_sample/README.md)
+  - [Style Transfer Python* Sample](../../samples/python/style_transfer_sample/README.md)
 - **nGraph Function Creation Sample** Construction of the LeNet network using the nGraph function creation sample.
   - [nGraph Function Creation C++ Sample](../../inference-engine/samples/ngraph_function_creation_sample/README.md)
-  - [nGraph Function Creation Python Sample](../../inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md)
+  - [nGraph Function Creation Python Sample](../../samples/python/ngraph_function_creation_sample/README.md)
 - **Object Detection for SSD Sample** Inference of object detection networks based on the SSD, this sample is simplified version that supports only images as inputs.
   - [Object Detection SSD C++ Sample](../../inference-engine/samples/object_detection_sample_ssd/README.md)
   - [Object Detection SSD C Sample](../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md)
-  - [Object Detection SSD Python* Sample](../../inference-engine/ie_bridges/python/sample/object_detection_sample_ssd/README.md)
+  - [Object Detection SSD Python* Sample](../../samples/python/object_detection_sample_ssd/README.md)

 > **NOTE**: All C++ samples support input paths containing only ASCII characters, except the Hello Classification Sample, that supports Unicode.


@@ -274,4 +274,4 @@ exec_net = ie.load_network(network=net, device_name="CPU")
 result_ie = exec_net.infer(input_data)
 ```

-For more information about Python API, refer to [Inference Engine Python API Overview](../../../../../inference-engine/ie_bridges/python/docs/api_overview.md).
+For more information about Python API, refer to [Inference Engine Python API Overview](../../../../../runtime/bindings/python/docs/api_overview.md).
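For context, the full flow around the `load_network`/`infer` snippet above looks roughly like this; a sketch against the legacy `openvino.inference_engine` API, with placeholder model paths and dummy input data:

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# "model.xml"/"model.bin" are placeholders for an IR produced by Model Optimizer.
net = ie.read_network(model="model.xml", weights="model.bin")
input_blob = next(iter(net.input_info))

# Dummy input shaped like the model input; replace with real preprocessed data.
input_data = {input_blob: np.zeros(net.input_info[input_blob].input_data.shape, dtype=np.float32)}

exec_net = ie.load_network(network=net, device_name="CPU")
result_ie = exec_net.infer(input_data)
```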


@@ -18,7 +18,7 @@ Hello Classification C sample application demonstrates how to use the following
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
 | Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/hello_classification/README.md), [Python](../../../python/sample/hello_classification/README.md) |
+| Other language realization | [C++](../../../../samples/hello_classification/README.md), [Python](../../../../../samples/python/hello_classification/README.md) |

 ## How It Works


@@ -24,7 +24,7 @@ Basic Inference Engine API is covered by [Hello Classification C sample](../hell
 | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx)
 | Validated images | The sample uses OpenCV* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (.bmp, .png, .jpg)
 | Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/object_detection_sample_ssd/README.md), [Python](../../../python/sample/object_detection_sample_ssd/README.md) |
+| Other language realization | [C++](../../../../samples/object_detection_sample_ssd/README.md), [Python](../../../../../samples/python/object_detection_sample_ssd/README.md) |

 ## How It Works


@@ -92,13 +92,6 @@ install(PROGRAMS src/openvino/__init__.py
         DESTINATION ${PYTHON_BRIDGE_CPACK_PATH}/${PYTHON_VERSION}/openvino
         COMPONENT ${PYTHON_COMPONENT})

-# install Python samples
-ie_cpack_add_component(python_samples)
-install(DIRECTORY sample/
-        DESTINATION samples/python
-        USE_SOURCE_PERMISSIONS
-        COMPONENT python_samples)
-ie_cpack(${PYTHON_COMPONENT} python_samples)
+# package Python samples
+ie_cpack(${PYTHON_COMPONENT})


@@ -22,7 +22,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png), single-channel `ubyte` images.
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [Python](../../ie_bridges/python/sample/classification_sample_async/README.md) |
+| Other language realization | [Python](../../../samples/python/classification_sample_async/README.md) |

 ## How It Works


@@ -18,7 +18,7 @@ Hello Classification C++ sample application demonstrates how to use the followin
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C](../../ie_bridges/c/samples/hello_classification/README.md), [Python](../../ie_bridges/python/sample/hello_classification/README.md) |
+| Other language realization | [C](../../ie_bridges/c/samples/hello_classification/README.md), [Python](../../../samples/python/hello_classification/README.md) |

 ## How It Works


@@ -13,7 +13,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
 | Options | Values |
 |:--- |:---
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [Python](../../ie_bridges/python/sample/hello_query_device/README.md) |
+| Other language realization | [Python](../../../samples/python/hello_query_device/README.md) |

 ## How It Works


@@ -20,7 +20,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C](../../ie_bridges/c/samples/object_detection_sample_ssd/README.md), [Python](../../ie_bridges/python/sample/hello_reshape_ssd/README.md) |
+| Other language realization | [C](../../ie_bridges/c/samples/object_detection_sample_ssd/README.md), [Python](../../../samples/python/hello_reshape_ssd/README.md) |

 ## How It Works


@@ -23,7 +23,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
 | Model Format | Network weights file (\*.bin)
 | Validated images | single-channel `MNIST ubyte` images
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [Python](../../ie_bridges/python/sample/ngraph_function_creation_sample/README.md) |
+| Other language realization | [Python](../../../samples/python/ngraph_function_creation_sample/README.md) |

 ## How It Works


@@ -20,7 +20,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C](../../ie_bridges/c/samples/object_detection_sample_ssd/README.md), [Python](../../ie_bridges/python/sample/object_detection_sample_ssd/README.md) |
+| Other language realization | [C](../../ie_bridges/c/samples/object_detection_sample_ssd/README.md), [Python](../../../samples/python/object_detection_sample_ssd/README.md) |

 ## How It Works


@@ -19,7 +19,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [Python](../../ie_bridges/python/sample/style_transfer_sample/README.md) |
+| Other language realization | [Python](../../../samples/python/style_transfer_sample/README.md) |

 ## How It Works

samples/CMakeLists.txt (new file)

@@ -0,0 +1,12 @@
+# Copyright (C) 2021 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+#
+# install Python samples
+ie_cpack_add_component(python_samples)
+install(DIRECTORY python/
+        DESTINATION samples/python
+        USE_SOURCE_PERMISSIONS
+        COMPONENT python_samples)


@@ -16,15 +16,15 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
 | :------------------------- | :-------------------------------------------------------------------------------------------------------- |
 | Validated Models | [alexnet](@ref omz_models_model_alexnet) |
 | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/classification_sample_async/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/classification_sample_async/README.md) |

 ## How It Works

 At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream.

 You can see the explicit description of
-each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.

 ## Running

@@ -73,9 +73,9 @@ To run the sample, you need specify a model and image:
 > **NOTES**:
 >
-> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
+> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
-> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.

@@ -146,10 +146,10 @@ The sample application logs each step in a standard output stream and outputs to
 ## See Also

-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
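The asynchronous pattern this README refers to fits in a few lines; a hedged sketch using the legacy API (model path, request count, and input data are illustrative, not the sample's actual code):

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="alexnet.xml", weights="alexnet.bin")  # placeholder IR
input_blob = next(iter(net.input_info))

# num_requests > 1 lets several infer requests be in flight at once.
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=2)

data = np.zeros(net.input_info[input_blob].input_data.shape, dtype=np.float32)
for request_id in range(2):
    exec_net.start_async(request_id=request_id, inputs={input_blob: data})

for request in exec_net.requests:
    request.wait()  # block until this request completes
    # request.output_blobs maps output names to the result blobs
```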


@@ -15,15 +15,15 @@ The following Inference Engine Python API is used in the application:
 | :------------------------- | :-------------------------------------------------------------------------------------------------------- |
 | Validated Models | [alexnet](@ref omz_models_model_alexnet), [googlenet-v1](@ref omz_models_model_googlenet_v1) |
 | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/hello_classification/README.md), [C](../../../c/samples/hello_classification/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/hello_classification/README.md), [C](../../../inference-engine/ie_bridges/c/samples/hello_classification/README.md) |

 ## How It Works

 At startup, the sample application reads command-line parameters, prepares input data, loads a specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream.

 You can see the explicit description of
-each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.

 ## Running

@@ -62,9 +62,9 @@ To run the sample, you need specify a model and image:
 > **NOTES**:
 >
-> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
+> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
-> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.

@@ -117,10 +117,10 @@ The sample application logs each step in a standard output stream and outputs to
 ## See Also

-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IECore.read_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a0d69c298618fab3a08b855442dca430f


@@ -1,6 +1,6 @@
 # Hello Query Device Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README}

-This sample demonstrates how to show Inference Engine devices and prints their metrics and default configuration values using [Query Device API feature](../../../../../docs/IE_DG/InferenceEngine_QueryAPI.md).
+This sample demonstrates how to show Inference Engine devices and prints their metrics and default configuration values using [Query Device API feature](../../../docs/IE_DG/InferenceEngine_QueryAPI.md).

 The following Inference Engine Python API is used in the application:

@@ -11,8 +11,8 @@ The following Inference Engine Python API is used in the application:
 | Options | Values |
 | :------------------------- | :---------------------------------------------------------------------- |
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/hello_query_device/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/hello_query_device/README.md) |

 ## How It Works

@@ -103,7 +103,7 @@ The application prints all available devices with their supported metrics and de
 ```

 ## See Also

-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)

 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IECore.get_metric]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#af1cdf2ecbea6399c556957c2c2fdf8eb
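The query flow itself is tiny; a sketch with the legacy API (only the device-name metric is shown, output formatting omitted):

```python
from openvino.inference_engine import IECore

ie = IECore()
for device in ie.available_devices:
    # FULL_DEVICE_NAME is one of the standard metrics plugins report.
    print(device, ie.get_metric(device, 'FULL_DEVICE_NAME'))
```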


@@ -1,6 +1,6 @@
 # Hello Reshape SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README}

-This sample demonstrates how to do synchronous inference of object detection networks using [Shape Inference feature](../../../../../docs/IE_DG/ShapeInference.md).
+This sample demonstrates how to do synchronous inference of object detection networks using [Shape Inference feature](../../../docs/IE_DG/ShapeInference.md).
 Models with only 1 input and output are supported.

 The following Inference Engine Python API is used in the application:

@@ -16,8 +16,8 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
 | :------------------------- | :-------------------------------------------------------------------------------------------------------------------------- |
 | Validated Models | [mobilenet-ssd](@ref omz_models_model_mobilenet_ssd) |
 | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/hello_reshape_ssd/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/hello_reshape_ssd/README.md) |

 ## How It Works

@@ -25,7 +25,7 @@ At startup, the sample application reads command-line parameters, prepares input
 As a result, the program creates an output image, logging each step in a standard output stream.

 You can see the explicit description of
-each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.

 ## Running

@@ -70,9 +70,9 @@ To run the sample, you need specify a model and image:
 > **NOTES**:
 >
-> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
+> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
-> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.

@@ -114,10 +114,10 @@ The sample application logs each step in a standard output stream and creates an
 ## See Also

-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967
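The Shape Inference feature referenced above boils down to one call on the network before loading it; a minimal sketch (model path and shape values are placeholders):

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ssd.xml", weights="ssd.bin")  # placeholder IR
input_blob = next(iter(net.input_info))

# Resize the network to the incoming image size, then load the reshaped network.
net.reshape({input_blob: [1, 3, 544, 992]})  # N, C, H, W - illustrative values
exec_net = ie.load_network(network=net, device_name="CPU")
```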


@@ -1,6 +1,6 @@
 # nGraph Function Creation Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_ngraph_function_creation_sample_README}

-This sample demonstrates how to execute an inference using [nGraph function feature](../../../../../docs/nGraph_DG/build_function.md) to create a network that uses weights from LeNet classification network, which is known to work well on digit classification tasks. So you don't need an XML file, the model will be created from the source code on the fly.
+This sample demonstrates how to execute an inference using [nGraph function feature](../../../docs/nGraph_DG/build_function.md) to create a network that uses weights from LeNet classification network, which is known to work well on digit classification tasks. So you don't need an XML file, the model will be created from the source code on the fly.

 In addition to regular grayscale images with a digit, the sample also supports single-channel `ubyte` images as an input.

@@ -18,15 +18,15 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
 | Validated Models | LeNet |
 | Model Format | Network weights file (\*.bin) |
 | Validated images | The sample uses OpenCV\* to [read input grayscale image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png) or single-channel `ubyte` image |
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/ngraph_function_creation_sample/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/ngraph_function_creation_sample/README.md) |

 ## How It Works

-At startup, the sample application reads command-line parameters, prepares input data, creates a network using [nGraph function feature](../../../../../docs/nGraph_DG/build_function.md) and passed weights file, loads the network and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream.
+At startup, the sample application reads command-line parameters, prepares input data, creates a network using [nGraph function feature](../../../docs/nGraph_DG/build_function.md) and passed weights file, loads the network and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream.

 You can see the explicit description of
-each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.

 ## Running

@@ -67,7 +67,7 @@ To run the sample, you need specify a model weights and image:
 >
 > - This sample supports models with FP32 weights only.
 >
-> - The `lenet.bin` weights file was generated by the [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
+> - The `lenet.bin` weights file was generated by the [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
 >
 > - The original model is available in the [Caffe* repository](https://github.com/BVLC/caffe/tree/master/examples/mnist) on GitHub\*.
 >

@@ -128,10 +128,10 @@ The sample application logs each step in a standard output stream and outputs to
 ## See Also

-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IENetwork]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html


@@ -16,8 +16,8 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
 | :------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
 | Validated Models | [mobilenet-ssd](@ref omz_models_model_mobilenet_ssd), [face-detection-0206](@ref omz_models_model_face_detection_0206) |
 | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/object_detection_sample_ssd/README.md), [C](../../../c/samples/object_detection_sample_ssd/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/object_detection_sample_ssd/README.md), [C](../../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md) |

 ## How It Works

@@ -25,7 +25,7 @@ On startup, the sample application reads command-line parameters, prepares input
 As a result, the program creates an output image, logging each step in a standard output stream.

 You can see the explicit description of
-each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.

 ## Running

@@ -72,9 +72,9 @@ To run the sample, you need specify a model and image:
 > **NOTES**:
 >
-> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
+> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
-> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.

@@ -115,10 +115,10 @@ The sample application logs each step in a standard output stream and creates an
 ## See Also

-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967

samples/python/setup.cfg (new file)

@@ -0,0 +1,19 @@
+[flake8]
+filename = *.py
+max-line-length = 160
+ignore = E203
+max-parameters-amount = 8
+show_source = True
+docstring-convention = google
+enable-extensions = G
+
+[pydocstyle]
+convention = google
+
+[mypy]
+ignore_missing_imports = True
+disable_error_code = attr-defined
+show_column_numbers = True
+show_error_context = True
+show_absolute_path = True
+pretty = True
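With `docstring-convention = google` (flake8-docstrings) and the matching `[pydocstyle]` section, sample code is expected to carry Google-style docstrings; a hypothetical function that would satisfy those checks (the name and signature are illustrative only):

```python
def classify_image(image_path: str, device: str = 'CPU') -> dict:
    """Run synchronous classification on a single image.

    Args:
        image_path: Path to the input image (BGR channel order by default).
        device: Inference Engine device name, such as 'CPU' or 'GPU'.

    Returns:
        A mapping from output blob names to inference results.
    """
    raise NotImplementedError('illustrative stub only')
```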


@@ -19,15 +19,15 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
 | :------------------------- | :---------------------------------------------------------------------------------------------------- |
 | Validated Models | Acoustic model based on Kaldi* neural networks (see [Model Preparation](#model-preparation) section) |
 | Model Format | Inference Engine Intermediate Representation (.xml + .bin) |
-| Supported devices | See [Execution Modes](#execution-modes) section below and [List Supported Devices](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/speech_sample/README.md) |
+| Supported devices | See [Execution Modes](#execution-modes) section below and [List Supported Devices](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/speech_sample/README.md) |

 ## How It Works

 At startup, the sample application reads command-line parameters, loads a specified model and input data to the Inference Engine plugin, performs synchronous inference on all speech utterances stored in the input file, logging each step in a standard output stream.

 You can see the explicit description of
-each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.

 ## GNA-specific details

@@ -206,10 +206,10 @@ The sample application logs each step in a standard output stream.
 ## See Also

-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

 [IENetwork.batch_size]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a79a647cb1b49645616eaeb2ca255ef2e
 [IENetwork.add_outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#ae8024b07f3301d6d5de5c0d153e2e6e6


@@ -17,8 +17,8 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
 | :------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | Validated Models | [fast-neural-style-mosaic-onnx](@ref omz_models_model_fast_neural_style_mosaic_onnx) |
 | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/style_transfer_sample/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/style_transfer_sample/README.md) |

 ## How It Works

@@ -26,7 +26,7 @@ At startup, the sample application reads command-line parameters, prepares input
 As a result, the program creates an output image(s), logging each step in a standard output stream.

 You can see the explicit description of
-each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.

 ## Running

@@ -84,9 +84,9 @@ To run the sample, you need specify a model and image:
 > **NOTES**:
 >
-> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
+> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
-> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.

@@ -127,10 +127,10 @@ The sample application logs each step in a standard output stream and creates an
 ## See Also

-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

 [IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
 [IECore.add_extension]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#a8a4b671a9928c7c059bd1e76d2333967