[Samples] Move C samples to the new samples directory (#8021)
* [Samples] Move C samples to the new samples directory
* fix samples
* [Samples] Move C samples to the new samples directory
* fix samples
* code review inspired fixes
* rename folder to lower case
* move ENABLE_SAMPLES cond level up, fix readmes
* fix ref in doc
* fix install path
* fix install of samples to tests

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>
This commit is contained in:
parent
58d845f351
commit
3251e0cbc3
@@ -92,7 +92,9 @@ add_subdirectory(inference-engine)
 openvino_developer_export_targets(COMPONENT ngraph TARGETS ngraph_backend interpreter_backend)
 include(cmake/extra_modules.cmake)
 
+if(ENABLE_SAMPLES)
 add_subdirectory(samples)
+endif()
 add_subdirectory(model-optimizer)
 add_subdirectory(docs)
 add_subdirectory(tools)
@@ -17,11 +17,11 @@ Inference Engine sample applications include the following:
 - [Benchmark Python Tool](../../tools/benchmark_tool/README.md)
 - **Hello Classification Sample** – Inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API. Input of any size and layout can be set to an infer request which will be pre-processed automatically during inference (the sample supports only images as inputs and supports Unicode paths).
 - [Hello Classification C++ Sample](../../inference-engine/samples/hello_classification/README.md)
-- [Hello Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_classification/README.md)
+- [Hello Classification C Sample](../../samples/c/hello_classification/README.md)
 - [Hello Classification Python Sample](../../samples/python/hello_classification/README.md)
 - **Hello NV12 Input Classification Sample** – Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-process it automatically during inference. The sample supports only images as inputs.
 - [Hello NV12 Input Classification C++ Sample](../../inference-engine/samples/hello_nv12_input_classification/README.md)
-- [Hello NV12 Input Classification C Sample](../../inference-engine/ie_bridges/c/samples/hello_nv12_input_classification/README.md)
+- [Hello NV12 Input Classification C Sample](../../samples/c/hello_nv12_input_classification/README.md)
 - **Hello Query Device Sample** – Query of available Inference Engine devices and their metrics, configuration values.
 - [Hello Query Device C++ Sample](../../inference-engine/samples/hello_query_device/README.md)
 - [Hello Query Device Python* Sample](../../samples/python/hello_query_device/README.md)

@@ -39,7 +39,7 @@ Inference Engine sample applications include the following:
 - [nGraph Function Creation Python Sample](../../samples/python/ngraph_function_creation_sample/README.md)
 - **Object Detection for SSD Sample** – Inference of object detection networks based on the SSD, this sample is simplified version that supports only images as inputs.
 - [Object Detection SSD C++ Sample](../../inference-engine/samples/object_detection_sample_ssd/README.md)
-- [Object Detection SSD C Sample](../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md)
+- [Object Detection SSD C Sample](../../samples/c/object_detection_sample_ssd/README.md)
 - [Object Detection SSD Python* Sample](../../samples/python/object_detection_sample_ssd/README.md)
 
 > **NOTE**: All C++ samples support input paths containing only ASCII characters, except the Hello Classification Sample, that supports Unicode.
@@ -18,21 +18,18 @@ add_subdirectory(samples)
 foreach(sample benchmark_app classification_sample_async hello_classification
         hello_nv12_input_classification hello_query_device hello_reshape_ssd
         ngraph_function_creation_sample object_detection_sample_ssd
-        speech_sample style_transfer_sample hello_classification_c
-        object_detection_sample_ssd_c hello_nv12_input_classification_c)
+        speech_sample style_transfer_sample)
     if(TARGET ${sample})
         install(TARGETS ${sample}
                 RUNTIME DESTINATION tests COMPONENT tests EXCLUDE_FROM_ALL)
     endif()
 endforeach()
 
-foreach(samples_library opencv_c_wrapper format_reader)
-    if(TARGET ${samples_library})
-        install(TARGETS ${samples_library}
+if(TARGET format_reader)
+    install(TARGETS format_reader
             RUNTIME DESTINATION ${IE_CPACK_RUNTIME_PATH} COMPONENT tests EXCLUDE_FROM_ALL
             LIBRARY DESTINATION ${IE_CPACK_LIBRARY_PATH} COMPONENT tests EXCLUDE_FROM_ALL)
     endif()
-endforeach()
 
 openvino_developer_export_targets(COMPONENT openvino_common TARGETS format_reader ie_samples_utils)
 
@@ -67,26 +64,3 @@ elseif(WIN32)
             PATTERN .clang-format EXCLUDE)
 endif()
 
-# install C samples
-
-ie_cpack_add_component(c_samples DEPENDS core_c)
-
-if(UNIX)
-    install(PROGRAMS samples/build_samples.sh
-            DESTINATION samples/c
-            COMPONENT c_samples)
-elseif(WIN32)
-    install(PROGRAMS samples/build_samples_msvc.bat
-            DESTINATION samples/c
-            COMPONENT c_samples)
-endif()
-
-install(DIRECTORY ie_bridges/c/samples/
-        DESTINATION samples/c
-        COMPONENT c_samples
-        PATTERN ie_bridges/c/samples/CMakeLists.txt EXCLUDE
-        PATTERN ie_bridges/c/samples/.clang-format EXCLUDE)
-
-install(FILES samples/CMakeLists.txt
-        DESTINATION samples/c
-        COMPONENT c_samples)
@@ -9,7 +9,3 @@ add_subdirectory(src)
 if(ENABLE_TESTS)
     add_subdirectory(tests)
 endif()
-
-if(ENABLE_SAMPLES)
-    add_subdirectory(samples)
-endif()
@@ -18,7 +18,7 @@ Hello Classification C++ sample application demonstrates how to use the followin
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C](../../ie_bridges/c/samples/hello_classification/README.md), [Python](../../../samples/python/hello_classification/README.md) |
+| Other language realization | [C](../../../samples/c/hello_classification/README.md), [Python](../../../samples/python/hello_classification/README.md) |
 
 ## How It Works
 
@@ -19,7 +19,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | An uncompressed image in the NV12 color format - \*.yuv
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C](../../ie_bridges/c/samples/hello_nv12_input_classification/README.md) |
+| Other language realization | [C](../../../samples/c/hello_nv12_input_classification/README.md) |
 
 ## How It Works
 
@@ -20,7 +20,7 @@ Basic Inference Engine API is covered by [Hello Classification C++ sample](../he
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C](../../ie_bridges/c/samples/object_detection_sample_ssd/README.md), [Python](../../../samples/python/object_detection_sample_ssd/README.md) |
+| Other language realization | [C](../../../samples/c/object_detection_sample_ssd/README.md), [Python](../../../samples/python/object_detection_sample_ssd/README.md) |
 
 ## How It Works
 
@@ -2,6 +2,47 @@
 # SPDX-License-Identifier: Apache-2.0
 #
 
+add_subdirectory(c)
+
+# TODO: remove this
+foreach(sample hello_classification_c
+        object_detection_sample_ssd_c hello_nv12_input_classification_c)
+    if(TARGET ${sample})
+        install(TARGETS ${sample}
+                RUNTIME DESTINATION tests COMPONENT tests EXCLUDE_FROM_ALL)
+    endif()
+endforeach()
+
+if(TARGET opencv_c_wrapper)
+    install(TARGETS opencv_c_wrapper
+            RUNTIME DESTINATION ${IE_CPACK_RUNTIME_PATH} COMPONENT tests EXCLUDE_FROM_ALL
+            LIBRARY DESTINATION ${IE_CPACK_LIBRARY_PATH} COMPONENT tests EXCLUDE_FROM_ALL)
+endif()
+
+# install C samples
+
+ie_cpack_add_component(c_samples DEPENDS core_c)
+
+if(UNIX)
+    install(PROGRAMS ${IE_MAIN_SOURCE_DIR}/samples/build_samples.sh
+            DESTINATION samples/c
+            COMPONENT c_samples)
+elseif(WIN32)
+    install(PROGRAMS ${IE_MAIN_SOURCE_DIR}/samples/build_samples_msvc.bat
+            DESTINATION samples/c
+            COMPONENT c_samples)
+endif()
+
+install(DIRECTORY c
+        DESTINATION samples
+        COMPONENT c_samples
+        PATTERN c/CMakeLists.txt EXCLUDE
+        PATTERN c/.clang-format EXCLUDE)
+
+install(FILES ${IE_MAIN_SOURCE_DIR}/samples/CMakeLists.txt
+        DESTINATION samples/c
+        COMPONENT c_samples)
+
 # install Python samples
 
 ie_cpack_add_component(python_samples)
@@ -17,8 +17,8 @@ Hello Classification C sample application demonstrates how to use the following
 | Validated Models | [alexnet](@ref omz_models_model_alexnet), [googlenet-v1](@ref omz_models_model_googlenet_v1)
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png)
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/hello_classification/README.md), [Python](../../../../../samples/python/hello_classification/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/hello_classification/README.md), [Python](../../python/hello_classification/README.md) |
 
 ## How It Works
 
@@ -26,11 +26,11 @@ Upon the start-up, the sample application reads command line parameters, loads s
 Then, the sample creates an synchronous inference request object. When inference is done, the application outputs data to the standard output stream.
 
 You can see the explicit description of
-each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
 
 ## Building
 
-To build the sample, please use instructions available at [Build the Sample Applications](../../../../../docs/IE_DG/Samples_Overview.md) section in Inference Engine Samples guide.
+To build the sample, please use instructions available at [Build the Sample Applications](../../../docs/IE_DG/Samples_Overview.md) section in Inference Engine Samples guide.
 
 ## Running
 
@@ -41,9 +41,9 @@ To run the sample, you need specify a model and image:
 
 > **NOTES**:
 >
-> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
+> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
-> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (\*.onnx) that do not require preprocessing.
 
@@ -92,10 +92,10 @@ This sample is an API example, for any performance measurements please use the d
 
 ## See Also
 
-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
 
 [ie_core_create]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#gaab73c7ee3704c742eaac457636259541
 [ie_core_read_network]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#gaa40803295255b3926a3d1b8924f26c29
@@ -15,8 +15,8 @@ Basic Inference Engine API is covered by [Hello Classification C sample](../hell
 | Validated Models | [alexnet](@ref omz_models_model_alexnet)
 | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)
 | Validated images | An uncompressed image in the NV12 color format - \*.yuv
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/hello_nv12_input_classification/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/hello_nv12_input_classification/README.md) |
 
 ## How It Works
 
@@ -29,7 +29,7 @@ each sample step at [Integration Steps](https://docs.openvinotoolkit.org/latest/
 
 ## Building
 
-To build the sample, please use instructions available at [Build the Sample Applications](../../../../../docs/IE_DG/Samples_Overview.md) section in Inference Engine Samples guide.
+To build the sample, please use instructions available at [Build the Sample Applications](../../../docs/IE_DG/Samples_Overview.md) section in Inference Engine Samples guide.
 
 ## Running
 
@@ -57,8 +57,8 @@ ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv
 > model to work with RGB order, you need to reconvert your model using the Model Optimizer tool
 > with `--reverse_input_channels` argument specified. For more information about the argument,
 > refer to **When to Reverse Input Channels** section of
-> [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
-> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
+> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
@@ -107,10 +107,10 @@ This sample is an API example, for any performance measurements please use the d
 
 ## See Also
 
-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
 
 [ie_network_set_color_format]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga85f3251f1f7b08507c297e73baa58969
 [ie_blob_make_memory_nv12]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Blob.html#ga0a2d97b0d40a53c01ead771f82ae7f4a
@@ -1,6 +1,6 @@
 # Object Detection SSD C Sample {#openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README}
 
-This sample demonstrates how to execute an inference of object detection networks like SSD-VGG using Asynchronous Inference Request API and [input reshape feature](../../../../../docs/IE_DG/ShapeInference.md).
+This sample demonstrates how to execute an inference of object detection networks like SSD-VGG using Asynchronous Inference Request API and [input reshape feature](../../../docs/IE_DG/ShapeInference.md).
 
 Object Detection SSD C sample application demonstrates how to use the following Inference Engine C API in applications:
 
@@ -16,15 +16,15 @@ Object Detection SSD C sample application demonstrates how to use the following
 
 Basic Inference Engine API is covered by [Hello Classification C sample](../hello_classification/README.md).
 
-> **NOTE**: This sample uses `ie_network_reshape()` to set the batch size. While supported by SSD networks, reshape may not work with arbitrary topologies. See [Shape Inference Guide](../../../../../docs/IE_DG/ShapeInference.md) for more info.
+> **NOTE**: This sample uses `ie_network_reshape()` to set the batch size. While supported by SSD networks, reshape may not work with arbitrary topologies. See [Shape Inference Guide](../../../docs/IE_DG/ShapeInference.md) for more info.
 
 | Options | Values |
 |:--- |:---
 | Validated Models | [person-detection-retail-0013](@ref omz_models_model_person_detection_retail_0013)
 | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx)
 | Validated images | The sample uses OpenCV* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (.bmp, .png, .jpg)
-| Supported devices | [All](../../../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../../samples/object_detection_sample_ssd/README.md), [Python](../../../../../samples/python/object_detection_sample_ssd/README.md) |
+| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
+| Other language realization | [C++](../../../inference-engine/samples/object_detection_sample_ssd/README.md), [Python](../../python/object_detection_sample_ssd/README.md) |
 
 ## How It Works
 
@@ -32,11 +32,11 @@ Upon the start-up the sample application reads command line parameters, loads sp
 Engine plugin. Then, the sample creates an asynchronous inference request object. When inference is done, the application creates output image(s) and output data to the standard output stream.
 
 You can see the explicit description of
-each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
+each sample step at [Integration Steps](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md) section of "Integrate the Inference Engine with Your Application" guide.
 
 ## Building
 
-To build the sample, please use instructions available at [Build the Sample Applications](../../../../../docs/IE_DG/Samples_Overview.md) section in Inference Engine Samples guide.
+To build the sample, please use instructions available at [Build the Sample Applications](../../../docs/IE_DG/Samples_Overview.md) section in Inference Engine Samples guide.
 
 ## Running
 
@@ -70,9 +70,9 @@ Options:
 
 > **NOTES**:
 >
-> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
+> - By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
 >
-> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
+> - Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
@@ -151,10 +151,10 @@ This sample is an API example, for any performance measurements please use the d
 
 ## See Also
 
-- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
-- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
+- [Integrate the Inference Engine with Your Application](../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
+- [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md)
 - [Model Downloader](@ref omz_tools_downloader)
-- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
 
 [ie_infer_request_infer_async]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#gad2351010e292b6faec959a3d5a8fb60e
 [ie_infer_request_wait]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#ga0c05e63e63c8d9cdd92900e82b0137c9
@@ -16,7 +16,7 @@ The following Inference Engine Python API is used in the application:
 | Validated Models | [alexnet](@ref omz_models_model_alexnet), [googlenet-v1](@ref omz_models_model_googlenet_v1) |
 | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
 | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../inference-engine/samples/hello_classification/README.md), [C](../../../inference-engine/ie_bridges/c/samples/hello_classification/README.md) |
+| Other language realization | [C++](../../../inference-engine/samples/hello_classification/README.md), [C](../../c/hello_classification/README.md) |
 
 ## How It Works
 
@ -17,7 +17,7 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
|
||||
| Validated Models | [mobilenet-ssd](@ref omz_models_model_mobilenet_ssd), [face-detection-0206](@ref omz_models_model_face_detection_0206) |
|
||||
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
|
||||
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
|
||||
| Other language realization | [C++](../../../inference-engine/samples/object_detection_sample_ssd/README.md), [C](../../../inference-engine/ie_bridges/c/samples/object_detection_sample_ssd/README.md) |
|
||||
| Other language realization | [C++](../../../inference-engine/samples/object_detection_sample_ssd/README.md), [C](../../c/object_detection_sample_ssd/README.md) |
|
||||
|
||||
## How It Works
|
||||
|
||||
|