diff --git a/CMakeLists.txt b/CMakeLists.txt index b0c6d805e6d..e22069d7c29 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -87,10 +87,8 @@ add_subdirectory(openvino) add_subdirectory(ngraph) add_subdirectory(inference-engine) add_subdirectory(runtime) +add_subdirectory(samples) include(cmake/extra_modules.cmake) -if(ENABLE_SAMPLES) - add_subdirectory(samples) -endif() add_subdirectory(model-optimizer) add_subdirectory(docs) add_subdirectory(tools) diff --git a/docs/IE_DG/Bfloat16Inference.md b/docs/IE_DG/Bfloat16Inference.md index 0461c6ee2b7..dcca07a47eb 100644 --- a/docs/IE_DG/Bfloat16Inference.md +++ b/docs/IE_DG/Bfloat16Inference.md @@ -60,7 +60,7 @@ Low-Precision 8-bit integer models cannot be converted to BF16, even if bfloat16 Bfloat16 simulation mode is available on CPU and Intel® AVX-512 platforms that do not support the native `avx512_bf16` instruction. The simulator does not guarantee an adequate performance. To enable Bfloat16 simulator: -* In [Benchmark App](../../inference-engine/samples/benchmark_app/README.md), add the `-enforcebf16=true` option +* In [Benchmark App](../../samples/cpp/benchmark_app/README.md), add the `-enforcebf16=true` option * In C++ API, set `KEY_ENFORCE_BF16` to `YES` * In C API: ``` diff --git a/docs/IE_DG/Extensibility_DG/Intro.md b/docs/IE_DG/Extensibility_DG/Intro.md index aa2e7d87ba3..7411233605e 100644 --- a/docs/IE_DG/Extensibility_DG/Intro.md +++ b/docs/IE_DG/Extensibility_DG/Intro.md @@ -45,4 +45,4 @@ The following pages describe how to integrate custom _kernels_ into the Inferenc * [Build an extension library using CMake*](Building.md) * [Using Inference Engine Samples](../Samples_Overview.md) -* [Hello Shape Infer SSD sample](../../../inference-engine/samples/hello_reshape_ssd/README.md) +* [Hello Shape Infer SSD sample](../../../samples/cpp/hello_reshape_ssd/README.md) diff --git a/docs/IE_DG/InferenceEngine_QueryAPI.md b/docs/IE_DG/InferenceEngine_QueryAPI.md index f5b9399a240..23360cc62ab 100644 --- a/docs/IE_DG/InferenceEngine_QueryAPI.md +++ b/docs/IE_DG/InferenceEngine_QueryAPI.md @@ -2,7 +2,7 @@ Introduction to Inference Engine Device Query API {#openvino_docs_IE_DG_Inferenc =============================== This section provides a high-level description of the process of querying of different device properties and configuration values. -Refer to the [Hello Query Device Sample](../../inference-engine/samples/hello_query_device/README.md) sources and [Multi-Device Plugin guide](supported_plugins/MULTI.md) for example of using the Inference Engine Query API in user applications. +Refer to the [Hello Query Device Sample](../../samples/cpp/hello_query_device/README.md) sources and [Multi-Device Plugin guide](supported_plugins/MULTI.md) for example of using the Inference Engine Query API in user applications. ## Using the Inference Engine Query API in Your Code diff --git a/docs/IE_DG/Int8Inference.md b/docs/IE_DG/Int8Inference.md index 2577e7dc4ec..32d410b09ef 100644 --- a/docs/IE_DG/Int8Inference.md +++ b/docs/IE_DG/Int8Inference.md @@ -41,7 +41,7 @@ After that, you should quantize the model by the [Model Quantizer](@ref omz_tool ## Inference -The simplest way to infer the model and collect performance counters is the [C++ Benchmark Application](../../inference-engine/samples/benchmark_app/README.md). +The simplest way to infer the model and collect performance counters is the [C++ Benchmark Application](../../samples/cpp/benchmark_app/README.md). 
```sh ./benchmark_app -m resnet-50-tf.xml -d CPU -niter 1 -api sync -report_type average_counters -report_folder pc_report_dir ``` diff --git a/docs/IE_DG/Integrate_with_customer_application_new_API.md b/docs/IE_DG/Integrate_with_customer_application_new_API.md index 870d840c95c..4d543a9e891 100644 --- a/docs/IE_DG/Integrate_with_customer_application_new_API.md +++ b/docs/IE_DG/Integrate_with_customer_application_new_API.md @@ -2,7 +2,7 @@ Integrate the Inference Engine with Your Application {#openvino_docs_IE_DG_Integ =============================== This section provides a high-level description of the process of integrating the Inference Engine into your application. -Refer to the [Hello Classification Sample](../../inference-engine/samples/hello_classification/README.md) sources +Refer to the [Hello Classification Sample](../../samples/cpp/hello_classification/README.md) sources for example of using the Inference Engine in applications. ## Use the Inference Engine API in Your Code @@ -73,7 +73,7 @@ methods: > Inference Engine expects two separate image planes (Y and UV). You must use a specific > `InferenceEngine::NV12Blob` object instead of default blob object and set this blob to > the Inference Engine Infer Request using `InferenceEngine::InferRequest::SetBlob()`. -> Refer to [Hello NV12 Input Classification C++ Sample](../../inference-engine/samples/hello_nv12_input_classification/README.md) +> Refer to [Hello NV12 Input Classification C++ Sample](../../samples/cpp/hello_nv12_input_classification/README.md) > for more details. If you skip this step, the default values are set: @@ -209,6 +209,6 @@ It's allowed to specify additional build options (e.g. to build CMake project on ### Run Your Application -Before running, make sure you completed **Set the Environment Variables** section in [OpenVINO Installation](../../inference-engine/samples/hello_nv12_input_classification/README.md) document so that the application can find the libraries. +Before running, make sure you completed **Set the Environment Variables** section in [OpenVINO Installation](../../samples/cpp/hello_nv12_input_classification/README.md) document so that the application can find the libraries. [integration_process]: img/integration_process.png diff --git a/docs/IE_DG/Intro_to_Performance.md b/docs/IE_DG/Intro_to_Performance.md index ca360d0d06f..5a714240470 100644 --- a/docs/IE_DG/Intro_to_Performance.md +++ b/docs/IE_DG/Intro_to_Performance.md @@ -29,7 +29,7 @@ Refer to the ENABLE_FP16_FOR_QUANTIZED_MODELS key in the [GPU Plugin documentati One way to increase computational efficiency is batching, which combines many (potentially tens) of input images to achieve optimal throughput. However, high batch size also comes with a latency penalty. So, for more real-time oriented usages, lower batch sizes (as low as a single input) are used. -Refer to the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample, which allows latency vs. throughput measuring. +Refer to the [Benchmark App](../../samples/cpp/benchmark_app/README.md) sample, which allows latency vs. throughput measuring. ## Using Caching API for first inference latency optimization Since with the 2021.4 release, Inference Engine provides an ability to enable internal caching of loaded networks. @@ -42,7 +42,7 @@ To gain better performance on accelerators, such as VPU, the Inference Engine us [Integrating Inference Engine in Your Application (current API)](Integrate_with_customer_application_new_API.md)). 
The point is amortizing the costs of data transfers, by pipe-lining, see [Async API explained](@ref omz_demos_object_detection_demo_cpp). Since the pipe-lining relies on the availability of the parallel slack, running multiple inference requests in parallel is essential. -Refer to the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample, which enables running a number of inference requests in parallel. Specifying different number of request produces different throughput measurements. +Refer to the [Benchmark App](../../samples/cpp/benchmark_app/README.md) sample, which enables running a number of inference requests in parallel. Specifying different number of request produces different throughput measurements. ## Best Latency on the Multi-Socket CPUs Note that when latency is of concern, there are additional tips for multi-socket systems. @@ -70,7 +70,7 @@ OpenVINO™ toolkit provides a "throughput" mode that allows running multiple in Internally, the execution resources are split/pinned into execution "streams". Using this feature gains much better performance for the networks that originally are not scaled well with a number of threads (for example, lightweight topologies). This is especially pronounced for the many-core server machines. -Run the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) and play with number of infer requests running in parallel, next section. +Run the [Benchmark App](../../samples/cpp/benchmark_app/README.md) and play with number of infer requests running in parallel, next section. Try different values of the `-nstreams` argument from `1` to a number of CPU cores and find one that provides the best performance. The throughput mode relaxes the requirement to saturate the CPU by using a large batch: running multiple independent inference requests in parallel often gives much better performance, than using a batch only. @@ -78,7 +78,7 @@ This allows you to simplify the app-logic, as you don't need to combine multiple Instead, it is possible to keep a separate infer request per camera or another source of input and process the requests in parallel using Async API. ## Benchmark App -[Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample is the best performance reference. +[Benchmark App](../../samples/cpp/benchmark_app/README.md) sample is the best performance reference. It has a lot of device-specific knobs, but the primary usage is as simple as: ```bash $ ./benchmark_app –d GPU –m -i diff --git a/docs/IE_DG/Samples_Overview.md b/docs/IE_DG/Samples_Overview.md index 06c5472566e..35fae289667 100644 --- a/docs/IE_DG/Samples_Overview.md +++ b/docs/IE_DG/Samples_Overview.md @@ -10,35 +10,35 @@ After installation of Intel® Distribution of OpenVINO™ toolkit, С, C++ and P Inference Engine sample applications include the following: - **Speech Sample** - Acoustic model inference based on Kaldi neural networks and speech feature vectors. - - [Automatic Speech Recognition C++ Sample](../../inference-engine/samples/speech_sample/README.md) + - [Automatic Speech Recognition C++ Sample](../../samples/cpp/speech_sample/README.md) - [Automatic Speech Recognition Python Sample](../../samples/python/speech_sample/README.md) - **Benchmark Application** – Estimates deep learning inference performance on supported devices for synchronous and asynchronous modes. 
- - [Benchmark C++ Tool](../../inference-engine/samples/benchmark_app/README.md) + - [Benchmark C++ Tool](../../samples/cpp/benchmark_app/README.md) - [Benchmark Python Tool](../../tools/benchmark_tool/README.md) - **Hello Classification Sample** – Inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API. Input of any size and layout can be set to an infer request which will be pre-processed automatically during inference (the sample supports only images as inputs and supports Unicode paths). - - [Hello Classification C++ Sample](../../inference-engine/samples/hello_classification/README.md) + - [Hello Classification C++ Sample](../../samples/cpp/hello_classification/README.md) - [Hello Classification C Sample](../../samples/c/hello_classification/README.md) - [Hello Classification Python Sample](../../samples/python/hello_classification/README.md) - **Hello NV12 Input Classification Sample** – Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-process it automatically during inference. The sample supports only images as inputs. - - [Hello NV12 Input Classification C++ Sample](../../inference-engine/samples/hello_nv12_input_classification/README.md) + - [Hello NV12 Input Classification C++ Sample](../../samples/cpp/hello_nv12_input_classification/README.md) - [Hello NV12 Input Classification C Sample](../../samples/c/hello_nv12_input_classification/README.md) - **Hello Query Device Sample** – Query of available Inference Engine devices and their metrics, configuration values. - - [Hello Query Device C++ Sample](../../inference-engine/samples/hello_query_device/README.md) + - [Hello Query Device C++ Sample](../../samples/cpp/hello_query_device/README.md) - [Hello Query Device Python* Sample](../../samples/python/hello_query_device/README.md) - **Hello Reshape SSD Sample** – Inference of SSD networks resized by ShapeInfer API according to an input size. - - [Hello Reshape SSD C++ Sample**](../../inference-engine/samples/hello_reshape_ssd/README.md) + - [Hello Reshape SSD C++ Sample**](../../samples/cpp/hello_reshape_ssd/README.md) - [Hello Reshape SSD Python Sample**](../../samples/python/hello_reshape_ssd/README.md) - **Image Classification Sample Async** – Inference of image classification networks like AlexNet and GoogLeNet using Asynchronous Inference Request API (the sample supports only images as inputs). - - [Image Classification Async C++ Sample](../../inference-engine/samples/classification_sample_async/README.md) + - [Image Classification Async C++ Sample](../../samples/cpp/classification_sample_async/README.md) - [Image Classification Async Python* Sample](../../samples/python/classification_sample_async/README.md) - **Style Transfer Sample** – Style Transfer sample (the sample supports only images as inputs). - - [Style Transfer C++ Sample](../../inference-engine/samples/style_transfer_sample/README.md) + - [Style Transfer C++ Sample](../../samples/cpp/style_transfer_sample/README.md) - [Style Transfer Python* Sample](../../samples/python/style_transfer_sample/README.md) - **nGraph Function Creation Sample** – Construction of the LeNet network using the nGraph function creation sample. 
- - [nGraph Function Creation C++ Sample](../../inference-engine/samples/ngraph_function_creation_sample/README.md) + - [nGraph Function Creation C++ Sample](../../samples/cpp/ngraph_function_creation_sample/README.md) - [nGraph Function Creation Python Sample](../../samples/python/ngraph_function_creation_sample/README.md) - **Object Detection for SSD Sample** – Inference of object detection networks based on the SSD, this sample is simplified version that supports only images as inputs. - - [Object Detection SSD C++ Sample](../../inference-engine/samples/object_detection_sample_ssd/README.md) + - [Object Detection SSD C++ Sample](../../samples/cpp/object_detection_sample_ssd/README.md) - [Object Detection SSD C Sample](../../samples/c/object_detection_sample_ssd/README.md) - [Object Detection SSD Python* Sample](../../samples/python/object_detection_sample_ssd/README.md) diff --git a/docs/IE_DG/supported_plugins/AUTO.md b/docs/IE_DG/supported_plugins/AUTO.md index ce795fa06e3..99b5388094f 100644 --- a/docs/IE_DG/supported_plugins/AUTO.md +++ b/docs/IE_DG/supported_plugins/AUTO.md @@ -48,7 +48,7 @@ Auto-device supports query device optimization capabilities in metric; ### Enumerating Available Devices Inference Engine now features a dedicated API to enumerate devices and their capabilities. -See [Hello Query Device C++ Sample](../../../inference-engine/samples/hello_query_device/README.md). +See [Hello Query Device C++ Sample](../../../samples/cpp/hello_query_device/README.md). This is the example output from the sample (truncated to the devices' names only): ```sh diff --git a/docs/IE_DG/supported_plugins/CPU.md b/docs/IE_DG/supported_plugins/CPU.md index 12b005099ba..3ee93613cc9 100644 --- a/docs/IE_DG/supported_plugins/CPU.md +++ b/docs/IE_DG/supported_plugins/CPU.md @@ -99,7 +99,7 @@ CPU plugin removes a Power layer from a topology if it has the following paramet The plugin supports the configuration parameters listed below. All parameters must be set with the InferenceEngine::Core::LoadNetwork() method. When specifying key values as raw strings (that is, when using Python API), omit the `KEY_` prefix. -Refer to the OpenVINO samples for usage examples: [Benchmark App](../../../inference-engine/samples/benchmark_app/README.md). +Refer to the OpenVINO samples for usage examples: [Benchmark App](../../../samples/cpp/benchmark_app/README.md). These are general options, also supported by other plugins: diff --git a/docs/IE_DG/supported_plugins/GPU.md b/docs/IE_DG/supported_plugins/GPU.md index 28bee7d5316..1c4c17430bf 100644 --- a/docs/IE_DG/supported_plugins/GPU.md +++ b/docs/IE_DG/supported_plugins/GPU.md @@ -12,7 +12,7 @@ For an in-depth description of clDNN, see [Inference Engine source files](https: * "GPU" is an alias for "GPU.0" * If the system doesn't have an integrated GPU, then devices are enumerated starting from 0. -For demonstration purposes, see the [Hello Query Device C++ Sample](../../../inference-engine/samples/hello_query_device/README.md) that can print out the list of available devices with associated indices. Below is an example output (truncated to the device names only): +For demonstration purposes, see the [Hello Query Device C++ Sample](../../../samples/cpp/hello_query_device/README.md) that can print out the list of available devices with associated indices. 
Below is an example output (truncated to the device names only): ```sh ./hello_query_device diff --git a/docs/IE_DG/supported_plugins/MULTI.md b/docs/IE_DG/supported_plugins/MULTI.md index cebc03ba135..eabc36e4d0a 100644 --- a/docs/IE_DG/supported_plugins/MULTI.md +++ b/docs/IE_DG/supported_plugins/MULTI.md @@ -38,7 +38,7 @@ Notice that the priorities of the devices can be changed in real time for the ex Finally, there is a way to specify number of requests that the multi-device will internally keep for each device. Suppose your original app was running 4 cameras with 4 inference requests. You would probably want to share these 4 requests between 2 devices used in the MULTI. The easiest way is to specify a number of requests for each device using parentheses: "MULTI:CPU(2),GPU(2)" and use the same 4 requests in your app. However, such an explicit configuration is not performance-portable and hence not recommended. Instead, the better way is to configure the individual devices and query the resulting number of requests to be used at the application level (see [Configuring the Individual Devices and Creating the Multi-Device On Top](#configuring-the-individual-devices-and-creating-the-multi-device-on-top)). ## Enumerating Available Devices -Inference Engine now features a dedicated API to enumerate devices and their capabilities. See [Hello Query Device C++ Sample](../../../inference-engine/samples/hello_query_device/README.md). This is example output from the sample (truncated to the devices' names only): +Inference Engine now features a dedicated API to enumerate devices and their capabilities. See [Hello Query Device C++ Sample](../../../samples/cpp/hello_query_device/README.md). This is example output from the sample (truncated to the devices' names only): ```sh ./hello_query_device @@ -86,7 +86,7 @@ Notice that until R2 you had to calculate number of requests in your application ## Using the Multi-Device with OpenVINO Samples and Benchmarking the Performance Notice that every OpenVINO sample that supports "-d" (which stands for "device") command-line option transparently accepts the multi-device. -The [Benchmark Application](../../../inference-engine/samples/benchmark_app/README.md) is the best reference to the optimal usage of the multi-device. As discussed multiple times earlier, you don't need to setup number of requests, CPU streams or threads as the application provides optimal out of the box performance. +The [Benchmark Application](../../../samples/cpp/benchmark_app/README.md) is the best reference to the optimal usage of the multi-device. As discussed multiple times earlier, you don't need to setup number of requests, CPU streams or threads as the application provides optimal out of the box performance. Below is example command-line to evaluate HDDL+GPU performance with that: ```sh diff --git a/docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md b/docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md index f6709865b5c..3f79aa04626 100644 --- a/docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md +++ b/docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md @@ -14,7 +14,7 @@ The IR will have two inputs: `input` for data and `ivector` for ivectors. ## Example: Run ASpIRE Chain TDNN Model with the Speech Recognition Sample -These instructions show how to run the converted model with the [Speech Recognition sample](../../../../../inference-engine/samples/speech_sample/README.md). 
+These instructions show how to run the converted model with the [Speech Recognition sample](../../../../../samples/cpp/speech_sample/README.md). In this example, the input data contains one utterance from one speaker. To follow the steps described below, you must first do the following: @@ -109,4 +109,4 @@ speech_sample -i feats.ark,ivector_online_ie.ark -m final.xml -d CPU -o predicti ``` Results can be decoded as described in "Use of Sample in Kaldi* Speech Recognition Pipeline" chapter -in [the Speech Recognition Sample description](../../../../../inference-engine/samples/speech_sample/README.md). +in [the Speech Recognition Sample description](../../../../../samples/cpp/speech_sample/README.md). diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md index 076fe4716cc..3a3ae03f114 100644 --- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md +++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md @@ -59,7 +59,7 @@ For example, if you downloaded the [pre-trained SSD InceptionV2 topology](http:/ Inference Engine comes with a number of samples to infer Object Detection API models including: -* [Object Detection for SSD Sample](../../../../../inference-engine/samples/object_detection_sample_ssd/README.md) --- for RFCN, SSD and Faster R-CNNs +* [Object Detection for SSD Sample](../../../../../samples/cpp/object_detection_sample_ssd/README.md) --- for RFCN, SSD and Faster R-CNNs * [Mask R-CNN Sample for TensorFlow* Object Detection API Models](@ref omz_demos_mask_rcnn_demo_cpp) --- for Mask R-CNNs There are several important notes about feeding input images to the samples: diff --git a/docs/benchmarks/performance_benchmarks_faq.md b/docs/benchmarks/performance_benchmarks_faq.md index b833f03c531..e7fd00866b1 100644 --- a/docs/benchmarks/performance_benchmarks_faq.md +++ b/docs/benchmarks/performance_benchmarks_faq.md @@ -15,7 +15,7 @@ The models used in the performance benchmarks were chosen based on general adopt CF means Caffe*, while TF means TensorFlow*. #### 5. How can I run the benchmark results on my own? -All of the performance benchmarks were generated using the open-sourced tool within the Intel® Distribution of OpenVINO™ toolkit called `benchmark_app`, which is available in both [C++](../../inference-engine/samples/benchmark_app/README.md) and [Python](../../tools/benchmark_tool/README.md). +All of the performance benchmarks were generated using the open-sourced tool within the Intel® Distribution of OpenVINO™ toolkit called `benchmark_app`, which is available in both [C++](../../samples/cpp/benchmark_app/README.md) and [Python](../../tools/benchmark_tool/README.md). #### 6. What image sizes are used for the classification network models? The image size used in the inference depends on the network being benchmarked. The following table shows the list of input sizes for each network model. 
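The FAQ answer above points to the relocated C++ `benchmark_app`; the tool is invoked exactly as before, only its source path changed. Below is a minimal sketch of reproducing a measurement with it, assuming a hypothetical `model.xml` IR in the current directory (the flags mirror the ones already shown elsewhere in these docs):

```sh
# Hypothetical IR path; substitute a model converted by the Model Optimizer.
# Asynchronous API with averaged per-layer counters written to pc_report_dir.
./benchmark_app -m model.xml -d CPU -api async -niter 100 -report_type average_counters -report_folder pc_report_dir
```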
diff --git a/docs/doxygen/doxygen-ignore.txt b/docs/doxygen/doxygen-ignore.txt index 023d555cf04..f623f4826e1 100644 --- a/docs/doxygen/doxygen-ignore.txt +++ b/docs/doxygen/doxygen-ignore.txt @@ -1,4 +1,4 @@ -openvino/inference-engine/samples/hello_reshape_ssd/README.md +openvino/samples/cpp/hello_reshape_ssd/README.md openvino/docs/index.md inference-engine/include/ie_icnn_network.hpp openvino/docs/get_started/get_started_dl_workbench.md diff --git a/docs/get_started/get_started_linux.md b/docs/get_started/get_started_linux.md index 10b1b79aebe..ee99e45f8cc 100644 --- a/docs/get_started/get_started_linux.md +++ b/docs/get_started/get_started_linux.md @@ -113,7 +113,7 @@ When the script completes, you see the label and confidence for the top-10 categ Top 10 results: -Image /home/user/dldt/inference-engine/samples/sample_data/car.png +Image /home/user/openvino/samples/cpp/sample_data/car.png classid probability label ------- ----------- ----- @@ -366,7 +366,7 @@ When the Sample Application completes, you see the label and confidence for the ```sh Top 10 results: -Image /home/user/dldt/inference-engine/samples/sample_data/car.png +Image /home/user/openvino/samples/cpp/sample_data/car.png classid probability label ------- ----------- ----- diff --git a/docs/optimization_guide/dldt_optimization_guide.md b/docs/optimization_guide/dldt_optimization_guide.md index 8b68d53c0fc..8ad2a9fc558 100644 --- a/docs/optimization_guide/dldt_optimization_guide.md +++ b/docs/optimization_guide/dldt_optimization_guide.md @@ -53,9 +53,9 @@ When evaluating performance of your model with the Inference Engine, you must me In the asynchronous case (see Request-Based API and “GetBlob” Idiom), the performance of an individual infer request is usually of less concern. Instead, you typically execute multiple requests asynchronously and measure the throughput in images per second by dividing the number of images that were processed by the processing time. In contrast, for latency-oriented tasks, the time to a single frame is more important. -Refer to the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample, which allows latency vs. throughput measuring. +Refer to the [Benchmark App](../../samples/cpp/benchmark_app/README.md) sample, which allows latency vs. throughput measuring. -> **NOTE**: The [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample also supports batching, that is, automatically packing multiple input images into a single request. However, high batch size results in a latency penalty. So for more real-time oriented usages, batch sizes that are as low as a single input are usually used. Still, devices like CPU, Intel®Movidius™ Myriad™ 2 VPU, Intel® Movidius™ Myriad™ X VPU, or Intel® Vision Accelerator Design with Intel® Movidius™ VPU require a number of parallel requests instead of batching to leverage the performance. Running multiple requests should be coupled with a device configured to the corresponding number of streams. See details on CPU streams for an example. +> **NOTE**: The [Benchmark App](../../samples/cpp/benchmark_app/README.md) sample also supports batching, that is, automatically packing multiple input images into a single request. However, high batch size results in a latency penalty. So for more real-time oriented usages, batch sizes that are as low as a single input are usually used. 
Still, devices like CPU, Intel®Movidius™ Myriad™ 2 VPU, Intel® Movidius™ Myriad™ X VPU, or Intel® Vision Accelerator Design with Intel® Movidius™ VPU require a number of parallel requests instead of batching to leverage the performance. Running multiple requests should be coupled with a device configured to the corresponding number of streams. See details on CPU streams for an example. [OpenVINO™ Deep Learning Workbench tool](https://docs.openvinotoolkit.org/latest/workbench_docs_Workbench_DG_Introduction.html) provides throughput versus latency charts for different numbers of streams, requests, and batch sizes to find the performance sweet spot. @@ -63,7 +63,7 @@ Refer to the [Benchmark App](../../inference-engine/samples/benchmark_app/README When comparing the Inference Engine performance with the framework or another reference code, make sure that both versions are as similar as possible: -- Wrap exactly the inference execution (refer to the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample for an example). +- Wrap exactly the inference execution (refer to the [Benchmark App](../../samples/cpp/benchmark_app/README.md) sample for an example). - Track model loading time separately. - Ensure the inputs are identical for the Inference Engine and the framework. For example, Caffe\* allows you to auto-populate the input with random values. Notice that it might give different performance than on real images. - Similarly, for correct performance comparison, make sure the access pattern, for example, input layouts, is optimal for Inference Engine (currently, it is NCHW). @@ -79,7 +79,7 @@ You need to build your performance conclusions on reproducible data. Do the perf - If the warm-up run does not help or execution time still varies, you can try running a large number of iterations and then average or find a mean of the results. - For time values that range too much, use geomean. -Refer to the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) for code examples of performance measurements. Almost every sample, except interactive demos, has the `-ni` option to specify the number of iterations. +Refer to the [Benchmark App](../../samples/cpp/benchmark_app/README.md) for code examples of performance measurements. Almost every sample, except interactive demos, has the `-ni` option to specify the number of iterations. ## Model Optimizer Knobs Related to Performance @@ -121,9 +121,9 @@ for the multi-device execution: (e.g., the number of request in the flight is not enough to saturate all devices). - It is highly recommended to query the optimal number of inference requests directly from the instance of the ExecutionNetwork (resulted from the LoadNetwork call with the specific multi-device configuration as a parameter). -Please refer to the code of the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample for details. +Please refer to the code of the [Benchmark App](../../samples/cpp/benchmark_app/README.md) sample for details. - Notice that for example CPU+GPU execution performs better with certain knobs - which you can find in the code of the same [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample. + which you can find in the code of the same [Benchmark App](../../samples/cpp/benchmark_app/README.md) sample. 
One specific example is disabling GPU driver polling, which in turn requires multiple GPU streams (which is already a default for the GPU) to amortize slower inference completion from the device to the host. - Multi-device logic always attempts to save on the (e.g., inputs) data copies between device-agnostic, user-facing inference requests @@ -169,7 +169,7 @@ This feature usually provides much better performance for the networks than batc Compared with the batching, the parallelism is somewhat transposed (i.e. performed over inputs, and much less within CNN ops): ![](../img/cpu_streams_explained.png) -Try the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample and play with the number of streams running in parallel. The rule of thumb is tying up to a number of CPU cores on your machine. +Try the [Benchmark App](../../samples/cpp/benchmark_app/README.md) sample and play with the number of streams running in parallel. The rule of thumb is tying up to a number of CPU cores on your machine. For example, on an 8-core CPU, compare the `-nstreams 1` (which is a legacy, latency-oriented scenario) to the 2, 4, and 8 streams. Notice that on a multi-socket machine, the bare minimum of streams for a latency scenario equals the number of sockets. @@ -190,13 +190,13 @@ Inference Engine relies on the [Compute Library for Deep Neural Networks (clDNN) - If your application is simultaneously using the inference on the CPU or otherwise loads the host heavily, make sure that the OpenCL driver threads do not starve. You can use [CPU configuration options](../IE_DG/supported_plugins/CPU.md) to limit number of inference threads for the CPU plugin. - In the GPU-only scenario, a GPU driver might occupy a CPU core with spin-looped polling for completion. If the _CPU_ utilization is a concern, consider the `KEY_CLDND_PLUGIN_THROTTLE` configuration option. -> **NOTE**: See the [Benchmark App Sample](../../inference-engine/samples/benchmark_app/README.md) code for a usage example. +> **NOTE**: See the [Benchmark App Sample](../../samples/cpp/benchmark_app/README.md) code for a usage example. Notice that while disabling the polling, this option might reduce the GPU performance, so usually this option is used with multiple [GPU streams](../IE_DG/supported_plugins/GPU.md). ### Intel® Movidius™ Myriad™ X Visual Processing Unit and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs -Since Intel® Movidius™ Myriad™ X Visual Processing Unit (Intel® Movidius™ Myriad™ 2 VPU) communicates with the host over USB, minimum four infer requests in flight are recommended to hide the data transfer costs. See Request-Based API and “GetBlob” Idiom and [Benchmark App Sample](../../inference-engine/samples/benchmark_app/README.md) for more information. +Since Intel® Movidius™ Myriad™ X Visual Processing Unit (Intel® Movidius™ Myriad™ 2 VPU) communicates with the host over USB, minimum four infer requests in flight are recommended to hide the data transfer costs. See Request-Based API and “GetBlob” Idiom and [Benchmark App Sample](../../samples/cpp/benchmark_app/README.md) for more information. Intel® Vision Accelerator Design with Intel® Movidius™ VPUs requires keeping at least 32 inference requests in flight to fully saturate the device. @@ -240,7 +240,7 @@ For general details on the heterogeneous plugin, refer to the [corresponding sec Every Inference Engine sample supports the `-d` (device) option. 
-For example, here is a command to run an [Object Detection Sample SSD Sample](../../inference-engine/samples/object_detection_sample_ssd/README.md): +For example, here is a command to run an [Object Detection Sample SSD Sample](../../samples/cpp/object_detection_sample_ssd/README.md): ```sh ./object_detection_sample_ssd -m /ModelSSD.xml -i /picture.jpg -d HETERO:GPU,CPU @@ -284,7 +284,7 @@ You can use the GraphViz\* utility or `.dot` converters (for example, to `.png` ![](../img/output_trimmed.png) -You can also use performance data (in the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md), it is an option `-pc`) to get performance data on each subgraph. Again, refer to the [HETERO plugin documentation](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_HETERO.html#analyzing_heterogeneous_execution) and to Internal Inference Performance Counters for information on general counters. +You can also use performance data (in the [Benchmark App](../../samples/cpp/benchmark_app/README.md), it is an option `-pc`) to get performance data on each subgraph. Again, refer to the [HETERO plugin documentation](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_HETERO.html#analyzing_heterogeneous_execution) and to Internal Inference Performance Counters for information on general counters. ## Optimizing Custom Kernels @@ -430,7 +430,7 @@ There are important performance caveats though: for example, the tasks that run Also, if the inference is performed on the graphics processing unit (GPU), there is little gain in doing the encoding of the resulting video on the same GPU in parallel, for instance, because the device is already busy. -Refer to the [Object Detection SSD Demo](@ref omz_demos_object_detection_demo_cpp) (latency-oriented Async API showcase) and [Benchmark App Sample](../../inference-engine/samples/benchmark_app/README.md) (which has both latency and throughput-oriented modes) for complete examples of the Async API in action. +Refer to the [Object Detection SSD Demo](@ref omz_demos_object_detection_demo_cpp) (latency-oriented Async API showcase) and [Benchmark App Sample](../../samples/cpp/benchmark_app/README.md) (which has both latency and throughput-oriented modes) for complete examples of the Async API in action. 
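As a quick illustration of the latency- vs. throughput-oriented modes mentioned above, the relocated C++ Benchmark App can be driven in either direction from the command line. This is only a sketch with a placeholder `model.xml` and an illustrative stream count; choose `-nstreams` as suggested earlier, from 1 up to the number of CPU cores:

```sh
# Latency-oriented run: synchronous API, a single stream (legacy scenario).
./benchmark_app -m model.xml -d CPU -api sync -nstreams 1
# Throughput-oriented run: Async API with several parallel streams and requests.
./benchmark_app -m model.xml -d CPU -api async -nstreams 4
```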
## Using Tools diff --git a/inference-engine/CMakeLists.txt b/inference-engine/CMakeLists.txt index 942907895b4..71d80f1f974 100644 --- a/inference-engine/CMakeLists.txt +++ b/inference-engine/CMakeLists.txt @@ -12,54 +12,7 @@ if(ENABLE_PYTHON) add_subdirectory(ie_bridges/python) endif() -add_subdirectory(samples) - -# TODO: remove this -foreach(sample benchmark_app classification_sample_async hello_classification - hello_nv12_input_classification hello_query_device hello_reshape_ssd - ngraph_function_creation_sample object_detection_sample_ssd - speech_sample style_transfer_sample) - if(TARGET ${sample}) - install(TARGETS ${sample} - RUNTIME DESTINATION tests COMPONENT tests EXCLUDE_FROM_ALL) - endif() -endforeach() - -if(TARGET format_reader) - install(TARGETS format_reader - RUNTIME DESTINATION ${IE_CPACK_RUNTIME_PATH} COMPONENT tests EXCLUDE_FROM_ALL - LIBRARY DESTINATION ${IE_CPACK_LIBRARY_PATH} COMPONENT tests EXCLUDE_FROM_ALL) -endif() - -openvino_developer_export_targets(COMPONENT openvino_common TARGETS format_reader ie_samples_utils) - if(ENABLE_TESTS) add_subdirectory(tests_deprecated) add_subdirectory(tests) endif() - -# -# Install -# - -# install C++ samples - -ie_cpack_add_component(cpp_samples DEPENDS cpp_samples_deps core) - -if(UNIX) - install(DIRECTORY samples/ - DESTINATION samples/cpp - COMPONENT cpp_samples - USE_SOURCE_PERMISSIONS - PATTERN *.bat EXCLUDE - PATTERN speech_libs_and_demos EXCLUDE - PATTERN .clang-format EXCLUDE) -elseif(WIN32) - install(DIRECTORY samples/ - DESTINATION samples/cpp - COMPONENT cpp_samples - USE_SOURCE_PERMISSIONS - PATTERN *.sh EXCLUDE - PATTERN speech_libs_and_demos EXCLUDE - PATTERN .clang-format EXCLUDE) -endif() diff --git a/samples/CMakeLists.txt b/samples/CMakeLists.txt index 31a71211b37..ab1e43ddc12 100644 --- a/samples/CMakeLists.txt +++ b/samples/CMakeLists.txt @@ -2,6 +2,27 @@ # SPDX-License-Identifier: Apache-2.0 # +add_subdirectory(cpp) + +# TODO: remove this +foreach(sample benchmark_app classification_sample_async hello_classification + hello_nv12_input_classification hello_query_device hello_reshape_ssd + ngraph_function_creation_sample object_detection_sample_ssd + speech_sample style_transfer_sample) + if(TARGET ${sample}) + install(TARGETS ${sample} + RUNTIME DESTINATION tests COMPONENT tests EXCLUDE_FROM_ALL) + endif() +endforeach() + +if(TARGET format_reader) + install(TARGETS format_reader + RUNTIME DESTINATION ${IE_CPACK_RUNTIME_PATH} COMPONENT tests EXCLUDE_FROM_ALL + LIBRARY DESTINATION ${IE_CPACK_LIBRARY_PATH} COMPONENT tests EXCLUDE_FROM_ALL) +endif() + +openvino_developer_export_targets(COMPONENT openvino_common TARGETS format_reader ie_samples_utils) + add_subdirectory(c) # TODO: remove this @@ -18,17 +39,40 @@ if(TARGET opencv_c_wrapper) RUNTIME DESTINATION ${IE_CPACK_RUNTIME_PATH} COMPONENT tests EXCLUDE_FROM_ALL LIBRARY DESTINATION ${IE_CPACK_LIBRARY_PATH} COMPONENT tests EXCLUDE_FROM_ALL) endif() +# +# Install +# + +# install C++ samples + +ie_cpack_add_component(cpp_samples DEPENDS cpp_samples_deps core) + +if(UNIX) + install(DIRECTORY cpp/ + DESTINATION samples/cpp + COMPONENT cpp_samples + USE_SOURCE_PERMISSIONS + PATTERN *.bat EXCLUDE + PATTERN .clang-format EXCLUDE) +elseif(WIN32) + install(DIRECTORY cpp/ + DESTINATION samples/cpp + COMPONENT cpp_samples + USE_SOURCE_PERMISSIONS + PATTERN *.sh EXCLUDE + PATTERN .clang-format EXCLUDE) +endif() # install C samples ie_cpack_add_component(c_samples DEPENDS core_c) if(UNIX) - install(PROGRAMS ${IE_MAIN_SOURCE_DIR}/samples/build_samples.sh + install(PROGRAMS 
cpp/build_samples.sh DESTINATION samples/c COMPONENT c_samples) elseif(WIN32) - install(PROGRAMS ${IE_MAIN_SOURCE_DIR}/samples/build_samples_msvc.bat + install(PROGRAMS cpp/build_samples_msvc.bat DESTINATION samples/c COMPONENT c_samples) endif() @@ -39,7 +83,7 @@ install(DIRECTORY c PATTERN c/CMakeLists.txt EXCLUDE PATTERN c/.clang-format EXCLUDE) -install(FILES ${IE_MAIN_SOURCE_DIR}/samples/CMakeLists.txt +install(FILES cpp/CMakeLists.txt DESTINATION samples/c COMPONENT c_samples) diff --git a/samples/c/CMakeLists.txt b/samples/c/CMakeLists.txt index e7e91882aa1..439b7528dc5 100644 --- a/samples/c/CMakeLists.txt +++ b/samples/c/CMakeLists.txt @@ -2,4 +2,4 @@ # SPDX-License-Identifier: Apache-2.0 # -include("${InferenceEngine_SOURCE_DIR}/samples/CMakeLists.txt") +include("${OpenVINO_SOURCE_DIR}/samples/cpp/CMakeLists.txt") diff --git a/samples/c/hello_classification/README.md b/samples/c/hello_classification/README.md index 00e75e1c350..cb51e675be7 100644 --- a/samples/c/hello_classification/README.md +++ b/samples/c/hello_classification/README.md @@ -18,7 +18,7 @@ Hello Classification C sample application demonstrates how to use the following | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx) | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png) | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) | -| Other language realization | [C++](../../../inference-engine/samples/hello_classification/README.md), [Python](../../python/hello_classification/README.md) | +| Other language realization | [C++](../../../samples/cpp/hello_classification/README.md), [Python](../../python/hello_classification/README.md) | ## How It Works diff --git a/samples/c/hello_nv12_input_classification/README.md b/samples/c/hello_nv12_input_classification/README.md index c07cdca8b3e..4f536beb6dc 100644 --- a/samples/c/hello_nv12_input_classification/README.md +++ b/samples/c/hello_nv12_input_classification/README.md @@ -16,7 +16,7 @@ Basic Inference Engine API is covered by [Hello Classification C sample](../hell | Model Format | Inference Engine Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx) | Validated images | An uncompressed image in the NV12 color format - \*.yuv | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) | -| Other language realization | [C++](../../../inference-engine/samples/hello_nv12_input_classification/README.md) | +| Other language realization | [C++](../../../samples/cpp/hello_nv12_input_classification/README.md) | ## How It Works diff --git a/samples/c/object_detection_sample_ssd/README.md b/samples/c/object_detection_sample_ssd/README.md index 1506390b9fe..52f2d036266 100644 --- a/samples/c/object_detection_sample_ssd/README.md +++ b/samples/c/object_detection_sample_ssd/README.md @@ -24,7 +24,7 @@ Basic Inference Engine API is covered by [Hello Classification C sample](../hell | Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) | Validated images | The sample uses OpenCV* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (.bmp, .png, .jpg) | Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) | -| Other language realization | 
[C++](../../../inference-engine/samples/object_detection_sample_ssd/README.md), [Python](../../python/object_detection_sample_ssd/README.md) | +| Other language realization | [C++](../../../samples/cpp/object_detection_sample_ssd/README.md), [Python](../../python/object_detection_sample_ssd/README.md) | ## How It Works diff --git a/inference-engine/samples/.clang-format b/samples/cpp/.clang-format similarity index 100% rename from inference-engine/samples/.clang-format rename to samples/cpp/.clang-format diff --git a/inference-engine/samples/CMakeLists.txt b/samples/cpp/CMakeLists.txt similarity index 100% rename from inference-engine/samples/CMakeLists.txt rename to samples/cpp/CMakeLists.txt diff --git a/inference-engine/samples/benchmark_app/CMakeLists.txt b/samples/cpp/benchmark_app/CMakeLists.txt similarity index 100% rename from inference-engine/samples/benchmark_app/CMakeLists.txt rename to samples/cpp/benchmark_app/CMakeLists.txt diff --git a/inference-engine/samples/benchmark_app/README.md b/samples/cpp/benchmark_app/README.md similarity index 100% rename from inference-engine/samples/benchmark_app/README.md rename to samples/cpp/benchmark_app/README.md diff --git a/inference-engine/samples/benchmark_app/benchmark_app.hpp b/samples/cpp/benchmark_app/benchmark_app.hpp similarity index 100% rename from inference-engine/samples/benchmark_app/benchmark_app.hpp rename to samples/cpp/benchmark_app/benchmark_app.hpp diff --git a/inference-engine/samples/benchmark_app/infer_request_wrap.hpp b/samples/cpp/benchmark_app/infer_request_wrap.hpp similarity index 100% rename from inference-engine/samples/benchmark_app/infer_request_wrap.hpp rename to samples/cpp/benchmark_app/infer_request_wrap.hpp diff --git a/inference-engine/samples/benchmark_app/inputs_filling.cpp b/samples/cpp/benchmark_app/inputs_filling.cpp similarity index 100% rename from inference-engine/samples/benchmark_app/inputs_filling.cpp rename to samples/cpp/benchmark_app/inputs_filling.cpp diff --git a/inference-engine/samples/benchmark_app/inputs_filling.hpp b/samples/cpp/benchmark_app/inputs_filling.hpp similarity index 100% rename from inference-engine/samples/benchmark_app/inputs_filling.hpp rename to samples/cpp/benchmark_app/inputs_filling.hpp diff --git a/inference-engine/samples/benchmark_app/main.cpp b/samples/cpp/benchmark_app/main.cpp similarity index 100% rename from inference-engine/samples/benchmark_app/main.cpp rename to samples/cpp/benchmark_app/main.cpp diff --git a/inference-engine/samples/benchmark_app/progress_bar.hpp b/samples/cpp/benchmark_app/progress_bar.hpp similarity index 100% rename from inference-engine/samples/benchmark_app/progress_bar.hpp rename to samples/cpp/benchmark_app/progress_bar.hpp diff --git a/inference-engine/samples/benchmark_app/remote_blobs_filling.cpp b/samples/cpp/benchmark_app/remote_blobs_filling.cpp similarity index 100% rename from inference-engine/samples/benchmark_app/remote_blobs_filling.cpp rename to samples/cpp/benchmark_app/remote_blobs_filling.cpp diff --git a/inference-engine/samples/benchmark_app/remote_blobs_filling.hpp b/samples/cpp/benchmark_app/remote_blobs_filling.hpp similarity index 100% rename from inference-engine/samples/benchmark_app/remote_blobs_filling.hpp rename to samples/cpp/benchmark_app/remote_blobs_filling.hpp diff --git a/inference-engine/samples/benchmark_app/statistics_report.cpp b/samples/cpp/benchmark_app/statistics_report.cpp similarity index 100% rename from inference-engine/samples/benchmark_app/statistics_report.cpp rename to 
samples/cpp/benchmark_app/statistics_report.cpp diff --git a/inference-engine/samples/benchmark_app/statistics_report.hpp b/samples/cpp/benchmark_app/statistics_report.hpp similarity index 100% rename from inference-engine/samples/benchmark_app/statistics_report.hpp rename to samples/cpp/benchmark_app/statistics_report.hpp diff --git a/inference-engine/samples/benchmark_app/utils.cpp b/samples/cpp/benchmark_app/utils.cpp similarity index 100% rename from inference-engine/samples/benchmark_app/utils.cpp rename to samples/cpp/benchmark_app/utils.cpp diff --git a/inference-engine/samples/benchmark_app/utils.hpp b/samples/cpp/benchmark_app/utils.hpp similarity index 100% rename from inference-engine/samples/benchmark_app/utils.hpp rename to samples/cpp/benchmark_app/utils.hpp diff --git a/inference-engine/samples/build_samples.sh b/samples/cpp/build_samples.sh similarity index 100% rename from inference-engine/samples/build_samples.sh rename to samples/cpp/build_samples.sh diff --git a/inference-engine/samples/build_samples_msvc.bat b/samples/cpp/build_samples_msvc.bat similarity index 100% rename from inference-engine/samples/build_samples_msvc.bat rename to samples/cpp/build_samples_msvc.bat diff --git a/inference-engine/samples/classification_sample_async/CMakeLists.txt b/samples/cpp/classification_sample_async/CMakeLists.txt similarity index 100% rename from inference-engine/samples/classification_sample_async/CMakeLists.txt rename to samples/cpp/classification_sample_async/CMakeLists.txt diff --git a/inference-engine/samples/classification_sample_async/README.md b/samples/cpp/classification_sample_async/README.md similarity index 100% rename from inference-engine/samples/classification_sample_async/README.md rename to samples/cpp/classification_sample_async/README.md diff --git a/inference-engine/samples/classification_sample_async/classification_sample_async.h b/samples/cpp/classification_sample_async/classification_sample_async.h similarity index 100% rename from inference-engine/samples/classification_sample_async/classification_sample_async.h rename to samples/cpp/classification_sample_async/classification_sample_async.h diff --git a/inference-engine/samples/classification_sample_async/main.cpp b/samples/cpp/classification_sample_async/main.cpp similarity index 100% rename from inference-engine/samples/classification_sample_async/main.cpp rename to samples/cpp/classification_sample_async/main.cpp diff --git a/inference-engine/samples/common/format_reader/CMakeLists.txt b/samples/cpp/common/format_reader/CMakeLists.txt similarity index 100% rename from inference-engine/samples/common/format_reader/CMakeLists.txt rename to samples/cpp/common/format_reader/CMakeLists.txt diff --git a/inference-engine/samples/common/format_reader/MnistUbyte.cpp b/samples/cpp/common/format_reader/MnistUbyte.cpp similarity index 100% rename from inference-engine/samples/common/format_reader/MnistUbyte.cpp rename to samples/cpp/common/format_reader/MnistUbyte.cpp diff --git a/inference-engine/samples/common/format_reader/MnistUbyte.h b/samples/cpp/common/format_reader/MnistUbyte.h similarity index 100% rename from inference-engine/samples/common/format_reader/MnistUbyte.h rename to samples/cpp/common/format_reader/MnistUbyte.h diff --git a/inference-engine/samples/common/format_reader/bmp.cpp b/samples/cpp/common/format_reader/bmp.cpp similarity index 100% rename from inference-engine/samples/common/format_reader/bmp.cpp rename to samples/cpp/common/format_reader/bmp.cpp diff --git 
a/inference-engine/samples/common/format_reader/bmp.h b/samples/cpp/common/format_reader/bmp.h
similarity index 100%
rename from inference-engine/samples/common/format_reader/bmp.h
rename to samples/cpp/common/format_reader/bmp.h
diff --git a/inference-engine/samples/common/format_reader/format_reader.cpp b/samples/cpp/common/format_reader/format_reader.cpp
similarity index 100%
rename from inference-engine/samples/common/format_reader/format_reader.cpp
rename to samples/cpp/common/format_reader/format_reader.cpp
diff --git a/inference-engine/samples/common/format_reader/format_reader.h b/samples/cpp/common/format_reader/format_reader.h
similarity index 100%
rename from inference-engine/samples/common/format_reader/format_reader.h
rename to samples/cpp/common/format_reader/format_reader.h
diff --git a/inference-engine/samples/common/format_reader/format_reader_ptr.h b/samples/cpp/common/format_reader/format_reader_ptr.h
similarity index 100%
rename from inference-engine/samples/common/format_reader/format_reader_ptr.h
rename to samples/cpp/common/format_reader/format_reader_ptr.h
diff --git a/inference-engine/samples/common/format_reader/opencv_wrapper.cpp b/samples/cpp/common/format_reader/opencv_wrapper.cpp
similarity index 100%
rename from inference-engine/samples/common/format_reader/opencv_wrapper.cpp
rename to samples/cpp/common/format_reader/opencv_wrapper.cpp
diff --git a/inference-engine/samples/common/format_reader/opencv_wrapper.h b/samples/cpp/common/format_reader/opencv_wrapper.h
similarity index 100%
rename from inference-engine/samples/common/format_reader/opencv_wrapper.h
rename to samples/cpp/common/format_reader/opencv_wrapper.h
diff --git a/inference-engine/samples/common/format_reader/register.h b/samples/cpp/common/format_reader/register.h
similarity index 100%
rename from inference-engine/samples/common/format_reader/register.h
rename to samples/cpp/common/format_reader/register.h
diff --git a/inference-engine/samples/common/utils/CMakeLists.txt b/samples/cpp/common/utils/CMakeLists.txt
similarity index 100%
rename from inference-engine/samples/common/utils/CMakeLists.txt
rename to samples/cpp/common/utils/CMakeLists.txt
diff --git a/inference-engine/samples/common/utils/include/samples/args_helper.hpp b/samples/cpp/common/utils/include/samples/args_helper.hpp
similarity index 100%
rename from inference-engine/samples/common/utils/include/samples/args_helper.hpp
rename to samples/cpp/common/utils/include/samples/args_helper.hpp
diff --git a/inference-engine/samples/common/utils/include/samples/classification_results.h b/samples/cpp/common/utils/include/samples/classification_results.h
similarity index 100%
rename from inference-engine/samples/common/utils/include/samples/classification_results.h
rename to samples/cpp/common/utils/include/samples/classification_results.h
diff --git a/inference-engine/samples/common/utils/include/samples/common.hpp b/samples/cpp/common/utils/include/samples/common.hpp
similarity index 100%
rename from inference-engine/samples/common/utils/include/samples/common.hpp
rename to samples/cpp/common/utils/include/samples/common.hpp
diff --git a/inference-engine/samples/common/utils/include/samples/console_progress.hpp b/samples/cpp/common/utils/include/samples/console_progress.hpp
similarity index 100%
rename from inference-engine/samples/common/utils/include/samples/console_progress.hpp
rename to samples/cpp/common/utils/include/samples/console_progress.hpp
diff --git a/inference-engine/samples/common/utils/include/samples/csv_dumper.hpp b/samples/cpp/common/utils/include/samples/csv_dumper.hpp
similarity index 100%
rename from inference-engine/samples/common/utils/include/samples/csv_dumper.hpp
rename to samples/cpp/common/utils/include/samples/csv_dumper.hpp
diff --git a/inference-engine/samples/common/utils/include/samples/ocv_common.hpp b/samples/cpp/common/utils/include/samples/ocv_common.hpp
similarity index 100%
rename from inference-engine/samples/common/utils/include/samples/ocv_common.hpp
rename to samples/cpp/common/utils/include/samples/ocv_common.hpp
diff --git a/inference-engine/samples/common/utils/include/samples/os/windows/w_dirent.h b/samples/cpp/common/utils/include/samples/os/windows/w_dirent.h
similarity index 100%
rename from inference-engine/samples/common/utils/include/samples/os/windows/w_dirent.h
rename to samples/cpp/common/utils/include/samples/os/windows/w_dirent.h
diff --git a/inference-engine/samples/common/utils/include/samples/slog.hpp b/samples/cpp/common/utils/include/samples/slog.hpp
similarity index 100%
rename from inference-engine/samples/common/utils/include/samples/slog.hpp
rename to samples/cpp/common/utils/include/samples/slog.hpp
diff --git a/inference-engine/samples/common/utils/include/samples/vpu/vpu_tools_common.hpp b/samples/cpp/common/utils/include/samples/vpu/vpu_tools_common.hpp
similarity index 100%
rename from inference-engine/samples/common/utils/include/samples/vpu/vpu_tools_common.hpp
rename to samples/cpp/common/utils/include/samples/vpu/vpu_tools_common.hpp
diff --git a/inference-engine/samples/common/utils/src/args_helper.cpp b/samples/cpp/common/utils/src/args_helper.cpp
similarity index 100%
rename from inference-engine/samples/common/utils/src/args_helper.cpp
rename to samples/cpp/common/utils/src/args_helper.cpp
diff --git a/inference-engine/samples/common/utils/src/common.cpp b/samples/cpp/common/utils/src/common.cpp
similarity index 100%
rename from inference-engine/samples/common/utils/src/common.cpp
rename to samples/cpp/common/utils/src/common.cpp
diff --git a/inference-engine/samples/common/utils/src/slog.cpp b/samples/cpp/common/utils/src/slog.cpp
similarity index 100%
rename from inference-engine/samples/common/utils/src/slog.cpp
rename to samples/cpp/common/utils/src/slog.cpp
diff --git a/inference-engine/samples/hello_classification/CMakeLists.txt b/samples/cpp/hello_classification/CMakeLists.txt
similarity index 100%
rename from inference-engine/samples/hello_classification/CMakeLists.txt
rename to samples/cpp/hello_classification/CMakeLists.txt
diff --git a/inference-engine/samples/hello_classification/README.md b/samples/cpp/hello_classification/README.md
similarity index 100%
rename from inference-engine/samples/hello_classification/README.md
rename to samples/cpp/hello_classification/README.md
diff --git a/inference-engine/samples/hello_classification/main.cpp b/samples/cpp/hello_classification/main.cpp
similarity index 100%
rename from inference-engine/samples/hello_classification/main.cpp
rename to samples/cpp/hello_classification/main.cpp
diff --git a/inference-engine/samples/hello_nv12_input_classification/CMakeLists.txt b/samples/cpp/hello_nv12_input_classification/CMakeLists.txt
similarity index 100%
rename from inference-engine/samples/hello_nv12_input_classification/CMakeLists.txt
rename to samples/cpp/hello_nv12_input_classification/CMakeLists.txt
diff --git a/inference-engine/samples/hello_nv12_input_classification/README.md b/samples/cpp/hello_nv12_input_classification/README.md
similarity index 100%
rename from inference-engine/samples/hello_nv12_input_classification/README.md
rename to samples/cpp/hello_nv12_input_classification/README.md
diff --git a/inference-engine/samples/hello_nv12_input_classification/main.cpp b/samples/cpp/hello_nv12_input_classification/main.cpp
similarity index 100%
rename from inference-engine/samples/hello_nv12_input_classification/main.cpp
rename to samples/cpp/hello_nv12_input_classification/main.cpp
diff --git a/inference-engine/samples/hello_query_device/CMakeLists.txt b/samples/cpp/hello_query_device/CMakeLists.txt
similarity index 100%
rename from inference-engine/samples/hello_query_device/CMakeLists.txt
rename to samples/cpp/hello_query_device/CMakeLists.txt
diff --git a/inference-engine/samples/hello_query_device/README.md b/samples/cpp/hello_query_device/README.md
similarity index 100%
rename from inference-engine/samples/hello_query_device/README.md
rename to samples/cpp/hello_query_device/README.md
diff --git a/inference-engine/samples/hello_query_device/main.cpp b/samples/cpp/hello_query_device/main.cpp
similarity index 100%
rename from inference-engine/samples/hello_query_device/main.cpp
rename to samples/cpp/hello_query_device/main.cpp
diff --git a/inference-engine/samples/hello_reshape_ssd/CMakeLists.txt b/samples/cpp/hello_reshape_ssd/CMakeLists.txt
similarity index 100%
rename from inference-engine/samples/hello_reshape_ssd/CMakeLists.txt
rename to samples/cpp/hello_reshape_ssd/CMakeLists.txt
diff --git a/inference-engine/samples/hello_reshape_ssd/README.md b/samples/cpp/hello_reshape_ssd/README.md
similarity index 100%
rename from inference-engine/samples/hello_reshape_ssd/README.md
rename to samples/cpp/hello_reshape_ssd/README.md
diff --git a/inference-engine/samples/hello_reshape_ssd/main.cpp b/samples/cpp/hello_reshape_ssd/main.cpp
similarity index 100%
rename from inference-engine/samples/hello_reshape_ssd/main.cpp
rename to samples/cpp/hello_reshape_ssd/main.cpp
diff --git a/inference-engine/samples/hello_reshape_ssd/reshape_ssd_extension.hpp b/samples/cpp/hello_reshape_ssd/reshape_ssd_extension.hpp
similarity index 100%
rename from inference-engine/samples/hello_reshape_ssd/reshape_ssd_extension.hpp
rename to samples/cpp/hello_reshape_ssd/reshape_ssd_extension.hpp
diff --git a/inference-engine/samples/ngraph_function_creation_sample/CMakeLists.txt b/samples/cpp/ngraph_function_creation_sample/CMakeLists.txt
similarity index 100%
rename from inference-engine/samples/ngraph_function_creation_sample/CMakeLists.txt
rename to samples/cpp/ngraph_function_creation_sample/CMakeLists.txt
diff --git a/inference-engine/samples/ngraph_function_creation_sample/README.md b/samples/cpp/ngraph_function_creation_sample/README.md
similarity index 100%
rename from inference-engine/samples/ngraph_function_creation_sample/README.md
rename to samples/cpp/ngraph_function_creation_sample/README.md
diff --git a/inference-engine/samples/ngraph_function_creation_sample/lenet.bin b/samples/cpp/ngraph_function_creation_sample/lenet.bin
similarity index 100%
rename from inference-engine/samples/ngraph_function_creation_sample/lenet.bin
rename to samples/cpp/ngraph_function_creation_sample/lenet.bin
diff --git a/inference-engine/samples/ngraph_function_creation_sample/lenet.labels b/samples/cpp/ngraph_function_creation_sample/lenet.labels
similarity index 100%
rename from inference-engine/samples/ngraph_function_creation_sample/lenet.labels
rename to samples/cpp/ngraph_function_creation_sample/lenet.labels
diff --git a/inference-engine/samples/ngraph_function_creation_sample/main.cpp b/samples/cpp/ngraph_function_creation_sample/main.cpp
similarity index 100%
rename from inference-engine/samples/ngraph_function_creation_sample/main.cpp
rename to samples/cpp/ngraph_function_creation_sample/main.cpp
diff --git a/inference-engine/samples/ngraph_function_creation_sample/ngraph_function_creation_sample.hpp b/samples/cpp/ngraph_function_creation_sample/ngraph_function_creation_sample.hpp
similarity index 100%
rename from inference-engine/samples/ngraph_function_creation_sample/ngraph_function_creation_sample.hpp
rename to samples/cpp/ngraph_function_creation_sample/ngraph_function_creation_sample.hpp
diff --git a/inference-engine/samples/object_detection_sample_ssd/CMakeLists.txt b/samples/cpp/object_detection_sample_ssd/CMakeLists.txt
similarity index 100%
rename from inference-engine/samples/object_detection_sample_ssd/CMakeLists.txt
rename to samples/cpp/object_detection_sample_ssd/CMakeLists.txt
diff --git a/inference-engine/samples/object_detection_sample_ssd/README.md b/samples/cpp/object_detection_sample_ssd/README.md
similarity index 100%
rename from inference-engine/samples/object_detection_sample_ssd/README.md
rename to samples/cpp/object_detection_sample_ssd/README.md
diff --git a/inference-engine/samples/object_detection_sample_ssd/main.cpp b/samples/cpp/object_detection_sample_ssd/main.cpp
similarity index 100%
rename from inference-engine/samples/object_detection_sample_ssd/main.cpp
rename to samples/cpp/object_detection_sample_ssd/main.cpp
diff --git a/inference-engine/samples/object_detection_sample_ssd/object_detection_sample_ssd.h b/samples/cpp/object_detection_sample_ssd/object_detection_sample_ssd.h
similarity index 100%
rename from inference-engine/samples/object_detection_sample_ssd/object_detection_sample_ssd.h
rename to samples/cpp/object_detection_sample_ssd/object_detection_sample_ssd.h
diff --git a/inference-engine/samples/speech_sample/CMakeLists.txt b/samples/cpp/speech_sample/CMakeLists.txt
similarity index 100%
rename from inference-engine/samples/speech_sample/CMakeLists.txt
rename to samples/cpp/speech_sample/CMakeLists.txt
diff --git a/inference-engine/samples/speech_sample/README.md b/samples/cpp/speech_sample/README.md
similarity index 100%
rename from inference-engine/samples/speech_sample/README.md
rename to samples/cpp/speech_sample/README.md
diff --git a/inference-engine/samples/speech_sample/fileutils.cpp b/samples/cpp/speech_sample/fileutils.cpp
similarity index 100%
rename from inference-engine/samples/speech_sample/fileutils.cpp
rename to samples/cpp/speech_sample/fileutils.cpp
diff --git a/inference-engine/samples/speech_sample/fileutils.hpp b/samples/cpp/speech_sample/fileutils.hpp
similarity index 100%
rename from inference-engine/samples/speech_sample/fileutils.hpp
rename to samples/cpp/speech_sample/fileutils.hpp
diff --git a/inference-engine/samples/speech_sample/main.cpp b/samples/cpp/speech_sample/main.cpp
similarity index 100%
rename from inference-engine/samples/speech_sample/main.cpp
rename to samples/cpp/speech_sample/main.cpp
diff --git a/inference-engine/samples/speech_sample/speech_sample.hpp b/samples/cpp/speech_sample/speech_sample.hpp
similarity index 100%
rename from inference-engine/samples/speech_sample/speech_sample.hpp
rename to samples/cpp/speech_sample/speech_sample.hpp
diff --git a/inference-engine/samples/style_transfer_sample/CMakeLists.txt b/samples/cpp/style_transfer_sample/CMakeLists.txt
similarity index 100%
rename from inference-engine/samples/style_transfer_sample/CMakeLists.txt
rename to samples/cpp/style_transfer_sample/CMakeLists.txt
diff --git a/inference-engine/samples/style_transfer_sample/README.md b/samples/cpp/style_transfer_sample/README.md
similarity index 100%
rename from inference-engine/samples/style_transfer_sample/README.md
rename to samples/cpp/style_transfer_sample/README.md
diff --git a/inference-engine/samples/style_transfer_sample/main.cpp b/samples/cpp/style_transfer_sample/main.cpp
similarity index 100%
rename from inference-engine/samples/style_transfer_sample/main.cpp
rename to samples/cpp/style_transfer_sample/main.cpp
diff --git a/inference-engine/samples/style_transfer_sample/style_transfer_sample.h b/samples/cpp/style_transfer_sample/style_transfer_sample.h
similarity index 100%
rename from inference-engine/samples/style_transfer_sample/style_transfer_sample.h
rename to samples/cpp/style_transfer_sample/style_transfer_sample.h
diff --git a/samples/python/classification_sample_async/README.md b/samples/python/classification_sample_async/README.md
index cd2d09331d4..91db2152077 100644
--- a/samples/python/classification_sample_async/README.md
+++ b/samples/python/classification_sample_async/README.md
@@ -17,7 +17,7 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
| Validated Models | [alexnet](@ref omz_models_model_alexnet) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../inference-engine/samples/classification_sample_async/README.md) |
+| Other language realization | [C++](../../../samples/cpp/classification_sample_async/README.md) |
## How It Works
@@ -164,4 +164,4 @@ The sample application logs each step in a standard output stream and outputs to
[InferRequest.async_infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InferRequest.html#a95ebe0368cdf4d5d64f9fddc8ee1cd0e
[InferRequest.wait]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InferRequest.html#a936fa50a7531e2f9a9e9c3d45afc9b43
-[Blob.buffer]:https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1Blob.html#a0cad47b43204b115b4017b6b2564fa7e
\ No newline at end of file
+[Blob.buffer]:https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1Blob.html#a0cad47b43204b115b4017b6b2564fa7e
diff --git a/samples/python/hello_classification/README.md b/samples/python/hello_classification/README.md
index 06506cf97a4..810b24368c2 100644
--- a/samples/python/hello_classification/README.md
+++ b/samples/python/hello_classification/README.md
@@ -16,7 +16,7 @@ The following Inference Engine Python API is used in the application:
| Validated Models | [alexnet](@ref omz_models_model_alexnet), [googlenet-v1](@ref omz_models_model_googlenet_v1) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../inference-engine/samples/hello_classification/README.md), [C](../../c/hello_classification/README.md) |
+| Other language realization | [C++](../../../samples/cpp/hello_classification/README.md), [C](../../c/hello_classification/README.md) |
## How It Works
diff --git a/samples/python/hello_query_device/README.md b/samples/python/hello_query_device/README.md
index dd136a4422e..04d374872ce 100644
--- a/samples/python/hello_query_device/README.md
+++ b/samples/python/hello_query_device/README.md
@@ -12,7 +12,7 @@ The following Inference Engine Python API is used in the application:
| Options | Values |
| :------------------------- | :---------------------------------------------------------------------- |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../inference-engine/samples/hello_query_device/README.md) |
+| Other language realization | [C++](../../../samples/cpp/hello_query_device/README.md) |
## How It Works
diff --git a/samples/python/hello_reshape_ssd/README.md b/samples/python/hello_reshape_ssd/README.md
index 00ac3913689..d0a99e3d1a1 100644
--- a/samples/python/hello_reshape_ssd/README.md
+++ b/samples/python/hello_reshape_ssd/README.md
@@ -17,7 +17,7 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
| Validated Models | [mobilenet-ssd](@ref omz_models_model_mobilenet_ssd) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../inference-engine/samples/hello_reshape_ssd/README.md) |
+| Other language realization | [C++](../../../samples/cpp/hello_reshape_ssd/README.md) |
## How It Works
diff --git a/samples/python/ngraph_function_creation_sample/README.md b/samples/python/ngraph_function_creation_sample/README.md
index 1a12f420687..b38287ed63c 100644
--- a/samples/python/ngraph_function_creation_sample/README.md
+++ b/samples/python/ngraph_function_creation_sample/README.md
@@ -19,7 +19,7 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
| Model Format | Network weights file (\*.bin) |
| Validated images | The sample uses OpenCV\* to [read input grayscale image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png) or single-channel `ubyte` image |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../inference-engine/samples/ngraph_function_creation_sample/README.md) |
+| Other language realization | [C++](../../../samples/cpp/ngraph_function_creation_sample/README.md) |
## How It Works
diff --git a/samples/python/object_detection_sample_ssd/README.md b/samples/python/object_detection_sample_ssd/README.md
index 5aa41a1371a..99dc0354b2f 100644
--- a/samples/python/object_detection_sample_ssd/README.md
+++ b/samples/python/object_detection_sample_ssd/README.md
@@ -17,7 +17,7 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
| Validated Models | [mobilenet-ssd](@ref omz_models_model_mobilenet_ssd), [face-detection-0206](@ref omz_models_model_face_detection_0206) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../inference-engine/samples/object_detection_sample_ssd/README.md), [C](../../c/object_detection_sample_ssd/README.md) |
+| Other language realization | [C++](../../../samples/cpp/object_detection_sample_ssd/README.md), [C](../../c/object_detection_sample_ssd/README.md) |
## How It Works
@@ -130,4 +130,4 @@ The sample application logs each step in a standard output stream and creates an
[DataPtr.precision]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1DataPtr.html#data_fields
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
-[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
\ No newline at end of file
+[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
diff --git a/samples/python/speech_sample/README.md b/samples/python/speech_sample/README.md
index ec938af8f0c..5f97ffa1dc0 100644
--- a/samples/python/speech_sample/README.md
+++ b/samples/python/speech_sample/README.md
@@ -20,7 +20,7 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
| Validated Models | Acoustic model based on Kaldi* neural networks (see [Model Preparation](#model-preparation) section) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin) |
| Supported devices | See [Execution Modes](#execution-modes) section below and [List Supported Devices](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../inference-engine/samples/speech_sample/README.md) |
+| Other language realization | [C++](../../../samples/cpp/speech_sample/README.md) |
## How It Works
@@ -217,4 +217,4 @@ The sample application logs each step in a standard output stream.
[ExecutableNetwork.input_info]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#ac76a04c2918607874018d2e15a2f274f
[ExecutableNetwork.outputs]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#a4a631776df195004b1523e6ae91a65c1
[IECore.import_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#afdeac5192bb1d9e64722f1071fb0a64a
-[ExecutableNetwork.export]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#afa78158252f0d8070181bafec4318413
\ No newline at end of file
+[ExecutableNetwork.export]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#afa78158252f0d8070181bafec4318413
diff --git a/samples/python/style_transfer_sample/README.md b/samples/python/style_transfer_sample/README.md
index b75529bbd5e..30cf0b1f418 100644
--- a/samples/python/style_transfer_sample/README.md
+++ b/samples/python/style_transfer_sample/README.md
@@ -18,7 +18,7 @@ Basic Inference Engine API is covered by [Hello Classification Python* Sample](.
| Validated Models | [fast-neural-style-mosaic-onnx](@ref omz_models_model_fast_neural_style_mosaic_onnx) |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Supported devices | [All](../../../docs/IE_DG/supported_plugins/Supported_Devices.md) |
-| Other language realization | [C++](../../../inference-engine/samples/style_transfer_sample/README.md) |
+| Other language realization | [C++](../../../samples/cpp/style_transfer_sample/README.md) |
## How It Works
@@ -143,4 +143,4 @@ The sample application logs each step in a standard output stream and creates an
[IENetwork.batch_size]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IENetwork.html#a79a647cb1b49645616eaeb2ca255ef2e
[IECore.load_network]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html#ac9a2e043d14ccfa9c6bbf626cfd69fcc
[InputInfoPtr.input_data.shape]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1InputInfoPtr.html#data_fields
-[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
\ No newline at end of file
+[ExecutableNetwork.infer]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1ExecutableNetwork.html#aea96e8e534c8e23d8b257bad11063519
diff --git a/tools/benchmark_tool/README.md b/tools/benchmark_tool/README.md
index cccf1aaca0b..681a1d0e66a 100644
--- a/tools/benchmark_tool/README.md
+++ b/tools/benchmark_tool/README.md
@@ -3,7 +3,7 @@
This topic demonstrates how to run the Benchmark Python* Tool, which performs inference using convolutional networks. Performance can be measured for two inference modes: latency- and throughput-oriented.
-> **NOTE:** This topic describes usage of Python implementation of the Benchmark Tool. For the C++ implementation, refer to [Benchmark C++ Tool](../../inference-engine/samples/benchmark_app/README.md).
+> **NOTE:** This topic describes usage of Python implementation of the Benchmark Tool. For the C++ implementation, refer to [Benchmark C++ Tool](../../samples/cpp/benchmark_app/README.md).
> **TIP**: You can quick start with the Benchmark Tool inside the OpenVINO™ [Deep Learning Workbench](@ref openvino_docs_get_started_get_started_dl_workbench) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is the OpenVINO™ toolkit UI you to