From 9df5856849afdc9d9d978352ab92d41a804e1f5a Mon Sep 17 00:00:00 2001
From: Maciej Smyk
Date: Tue, 25 Oct 2022 12:41:33 +0200
Subject: [PATCH] DOCS: Fix for Runtime Inference - master (#13551)

Port from #13549

Fixed the following issues:
- Switching between C++ and Python docs for "Shape Inference",
- Removed repetitions,
- Quote background in bullet list at the beginning of "Multi-device execution",
- Broken note directives,
- Fixed video player size in "Inference with OpenVINO Runtime",
- Standardized "Additional Resources" throughout Runtime Inference.
---
 docs/OV_Runtime_UG/ShapeInference.md                  | 2 +-
 docs/OV_Runtime_UG/automatic_batching.md              | 3 ++-
 docs/OV_Runtime_UG/hetero_execution.md                | 5 +++--
 docs/OV_Runtime_UG/integrate_with_your_application.md | 2 +-
 docs/OV_Runtime_UG/multi_device.md                    | 6 +++---
 docs/OV_Runtime_UG/openvino_intro.md                  | 2 +-
 docs/OV_Runtime_UG/performance_hints.md               | 4 ++--
 docs/OV_Runtime_UG/preprocessing_details.md           | 2 +-
 8 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/docs/OV_Runtime_UG/ShapeInference.md b/docs/OV_Runtime_UG/ShapeInference.md
index ad3973a969c..96f185dac52 100644
--- a/docs/OV_Runtime_UG/ShapeInference.md
+++ b/docs/OV_Runtime_UG/ShapeInference.md
@@ -1,6 +1,6 @@
 # Changing Input Shapes {#openvino_docs_OV_UG_ShapeInference}
-
+## Introduction (C++)
 
 @sphinxdirective
 
 .. raw:: html
diff --git a/docs/OV_Runtime_UG/automatic_batching.md b/docs/OV_Runtime_UG/automatic_batching.md
index 6709b27ef73..836c6daf231 100644
--- a/docs/OV_Runtime_UG/automatic_batching.md
+++ b/docs/OV_Runtime_UG/automatic_batching.md
@@ -151,4 +151,5 @@ This value also exposed as the final execution statistics on the `benchmark_app`
 This is NOT the actual latency of the batched execution, so you are recommended to refer to other metrics in the same log, for example, "Median" or "Average" execution.
 
 ### Additional Resources
-[Supported Devices](supported_plugins/Supported_Devices.md)
+
+* [Supported Devices](supported_plugins/Supported_Devices.md)
diff --git a/docs/OV_Runtime_UG/hetero_execution.md b/docs/OV_Runtime_UG/hetero_execution.md
index 7994e5e803d..9dcbf4bec6c 100644
--- a/docs/OV_Runtime_UG/hetero_execution.md
+++ b/docs/OV_Runtime_UG/hetero_execution.md
@@ -167,5 +167,6 @@ where:
 
 You can also point to more than two devices: `-d HETERO:MYRIAD,GPU,CPU`
 
-### See Also
-[Supported Devices](supported_plugins/Supported_Devices.md)
+### Additional Resources
+
+* [Supported Devices](supported_plugins/Supported_Devices.md)
diff --git a/docs/OV_Runtime_UG/integrate_with_your_application.md b/docs/OV_Runtime_UG/integrate_with_your_application.md
index 7a9c07b542f..3834880e900 100644
--- a/docs/OV_Runtime_UG/integrate_with_your_application.md
+++ b/docs/OV_Runtime_UG/integrate_with_your_application.md
@@ -359,9 +359,9 @@ cmake --build .
 
 - See the [OpenVINO Samples](Samples_Overview.md) page or the [Open Model Zoo Demos](https://docs.openvino.ai/nightly/omz_demos.html) page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
 - [OpenVINO™ Runtime Preprocessing](./preprocessing_overview.md)
 - [Using Encrypted Models with OpenVINO™](./protecting_model_guide.md)
-- [OpenVINO Samples](Samples_Overview.md)
 - [Open Model Zoo Demos](https://docs.openvino.ai/nightly/omz_demos.html)
+
 [ie_api_flow_cpp]: img/BASIC_IE_API_workflow_Cpp.svg
 [ie_api_use_cpp]: img/IMPLEMENT_PIPELINE_with_API_C.svg
 [ie_api_flow_python]: img/BASIC_IE_API_workflow_Python.svg
diff --git a/docs/OV_Runtime_UG/multi_device.md b/docs/OV_Runtime_UG/multi_device.md
index 8c3a8d49637..c239a85db15 100644
--- a/docs/OV_Runtime_UG/multi_device.md
+++ b/docs/OV_Runtime_UG/multi_device.md
@@ -4,8 +4,8 @@
 
 To run inference on multiple devices, you can choose either of the following ways:
 
-  - Use the :ref:`CUMULATIVE_THROUGHPUT option ` of the Automatic Device Selection mode. This way, you can use all available devices in the system without the need to specify them.
-  - Use the Multi-Device execution mode. This page will explain how it works and how to use it.
+- Use the :ref:`CUMULATIVE_THROUGHPUT option ` of the Automatic Device Selection mode. This way, you can use all available devices in the system without the need to specify them,
+- Use the Multi-Device execution mode. This page will explain how it works and how to use it.
 
 @endsphinxdirective
@@ -162,7 +162,7 @@ To facilitate the copy savings, it is recommended to run the requests in the ord
 
 
-## See Also
+## Additional Resources
 
 - [Supported Devices](supported_plugins/Supported_Devices.md)
 - [Automatic Device Selection](./auto_device_selection.md)
diff --git a/docs/OV_Runtime_UG/openvino_intro.md b/docs/OV_Runtime_UG/openvino_intro.md
index f7fa57a0510..b3c879da7f0 100644
--- a/docs/OV_Runtime_UG/openvino_intro.md
+++ b/docs/OV_Runtime_UG/openvino_intro.md
@@ -37,7 +37,7 @@ The scheme below illustrates the typical workflow for deploying a trained deep l
    * - .. raw:: html
 
-
+
    * - **OpenVINO Runtime Concept**. Duration: 3:43
diff --git a/docs/OV_Runtime_UG/performance_hints.md b/docs/OV_Runtime_UG/performance_hints.md
index 14669d39832..88f2b9c100e 100644
--- a/docs/OV_Runtime_UG/performance_hints.md
+++ b/docs/OV_Runtime_UG/performance_hints.md
@@ -131,5 +131,5 @@ The `benchmark_app`, that exists in both [C++](../../samples/cpp/benchmark_app/
 
   - - benchmark_app **-hint none -nstreams 1** -d 'device' -m 'path to your model'
 
-### See Also
-[Supported Devices](./supported_plugins/Supported_Devices.md)
+### Additional Resources
+* [Supported Devices](./supported_plugins/Supported_Devices.md)
diff --git a/docs/OV_Runtime_UG/preprocessing_details.md b/docs/OV_Runtime_UG/preprocessing_details.md
index 4e23ac48822..a75ef73bbd9 100644
--- a/docs/OV_Runtime_UG/preprocessing_details.md
+++ b/docs/OV_Runtime_UG/preprocessing_details.md
@@ -290,7 +290,7 @@ C++ references:
 
 Pre-processing API also allows adding `custom` preprocessing steps into an execution graph. The `custom` function accepts the current `input` node, applies the defined preprocessing operations, and returns a new node.
 
-> **Note:** Custom pre-processing function should only insert node(s) after the input. It is done during model compilation. This function will NOT be called during the execution phase. This may appear to be complicated and require knowledge of [OpenVINO™ operations](../ops/opset.md).
+> **NOTE**: Custom pre-processing function should only insert node(s) after the input. It is done during model compilation. This function will NOT be called during the execution phase. This may appear to be complicated and require knowledge of [OpenVINO™ operations](../ops/opset.md).
 
 If there is a need to insert additional operations to the execution graph right after the input, like some specific crops and/or resizes - Pre-processing API can be a good choice to implement this.
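Editor's note on the preprocessing hunk above: the point of the corrected NOTE is that a `custom` step's callback runs while the model is being compiled, not on every inference. The toy sketch below is plain Python, not the OpenVINO API; every name in it (`Graph`, `custom_step`, `my_custom_op`) is hypothetical and exists only to illustrate the compile-time vs. execution-time distinction.

```python
# Hypothetical sketch -- NOT the OpenVINO preprocessing API.
# It only models the behavior described in the NOTE: the custom
# callback inserts node(s) after the input during "compilation"
# and is never invoked again during "inference".

class Graph:
    def __init__(self):
        self.nodes = ["input"]

    def insert_after_input(self, node):
        # Custom preprocessing may only add nodes right after the input.
        self.nodes.insert(1, node)

calls = {"n": 0}

def custom_step(graph):
    # Runs once, while the execution graph is being built.
    calls["n"] += 1
    graph.insert_after_input("my_custom_op")

def compile_model(graph, steps):
    for step in steps:  # preprocessing steps are applied here, once
        step(graph)
    return graph

def infer(model, x):
    # Execution phase: the callback is NOT re-invoked; only the
    # node it inserted at compile time participates in the graph.
    return x  # placeholder for actual execution

model = compile_model(Graph(), [custom_step])
for _ in range(3):
    infer(model, 0)

# custom_step ran exactly once, at compile time, despite 3 inferences.
```

This mirrors why the real API "may appear to be complicated": the callback manipulates graph nodes, so writing one requires knowing the available operations, not just per-sample data transforms.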
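Editor's note on the automatic-batching hunk above: it warns that the value reported at `benchmark_app` exit is not the latency of the batched execution. A simplified, hypothetical model of the distinction (invented numbers; real devices add batch-collection wait and timeout effects the doc's other metrics capture):

```python
def batched_metrics(batch_size, batch_exec_ms):
    # Simplified model: a request in a fully packed batch completes only
    # when the whole batch does, so its observed latency is roughly the
    # batch execution time, while throughput scales with the batch size.
    latency_ms = batch_exec_ms
    throughput_fps = batch_size * 1000.0 / batch_exec_ms
    return latency_ms, throughput_fps

# Hypothetical numbers: batches of 4 requests finishing in 20 ms each.
lat, fps = batched_metrics(4, 20.0)
print(lat, fps)  # 20.0 200.0
```

Under this model every request "pays" the full batch time, which is why the per-request number at exit overstates single-request latency and the log's "Median" or "Average" execution metrics are the ones to read.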