DOCS: Fix for Runtime Inference - master (#13551)

Port from #13549

Fixed the following issues:

- Fixed switching between the C++ and Python docs in "Shape Inference".
- Removed repetitions.
- Fixed the quote background in the bullet list at the beginning of "Multi-device execution".
- Fixed broken note directives.
- Fixed the video player size in "Inference with OpenVINO Runtime".
- Standardized the "Additional Resources" sections throughout Runtime Inference.
Author: Maciej Smyk
Date: 2022-10-25 12:41:33 +02:00
Committed by: GitHub
Parent: 37abc159ef
Commit: 9df5856849

8 changed files with 14 additions and 12 deletions


@@ -1,6 +1,6 @@
# Changing Input Shapes {#openvino_docs_OV_UG_ShapeInference}
## Introduction (C++)
@sphinxdirective
.. raw:: html


@@ -151,4 +151,5 @@ This value also exposed as the final execution statistics on the `benchmark_app`
This is NOT the actual latency of the batched execution, so you are recommended to refer to other metrics in the same log, for example, "Median" or "Average" execution.
### Additional Resources
-[Supported Devices](supported_plugins/Supported_Devices.md)
+* [Supported Devices](supported_plugins/Supported_Devices.md)


@@ -167,5 +167,6 @@ where:
You can also point to more than two devices: `-d HETERO:MYRIAD,GPU,CPU`
-### See Also
-[Supported Devices](supported_plugins/Supported_Devices.md)
+### Additional Resources
+* [Supported Devices](supported_plugins/Supported_Devices.md)
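
For reference, here is a minimal C++ sketch of what the `HETERO` device notation shown in this hunk maps to in code. It is illustrative only and not part of this diff; the model path and the device priority list are placeholder assumptions.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // "model.xml" is a placeholder path for an IR model.
    auto model = core.read_model("model.xml");
    // HETERO splits the graph across the listed devices in priority order,
    // mirroring the CLI form `-d HETERO:GPU,CPU`.
    auto compiled = core.compile_model(model, "HETERO:GPU,CPU");
    return 0;
}
```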


@@ -359,9 +359,9 @@ cmake --build .
-- See the [OpenVINO Samples](Samples_Overview.md) page or the [Open Model Zoo Demos](https://docs.openvino.ai/nightly/omz_demos.html) page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
- [OpenVINO™ Runtime Preprocessing](./preprocessing_overview.md)
- [Using Encrypted Models with OpenVINO™](./protecting_model_guide.md)
+- [OpenVINO Samples](Samples_Overview.md)
+- [Open Model Zoo Demos](https://docs.openvino.ai/nightly/omz_demos.html)
[ie_api_flow_cpp]: img/BASIC_IE_API_workflow_Cpp.svg
[ie_api_use_cpp]: img/IMPLEMENT_PIPELINE_with_API_C.svg
[ie_api_flow_python]: img/BASIC_IE_API_workflow_Python.svg


@@ -4,8 +4,8 @@
To run inference on multiple devices, you can choose either of the following ways:
-- Use the :ref:`CUMULATIVE_THROUGHPUT option <cumulative throughput>` of the Automatic Device Selection mode. This way, you can use all available devices in the system without the need to specify them.
-- Use the Multi-Device execution mode. This page will explain how it works and how to use it.
+- Use the :ref:`CUMULATIVE_THROUGHPUT option <cumulative throughput>` of the Automatic Device Selection mode. This way, you can use all available devices in the system without the need to specify them,
+- Use the Multi-Device execution mode. This page will explain how it works and how to use it.
@endsphinxdirective
@@ -162,7 +162,7 @@ To facilitate the copy savings, it is recommended to run the requests in the ord
-## See Also
+## Additional Resources
- [Supported Devices](supported_plugins/Supported_Devices.md)
- [Automatic Device Selection](./auto_device_selection.md)
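
As an aside, a minimal C++ sketch of the two options listed in this file's first hunk (illustrative, not part of this commit; `model.xml` and the device names are placeholder assumptions):

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder path

    // Multi-Device execution: run inference on GPU and CPU in parallel.
    auto on_multi = core.compile_model(model, "MULTI:GPU,CPU");

    // The AUTO alternative mentioned in the bullet list: cumulative
    // throughput uses all available devices without listing them explicitly.
    auto on_auto = core.compile_model(model, "AUTO",
        ov::hint::performance_mode(ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT));
    return 0;
}
```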


@@ -37,7 +37,7 @@ The scheme below illustrates the typical workflow for deploying a trained deep l
* - .. raw:: html
-<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="100%"
+<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="560"
src="https://www.youtube.com/embed/e6R13V8nbak">
</iframe>
* - **OpenVINO Runtime Concept**. Duration: 3:43


@@ -131,5 +131,5 @@ The `benchmark_app`, that exists in both [C++](../../samples/cpp/benchmark_app/
- - benchmark_app **-hint none -nstreams 1** -d 'device' -m 'path to your model'
-### See Also
-[Supported Devices](./supported_plugins/Supported_Devices.md)
+### Additional Resources
+* [Supported Devices](./supported_plugins/Supported_Devices.md)
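
For context, a hedged C++ sketch of what the `benchmark_app` hint flags shown in this hunk correspond to in the API (not part of this diff; the device name and model path are assumptions):

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder path

    // Rough equivalent of `-hint throughput`: the device derives the
    // number of streams and other knobs from the high-level hint.
    auto hinted = core.compile_model(model, "CPU",
        ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));

    // Rough equivalent of `-hint none -nstreams 1`: no hint, with an
    // explicit stream count set directly.
    auto manual = core.compile_model(model, "CPU", ov::num_streams(1));
    return 0;
}
```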


@@ -290,7 +290,7 @@ C++ references:
Pre-processing API also allows adding `custom` preprocessing steps into an execution graph. The `custom` function accepts the current `input` node, applies the defined preprocessing operations, and returns a new node.
-> **Note:** Custom pre-processing function should only insert node(s) after the input. It is done during model compilation. This function will NOT be called during the execution phase. This may appear to be complicated and require knowledge of [OpenVINO™ operations](../ops/opset.md).
+> **NOTE** : Custom pre-processing function should only insert node(s) after the input. It is done during model compilation. This function will NOT be called during the execution phase. This may appear to be complicated and require knowledge of [OpenVINO™ operations](../ops/opset.md).
If there is a need to insert additional operations to the execution graph right after the input, like some specific crops and/or resizes - Pre-processing API can be a good choice to implement this.
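
To illustrate the `custom` step described in this hunk, a minimal C++ sketch (illustrative only; the model path and the inserted `Abs` operation are arbitrary choices for this example):

```cpp
#include <openvino/openvino.hpp>
#include <openvino/core/preprocess/pre_post_process.hpp>
#include <openvino/opsets/opset8.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder path

    ov::preprocess::PrePostProcessor ppp(model);
    // The custom function receives the current input node and must return
    // a new node; here it inserts an Abs operation right after the input.
    ppp.input().preprocess().custom(
        [](const ov::Output<ov::Node>& node) -> ov::Output<ov::Node> {
            return std::make_shared<ov::opset8::Abs>(node);
        });
    // The insertion happens here, at build time, not during inference.
    model = ppp.build();
    return 0;
}
```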