diff --git a/docs/Documentation/inference_modes_overview.md b/docs/Documentation/inference_modes_overview.md
index 970820236d7..7372a047466 100644
--- a/docs/Documentation/inference_modes_overview.md
+++ b/docs/Documentation/inference_modes_overview.md
@@ -10,15 +10,15 @@
    openvino_docs_OV_UG_Running_on_multiple_devices
    openvino_docs_OV_UG_Hetero_execution
    openvino_docs_OV_UG_Automatic_Batching
-
-@endsphinxdirective
-OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the [guide on inference devices](../OV_Runtime_UG/supported_plugins/Device_Plugins.md).
+
+OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the :doc:`guide on inference devices `.
 
 The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability.
 The automated modes are:
-* [Automatic Device Selection (AUTO)](../OV_Runtime_UG/auto_device_selection.md)
-* [Multi-Device Execution (MULTI)](../OV_Runtime_UG/multi_device.md)
-* [Heterogeneous Execution (HETERO)](../OV_Runtime_UG/hetero_execution.md)
-* [Automatic Batching Execution (Auto-batching)](../OV_Runtime_UG/automatic_batching.md)
+* :doc:`Automatic Device Selection (AUTO) `
+* :doc:`Multi-Device Execution (MULTI) `
+* :doc:`Heterogeneous Execution (HETERO) `
+* :doc:`Automatic Batching Execution (Auto-batching) `
+@endsphinxdirective
diff --git a/docs/OV_Runtime_UG/img/BASIC_FLOW_IE_C.svg b/docs/OV_Runtime_UG/img/BASIC_FLOW_IE_C.svg
deleted file mode 100644
index 6b8ad0ef282..00000000000
--- a/docs/OV_Runtime_UG/img/BASIC_FLOW_IE_C.svg
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ccc7704d2a27f7491729767443f3d2bdd0ccc930f16fde631a7f9c67d158297a
-size 71369
diff --git a/docs/OV_Runtime_UG/openvino_intro.md b/docs/OV_Runtime_UG/openvino_intro.md
index b226ac39773..62c394eb070 100644
--- a/docs/OV_Runtime_UG/openvino_intro.md
+++ b/docs/OV_Runtime_UG/openvino_intro.md
@@ -14,22 +14,21 @@
    openvino_docs_OV_UG_ShapeInference
    openvino_docs_OV_UG_DynamicShapes
    openvino_docs_OV_UG_model_state_intro
-
-@endsphinxdirective
+
 OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, ONNX, or PaddlePaddle model and execute it on preferred devices.
 
 OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain complete implementation for inference on a particular Intel® hardware device: CPU, GPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or API interoperability between OpenVINO Runtime and underlying plugin backend.
-
-The scheme below illustrates the typical workflow for deploying a trained deep learning model:
-
-![](img/BASIC_FLOW_IE_C.svg)
+The scheme below illustrates the typical workflow for deploying a trained deep learning model:
 
-## Video
+.. image:: _static/images/BASIC_FLOW_IE_C.svg
+
+
+Video
+####################
 
-@sphinxdirective
 
 .. list-table::
@@ -39,5 +38,5 @@ The scheme below illustrates the typical workflow for deploying a trained deep l
            src="https://www.youtube.com/embed/e6R13V8nbak">
    * - **OpenVINO Runtime Concept**. Duration: 3:43
-
+
 @endsphinxdirective
diff --git a/docs/img/BASIC_FLOW_IE_C.svg b/docs/_static/images/BASIC_FLOW_IE_C.svg
similarity index 100%
rename from docs/img/BASIC_FLOW_IE_C.svg
rename to docs/_static/images/BASIC_FLOW_IE_C.svg
diff --git a/docs/resources/tensorflow_frontend.md b/docs/resources/tensorflow_frontend.md
index 15926715f5f..47f9cce8ca2 100644
--- a/docs/resources/tensorflow_frontend.md
+++ b/docs/resources/tensorflow_frontend.md
@@ -1,8 +1,10 @@
 # OpenVINO TensorFlow Frontend Capabilities and Limitations {#openvino_docs_MO_DG_TensorFlow_Frontend}
 
+@sphinxdirective
+
 TensorFlow Frontend is C++ based Frontend for conversion of TensorFlow models and is available as a preview feature starting from 2022.3.
-That means that you can start experimenting with `--use_new_frontend` option passed to Model Optimizer to enjoy improved conversion time for limited scope of models
-or directly loading TensorFlow models through `read_model()` method.
+That means that you can start experimenting with the ``--use_new_frontend`` option passed to Model Optimizer to enjoy improved conversion time for a limited scope of models,
+or by directly loading TensorFlow models through the ``read_model()`` method.
 The current limitations:
@@ -10,4 +12,6 @@ The current limitations:
 * There is no full parity yet between legacy Model Optimizer TensorFlow Frontend and new TensorFlow Frontend so primary path for model conversion is still legacy frontend
 * Model coverage and performance is continuously improving so some conversion phase failures, performance and accuracy issues might occur in case model is not yet covered. Known unsupported models: object detection models and all models with transformation configs, models with TF1/TF2 control flow, Complex type and training parts
-* `read_model()` method supports only `*.pb` format while Model Optimizer (or `convert_model` call) will accept other formats as well which are accepted by existing legacy frontend
+* The ``read_model()`` method supports only the ``*.pb`` format, while Model Optimizer (or a ``convert_model`` call) also accepts the other formats supported by the existing legacy frontend
+
+@endsphinxdirective
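
The automated modes listed in the first file above (AUTO, MULTI, HETERO) are all selected at runtime through the device string passed to ``compile_model()``. As a rough sketch of how those strings are composed, the helpers below build them; note that these helper functions are hypothetical illustrations, not part of the OpenVINO API, and the device names used are placeholders. The ``Core``/``compile_model`` calls shown in comments assume the 2022.x Python API.

```python
# Hedged sketch: composing the device strings that OpenVINO's
# compile_model() accepts for the automated inference modes.
# The helper functions here are hypothetical (not OpenVINO API);
# device names such as "CPU" and "GPU" are illustrative.

def auto_device(*candidates):
    """AUTO mode: OpenVINO picks a device, optionally from a candidate list."""
    return "AUTO" if not candidates else "AUTO:" + ",".join(candidates)

def multi_device(*devices):
    """MULTI mode: run inference on several devices in parallel."""
    return "MULTI:" + ",".join(devices)

def hetero_device(*devices):
    """HETERO mode: split one model across devices, in priority order."""
    return "HETERO:" + ",".join(devices)

# With the 2022.x Python API the strings would be used roughly as below
# (needs an installed OpenVINO runtime and a model file, so shown only
# as a comment):
#
#   from openvino.runtime import Core
#   core = Core()
#   model = core.read_model("model.xml")   # IR; a *.pb also loads via the TF frontend
#   compiled = core.compile_model(model, multi_device("GPU", "CPU"))

print(auto_device())                  # AUTO
print(multi_device("GPU", "CPU"))     # MULTI:GPU,CPU
print(hetero_device("GPU", "CPU"))    # HETERO:GPU,CPU
```

The priority order matters for HETERO (earlier devices get the supported layers first), whereas MULTI treats the listed devices as a parallel pool.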