From 77cde47801930578e1fe4d906a3eb0fc6b4a58d1 Mon Sep 17 00:00:00 2001
From: Maciej Smyk
Date: Thu, 5 Oct 2023 09:01:40 +0200
Subject: [PATCH] [DOCS] Inference with OpenVINO Runtime update for master
 (#20237)

* Update openvino_intro.md

* Update docs/articles_en/openvino_workflow/openvino_intro.md

Co-authored-by: Karol Blaszczak

* Update docs/articles_en/openvino_workflow/openvino_intro.md

Co-authored-by: Karol Blaszczak

---------

Co-authored-by: Karol Blaszczak
---
 .../openvino_workflow/openvino_intro.md       | 26 +++++++++----------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/docs/articles_en/openvino_workflow/openvino_intro.md b/docs/articles_en/openvino_workflow/openvino_intro.md
index 79395a748b4..40db0d15b52 100644
--- a/docs/articles_en/openvino_workflow/openvino_intro.md
+++ b/docs/articles_en/openvino_workflow/openvino_intro.md
@@ -22,7 +22,18 @@
 on different platforms.
 
 
-OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, TensorFlow Lite, ONNX, or PaddlePaddle model and execute it on preferred devices.
+OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read PyTorch, TensorFlow, TensorFlow Lite, ONNX, and PaddlePaddle models and execute them on preferred devices. OpenVINO gives you the option to use these models directly or convert them to the OpenVINO IR (Intermediate Representation) format explicitly, for maximum performance.
+
+
+.. note::
+
+   For more detailed information on how to convert, read, and compile supported model formats,
+   see the :doc:`Supported Formats article `.
+
+   Note that PyTorch models can be run using the
+   :doc:`torch.compile feature `, as well as the standard ways of
+   :doc:`converting PyTorch `
+   or reading them directly.
 
 OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain complete implementation for inference on a particular Intel® hardware device: CPU, GPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or API interoperability between OpenVINO Runtime and the underlying plugin backend.
@@ -32,17 +43,4 @@ The scheme below illustrates the typical workflow for deploying a trained deep l
 
 
 .. image:: _static/images/BASIC_FLOW_IE_C.svg
 
-Video
-####################
-
-
-.. list-table::
-
-   * - .. raw:: html
-
-
-   * - **OpenVINO Runtime Concept**. Duration: 3:43
-
 @endsphinxdirective