DOCS: Add text for doc headers (#14671) (#14748)

port https://github.com/openvinotoolkit/openvino/pull/14671

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Karol Blaszczak 2022-12-20 23:05:11 +01:00, committed by GitHub
parent 63e0b47a7d
commit d7c3e0acaf
9 changed files with 65 additions and 63 deletions


@ -17,20 +17,21 @@ Once you have a model that meets both OpenVINO™ and your requirements, you can
@sphinxdirective
.. panels::
`Deploy Locally <openvino_deployment_guide>`_
:doc:`Deploy via OpenVINO Runtime <openvino_deployment_guide>`
^^^^^^^^^^^^^^
Local deployment uses OpenVINO Runtime installed on the device. It utilizes resources available to the system and provides the quickest way of launching inference.
Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
It utilizes resources available to the system and provides the quickest way of launching inference.
---
`Deploy by Model Serving <ovms_what_is_openvino_model_server>`_
:doc:`Deploy via Model Server <ovms_what_is_openvino_model_server>`
^^^^^^^^^^^^^^
Deployment via OpenVINO Model Server allows the device to connect to the server set up remotely. This way inference uses external resources instead of the ones provided by the device itself.
Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
This way inference can use external resources instead of those available to the application itself.
@endsphinxdirective
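To make the local-deployment option above more tangible, here is a minimal sketch using the OpenVINO Runtime Python API; the model path, device name, and input shape are placeholders for this example, not values taken from the guide.

```python
# Minimal local-deployment sketch: the application calls OpenVINO Runtime directly.
# "model.xml", the "CPU" device, and the input shape are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")               # IR, ONNX, and other supported formats
compiled_model = core.compile_model(model, "CPU")  # compile for a specific device

infer_request = compiled_model.create_infer_request()
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_request.infer({0: input_data})               # one synchronous inference
output = infer_request.get_output_tensor(0).data
print(output.shape)
```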
Apart from the default deployment options, you may also [deploy your application for the TensorFlow framework with OpenVINO Integration](./openvino_ecosystem_ovtf.md).


@ -19,15 +19,6 @@
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
### OpenVINO™ Model Server (OVMS)
OpenVINO Model Server is a scalable, high-performance solution for serving deep learning models optimized for Intel® architectures. The server uses Inference Engine libraries as a backend and exposes gRPC and HTTP/REST interfaces for inference that are fully compatible with TensorFlow Serving.
More resources:
* [OpenVINO documentation](https://docs.openvino.ai/latest/openvino_docs_ovms.html)
* [Docker Hub](https://hub.docker.com/r/openvino/model_server)
* [GitHub](https://github.com/openvinotoolkit/model_server)
* [Red Hat Ecosystem Catalog](https://catalog.redhat.com/software/container-stacks/detail/60649e41ccfb383fe395a167)
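As a hedged illustration of the serving option above, the snippet below sends a request to OpenVINO Model Server over its TensorFlow-Serving-compatible REST API; the host, port, model name, and input layout are assumptions for the example and depend on how the server was started.

```python
# Hypothetical client for a model served by OVMS under the name "my_model" on localhost:9000.
# The request/response format follows the TensorFlow Serving REST API that OVMS exposes.
import numpy as np
import requests

payload = {"instances": np.random.rand(1, 3, 224, 224).tolist()}
response = requests.post(
    "http://localhost:9000/v1/models/my_model:predict",  # port and name depend on server config
    json=payload,
    timeout=10,
)
response.raise_for_status()
predictions = response.json()["predictions"]
print(len(predictions))
```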
### Neural Network Compression Framework (NNCF)
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
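As a rough sketch of how this is typically wired into a PyTorch training script (the model, config values, and helper names are illustrative and may differ between NNCF releases):

```python
# Illustrative quantization-aware training setup with NNCF; not taken from this commit.
# The config values and the ResNet-18 model are placeholders.
import torchvision
from nncf import NNCFConfig
from nncf.torch import create_compressed_model

model = torchvision.models.resnet18(pretrained=True)

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},
})

# Wraps the model with fake-quantization operations; training then proceeds as usual
# and the compressed model can later be exported for OpenVINO inference.
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)
# ... run your normal training loop on compressed_model ...
```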


@ -1,4 +1,4 @@
# OPENVINO Workflow {#openvino_workflow}
# OpenVINO Workflow {#openvino_workflow}
@sphinxdirective
@ -11,43 +11,14 @@
Model Optimization and Compression <openvino_docs_model_optimization_guide>
Deployment <openvino_docs_deployment_guide_introduction>
@endsphinxdirective
| :doc:`Model Preparation <openvino_docs_model_processing_introduction>`
| With Model Downloader and Model Optimizer guides, you will learn to download pre-trained models and convert them for use with OpenVINO™. You can use your own models or choose some from a broad selection provided in the Open Model Zoo.
| :doc:`Model Optimization and Compression <openvino_docs_model_optimization_guide>`
| In this section you will find out how to optimize a model to achieve better inference performance. It describes multiple optimization methods for both the training and post-training stages.
THIS IS A PAGE ABOUT THE WORKFLOW
@sphinxdirective
.. raw:: html
<div class="section" id="welcome-to-openvino-toolkit-s-documentation">
<link rel="stylesheet" type="text/css" href="_static/css/homepage_style.css">
<div style="clear:both;"> </div>
<div id="HP_flow-container">
<div class="HP_flow-btn">
<a href="https://docs.openvino.ai/latest/openvino_docs_model_processing_introduction.html">
<img src="_static/images/OV_flow_model_hvr.svg" alt="link to model processing introduction" />
</a>
</div>
<div class="HP_flow-arrow" >
<img src="_static/images/OV_flow_arrow.svg" alt="" />
</div>
<div class="HP_flow-btn">
<a href="https://docs.openvino.ai/latest/openvino_docs_deployment_optimization_guide_dldt_optimization_guide.html">
<img src="_static/images/OV_flow_optimization_hvr.svg" alt="link to an optimization guide" />
</a>
</div>
<div class="HP_flow-arrow" >
<img src="_static/images/OV_flow_arrow.svg" alt="" />
</div>
<div class="HP_flow-btn">
<a href="https://docs.openvino.ai/latest/openvino_docs_deployment_guide_introduction.html">
<img src="_static/images/OV_flow_deployment_hvr.svg" alt="link to deployment introduction" />
</a>
</div>
</div>
| :doc:`Deployment <openvino_docs_deployment_guide_introduction>`
| This section explains the process of deploying your own inference application using either OpenVINO Runtime or OpenVINO Model Server. It describes how to run inference, which is the most basic form of deployment and the quickest way of launching it.
@endsphinxdirective
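To ground the Model Preparation step described above, here is one simple, hedged path: reading an ONNX model directly and saving it as OpenVINO IR. The file names are placeholders, and the Model Optimizer (mo) tool covered in the guide supports more frameworks and conversion options.

```python
# Read a model in a supported framework format and save it as OpenVINO IR (.xml + .bin).
# "model.onnx" and the output paths are placeholders.
from openvino.runtime import Core, serialize

core = Core()
ov_model = core.read_model("model.onnx")
serialize(ov_model, "model_ir.xml", "model_ir.bin")
```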


@ -9,6 +9,7 @@
openvino_docs_OV_UG_Model_Representation
openvino_docs_OV_UG_Infer_request
openvino_docs_OV_UG_Python_API_exclusives
openvino_docs_MO_DG_TensorFlow_Frontend
@endsphinxdirective


@ -3,7 +3,7 @@ Supported Devices {#openvino_docs_OV_UG_supported_plugins_Supported_Devices}
The OpenVINO Runtime can infer models in different formats, with various input and output formats. This section provides supported and optimal configurations per device. In OpenVINO™ documentation, "device" refers to an Intel® processor used for inference, which can be a supported CPU, GPU, VPU (vision processing unit), GNA (Gaussian neural accelerator coprocessor), or a combination of those devices.
> **NOTE**: With OpenVINO™ 2020.4 release, Intel® Movidius™ Neural Compute Stick is no longer supported.
> **NOTE**: With OpenVINO™ 2020.4 release, Intel® Movidius™ Neural Compute Stick support has been discontinued.
The OpenVINO Runtime provides unique capabilities to infer deep learning models on the following device types with corresponding plugins:
@ -18,6 +18,8 @@ The OpenVINO Runtime provides unique capabilities to infer deep learning models
|[Auto-Device plugin](../auto_device_selection.md) |Auto-Device plugin enables selecting Intel&reg; device for inference automatically |
|[Heterogeneous plugin](../hetero_execution.md) |Heterogeneous execution enables automatic inference splitting between several devices (for example if a device doesn't [support certain operation](#supported-layers)). |
> **NOTE**: ARM® CPU plugin is a community-level add-on to OpenVINO™. Intel® welcomes community participation in the OpenVINO™ ecosystem, technical questions and code contributions on community forums. However, this component has not undergone full release validation or qualification from Intel®, hence no official support is offered.
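To show how the plugins listed above are chosen in practice, here is a short hedged sketch using device strings with the Python API; the model path and device names are placeholders and assume the corresponding plugins are available.

```python
# Device selection happens through the device string passed at compile time.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder model

compiled_auto = core.compile_model(model, "AUTO")              # Auto-Device plugin picks a device
compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")  # split execution across devices

print(core.available_devices)  # devices the runtime can see on this machine
```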
Devices similar to the ones we have used for benchmarking can be accessed using [Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/), a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. [Learn more](https://devcloud.intel.com/edge/get_started/devcloud/) or [Register here](https://inteliot.force.com/DevcloudForEdge/s/).
## Supported Configurations


@ -17,4 +17,24 @@
This section provides reference documents that guide you through the OpenVINO toolkit workflow, from preparing and optimizing models to deploying them in your own deep learning applications.
@sphinxdirective
| :doc:`API Reference doc path <api/api_reference>`
| A collection of reference articles for OpenVINO C++, C, and Python APIs.
| :doc:`OpenVINO Ecosystem <openvino_ecosystem>`
| Apart from the core components, OpenVINO offers tools, plugins, and expansions that revolve around it, even if they are not necessary parts of its workflow. This section gives you an overview of what makes up the OpenVINO toolkit.
| :doc:`OpenVINO Extensibility Mechanism <openvino_docs_Extensibility_UG_Intro>`
| The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. Learn how to extend OpenVINO functionality with custom settings.
| :doc:`Media Processing and Computer Vision Libraries <media_processing_cv_libraries>`
| The OpenVINO™ toolkit also works with the following media processing frameworks and libraries:
| • Intel® Deep Learning Streamer (Intel® DL Streamer) — A streaming media analytics framework based on GStreamer, for creating complex media analytics pipelines optimized for Intel hardware platforms. Go to the Intel® DL Streamer documentation website to learn more.
| • Intel® oneAPI Video Processing Library (oneVPL) — A programming interface for video decoding, encoding, and processing to build portable media pipelines on CPUs, GPUs, and other accelerators.
| You can also add computer vision capabilities to your application using optimized versions of OpenCV.
| :doc:`OpenVINO™ Security <openvino_docs_security_guide_introduction>`
| Learn how to use OpenVINO securely and protect your data to meet specific security and privacy requirements.
@endsphinxdirective


@ -88,5 +88,5 @@ Pipeline and model configuration features in OpenVINO Runtime allow you to easil
### <a name="additional-resources"></a>Additional Resources
* [OpenVINO Success Stories](https://www.intel.com/content/www/us/en/internet-of-things/ai-in-production/success-stories.html) - See how Intel partners have successfully used OpenVINO in production applications to solve real-world problems.
* OpenVINO Supported Models (coming soon!) - Check which models OpenVINO supports on your hardware
* [Performance Benchmarks](./benchmarks/performance_benchmarks.md) - View results from benchmarking models with OpenVINO on Intel hardware
* [OpenVINO Supported Models](./resources/supported_models.md) - Check which models OpenVINO supports on your hardware.
* [Performance Benchmarks](./benchmarks/performance_benchmarks.md) - View results from benchmarking models with OpenVINO on Intel hardware.


@ -15,4 +15,17 @@
This section will help you get hands-on experience with OpenVINO even if you are just starting
to learn what OpenVINO is and how it works. It includes various types of learning materials
accommodating different learning needs, which means you should find it useful if you are a beginner,
as well as an experienced user.
@sphinxdirective
| :doc:`Tutorials <tutorials>`
| A collection of interactive Python tutorials. It introduces you to the OpenVINO™ toolkit, explaining how to use the Python API and tools for optimized deep learning inference. The tutorials are available in Jupyter notebooks and can be run in your browser. No installation required.
| :doc:`OpenVINO Samples <openvino_docs_OV_UG_Samples_Overview>`
| The OpenVINO samples (Python and C++) are simple console applications that show how to use specific OpenVINO API features. They can assist you in executing tasks such as loading a model, running inference, querying particular device capabilities, etc.
| :doc:`OpenVINO™ API 2.0 Transition Guide <openvino_2_0_transition_guide>`
| With the release of 2022.1, OpenVINO introduced its improved API 2.0 and its new OpenVINO IR model format: IR v11. This guide will instruct you on how to adopt the new solution, as well as show you the benefits of the new logic of working with models.
@endsphinxdirective
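As a hedged illustration of the API 2.0 transition mentioned above, the snippet below contrasts the legacy Inference Engine calls with their API 2.0 counterparts; the model paths are placeholders and the legacy calls are shown only as comments for comparison.

```python
# Legacy Inference Engine API (pre-2022.1), shown only for comparison:
#   from openvino.inference_engine import IECore
#   ie = IECore()
#   net = ie.read_network(model="model.xml", weights="model.bin")
#   exec_net = ie.load_network(network=net, device_name="CPU")

# API 2.0 equivalent (2022.1 and later):
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")               # weights are located next to the .xml
compiled_model = core.compile_model(model, "CPU")
```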


@ -25,25 +25,28 @@
openvino_docs_OV_Glossary
openvino_docs_Legal_Information
openvino_docs_telemetry_information
openvino_docs_MO_DG_TensorFlow_Frontend
Case Studies <https://www.intel.com/openvino-success-stories>
@endsphinxdirective
This section includes a variety of reference information focusing mostly on describing OpenVINO
and its proprietary model format, OpenVINO IR.
[Performance Benchmarks](../benchmarks/performance_benchmarks.md) contain results from benchmarking models with OpenVINO on Intel hardware.
[OpenVINO IR format](openvino_ir.md) is the proprietary model format of OpenVINO. Read more details on its operations and usage.
[Supported Devices](../OV_Runtime_UG/supported_plugins/Supported_Devices.md) is compatibility information about supported hardware accelerators.
[Supported Models](supported_models.md) is a table of models officially supported by OpenVINO.
[Supported Framework Layers](../MO_DG/prepare_model/Supported_Frameworks_Layers.md) are lists of framework layers supported by OpenVINO.
[Glossary](../glossary.md) contains terms used in OpenVINO.
[Legal Information](../Legal_Information.md) has trademark information and other legal statements.
[Available Operation Sets](../ops/opset.md) is a list of supported operations and an explanation of supported capabilities.
[OpenVINO™ Telemetry](telemetry_information.md) has detailed information on the telemetry data collection.
[Broadcast Rules for Elementwise Operations](../ops/broadcast_rules.md) explains the rules used to support an arbitrary number of dimensions in neural nets.
Links to [articles](https://www.intel.com/openvino-success-stories) about real-world examples of OpenVINO™ usage.
[Release Notes](https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html) contains change logs and notes for each OpenVINO release.
[Case Studies](https://www.intel.com/openvino-success-stories) are articles about real-world examples of OpenVINO™ usage.