.. OpenVINO Toolkit documentation master file, created by
   sphinx-quickstart on Wed Jul 7 10:46:56 2021.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

.. meta::
   :google-site-verification: _YqumYQ98cmXUTwtzM_0WIIadtDc6r_TMYGbmGgNvrk


OpenVINO™ Documentation
=======================

.. raw:: html

   <div class="section" id="welcome-to-openvino-toolkit-s-documentation">
   <link rel="stylesheet" type="text/css" href="_static/css/homepage_style.css">
   <div style="clear:both;"> </div>
   <p>
   OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
   </p>
   <ul>
   <li>Boost deep learning performance in computer vision, automatic speech recognition, natural language processing, and other common tasks</li>
   <li>Use models trained with popular frameworks such as TensorFlow and PyTorch</li>
   <li>Reduce resource demands and deploy efficiently on a range of Intel® platforms from edge to cloud</li>
   </ul>
   <img class="HP_img_chart" src="_static/images/ov_chart.png"
        alt="OpenVINO can process models built with Caffe, Keras, MXNet, TensorFlow, ONNX, and PyTorch. They can be easily optimized and deployed on devices running Windows, Linux, or macOS." />
   <div style="clear:both;"> </div>
   <p>Check the full range of supported hardware on the
   <a href="openvino_docs_IE_DG_supported_plugins_Supported_Devices.html">Supported Devices page</a> and see how it stacks up on our
   <a href="openvino_docs_performance_benchmarks.html">Performance Benchmarks page</a>.<br />
   OpenVINO supports deployment on Windows, Linux, and macOS.
   </p>
   <div class="HP_separator-header">
   <p> Train, Optimize, Deploy </p>
   </div>
   <div style="clear:both;"> </div>
   <img class="HP_img_chart" src="_static/images/HP_ov_flow.svg" alt="The OpenVINO workflow: train, optimize, and deploy" />
   <p>* The ONNX format is also supported, but conversion to the OpenVINO format is recommended for better performance.</p>
   <div style="clear:both;"> </div>
   <div class="HP_separator-header">
   <p> Want to know more? </p>
   </div>
   <div style="clear:both;"> </div>
   <div class="HP_infoboxes">
   <a href="get_started.html">
   <h3>Get Started</h3>
   <p> Learn how to download, install, and configure OpenVINO. </p>
   </a>
   <a href="model_zoo.html">
   <h3>Open Model Zoo</h3>
   <p> Browse through over 200 publicly available neural networks and pick the right one for your solution. </p>
   </a>
   <a href="openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html">
   <h3>Model Optimizer</h3>
   <p> Learn how to convert your model and optimize it for use with OpenVINO. </p>
   </a>
   <a href="tutorials.html">
   <h3>Tutorials</h3>
   <p> Learn how to use OpenVINO based on our training material. </p>
   </a>
   <a href="openvino_docs_IE_DG_Samples_Overview.html">
   <h3>Samples</h3>
   <p> Try OpenVINO using ready-made applications that explain various use cases. </p>
   </a>
   <a href="workbench_docs_Workbench_DG_Introduction.html">
   <h3>DL Workbench</h3>
   <p> Learn about the alternative, web-based version of OpenVINO. Requires installation of the DL Workbench container. </p>
   </a>
   <a href="openvino_docs_OV_Runtime_User_Guide.html">
   <h3>OpenVINO™ Runtime</h3>
   <p> Learn about the OpenVINO inference mechanism, which executes IR, ONNX, and Paddle models on target devices. </p>
   </a>
   <a href="openvino_docs_optimization_guide_dldt_optimization_guide.html">
   <h3>Tune &amp; Optimize</h3>
   <p> Apply model-level (e.g., quantization) and runtime, i.e. application-level, optimizations to make your inference as fast as possible. </p>
   </a>
   <a href="openvino_docs_performance_benchmarks.html">
   <h3>Performance<br /> Benchmarks</h3>
   <p> View performance benchmark results for various models on Intel platforms. </p>
   </a>
   </div>
   <div style="clear:both;"> </div>
   </div>

.. toctree::
   :maxdepth: 2
   :hidden:

   get_started
   documentation
   tutorials
   api/api_reference
   model_zoo
   resources