.. OpenVINO Toolkit documentation master file, created by
   sphinx-quickstart on Wed Jul 7 10:46:56 2021. You can adapt this file
   completely to your liking, but it should at least contain the root
   `toctree` directive.

.. meta::
   :google-site-verification: _YqumYQ98cmXUTwtzM_0WIIadtDc6r_TMYGbmGgNvrk

OpenVINO™ Documentation
=======================
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.
It supports deployment on Windows, Linux, and macOS. Check the full range of
supported hardware on the Supported Devices page and see how it stacks up in
our Performance Benchmarks page.
Train, Optimize, Deploy
-----------------------
* The ONNX format is also supported, but converting models to the OpenVINO IR format is recommended for better performance.
Want to know more?
------------------
* Learn how to download, install, and configure OpenVINO.
* Browse over 200 publicly available neural networks and pick the right one for your solution.
* Learn how to convert your model and optimize it for use with OpenVINO.
* Learn how to use OpenVINO based on our training material.
* Try OpenVINO using ready-made applications that demonstrate various use cases.
* Learn about DL Workbench, the alternative, web-based version of OpenVINO. Installing the DL Workbench container is required.
* Learn about OpenVINO Runtime, the inference mechanism that executes IR, ONNX, and PaddlePaddle models on target devices.
* Apply model-level (e.g., quantization) and runtime-level (i.e., application-level) optimizations to make your inference as fast as possible.
* View performance benchmark results for various models on Intel platforms.