.. OpenVINO Toolkit documentation master file, created by
   sphinx-quickstart on Wed Jul 7 10:46:56 2021.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

.. meta::
   :google-site-verification: _YqumYQ98cmXUTwtzM_0WIIadtDc6r_TMYGbmGgNvrk

OpenVINO™ Documentation
=======================

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference.

OpenVINO lets you process models built with Caffe, Keras, MXNet, TensorFlow, ONNX, and PyTorch. They can be easily optimized and deployed on devices running Windows, Linux, or macOS.

Check the full range of supported hardware on the Supported Devices page and see how it performs on the Performance Benchmarks page.

Train, Optimize, Deploy

* The ONNX format is also supported, but conversion to the OpenVINO IR format is recommended for better performance.

Want to know more?

Get Started

Learn how to download, install, and configure OpenVINO.

Open Model Zoo

Browse through over 200 publicly available neural networks and pick the right one for your solution.

Model Optimizer

Learn how to convert your model and optimize it for use with OpenVINO.

Tutorials

Learn how to use OpenVINO based on our training material.

Samples

Try OpenVINO using ready-made applications explaining various use cases.

DL Workbench

Learn about DL Workbench, the alternative, web-based version of OpenVINO. Installation of the DL Workbench container is required.

OpenVINO™ Runtime

Learn about OpenVINO's inference mechanism, which executes IR, ONNX, and PaddlePaddle models on target devices.

Tune & Optimize

Apply model-level (e.g., quantization) and runtime (application-level) optimizations to make your inference as fast as possible.

Performance Benchmarks

View performance benchmark results for various models on Intel platforms.

.. toctree::
   :maxdepth: 2
   :hidden:

   get_started
   documentation
   tutorials
   api/api_reference
   model_zoo
   resources