Adding Quantizing with Accuracy Control using NNCF notebook (#19585)
parent 8f4d72826a
commit 2d760ba1bf
@@ -1,7 +1,7 @@
Hello Image Classification
==========================

-.. _top:
+

This basic introduction to OpenVINO™ shows how to do inference with an
image classification model.
@@ -15,6 +15,10 @@ created, refer to the `TensorFlow to
OpenVINO <101-tensorflow-classification-to-openvino-with-output.html>`__
tutorial.

+
+
+.. _top:
+
**Table of contents**:

- `Imports <#imports>`__
@@ -1,7 +1,7 @@
Hello Image Segmentation
========================

-.. _top:
+

A very basic introduction to using segmentation models with OpenVINO™.

@@ -12,6 +12,10 @@ Zoo <https://github.com/openvinotoolkit/open_model_zoo/>`__ is used.
ADAS stands for Advanced Driver Assistance Services. The model
recognizes four classes: background, road, curb and mark.

+
+
+.. _top:
+
**Table of contents**:

- `Imports <#imports>`__
@@ -1,7 +1,7 @@
Hello Object Detection
======================

-.. _top:
+

A very basic introduction to using object detection models with
OpenVINO™.
@@ -18,6 +18,10 @@ corner, ``(x_max, y_max)`` are the coordinates of the bottom right
bounding box corner and ``conf`` is the confidence for the predicted
class.

+
+
+.. _top:
+
**Table of contents**:

- `Imports <#imports>`__
@@ -1,7 +1,7 @@
Convert a TensorFlow Model to OpenVINO™
=======================================

-.. _top:
+

| This short tutorial shows how to convert a TensorFlow
`MobileNetV3 <https://docs.openvino.ai/2023.0/omz_models_model_mobilenet_v3_small_1_0_224_tf.html>`__
@@ -13,7 +13,11 @@ Convert a TensorFlow Model to OpenVINO™
Runtime <https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html>`__
and do inference with a sample image.

-| **Table of contents**:
+
+| .. _top:
+
+**Table of contents**:

- `Imports <#imports>`__
- `Settings <#settings>`__
@@ -1,7 +1,7 @@
Convert a PyTorch Model to ONNX and OpenVINO™ IR
================================================

-.. _top:
+

This tutorial demonstrates step-by-step instructions on how to do
inference on a PyTorch semantic segmentation model, using OpenVINO
@@ -35,6 +35,10 @@ plant, sheep, sofa, train, tv monitor**
More information about the model is available in the `torchvision
documentation <https://pytorch.org/vision/main/models/lraspp.html>`__

+
+
+.. _top:
+
**Table of contents**:

- `Preparation <#preparation>`__
@@ -1,7 +1,7 @@
Convert a PyTorch Model to OpenVINO™ IR
=======================================

-.. _top:
+

This tutorial demonstrates step-by-step instructions on how to do
inference on a PyTorch classification model using OpenVINO Runtime.
@@ -31,6 +31,10 @@ but elevated to the design space level. The RegNet design space provides
simple and fast networks that work well across a wide range of flop
regimes.

+
+
+.. _top:
+
**Table of contents**:

- `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
Convert a PaddlePaddle Model to OpenVINO™ IR
============================================

-.. _top:
+

This notebook shows how to convert a MobileNetV3 model from
`PaddleHub <https://github.com/PaddlePaddle/PaddleHub>`__, pre-trained
@@ -16,6 +16,10 @@ IR model.
Source of the
`model <https://www.paddlepaddle.org.cn/hubdetail?name=mobilenet_v3_large_imagenet_ssld&en_category=ImageClassification>`__.

+
+
+.. _top:
+
**Table of contents**:

- `Preparation <#preparation>`__
@@ -1,13 +1,15 @@
Working with Open Model Zoo Models
==================================

-.. _top:
+

This tutorial shows how to download a model from `Open Model
Zoo <https://github.com/openvinotoolkit/open_model_zoo>`__, convert it
to OpenVINO™ IR format, show information about the model, and benchmark
the model.

+.. _top:
+
**Table of contents**:

- `OpenVINO and Open Model Zoo Tools <#openvino-and-open-model-zoo-tools>`__
@@ -1,7 +1,7 @@
Quantize NLP models with Post-Training Quantization in NNCF
============================================================

-.. _top:
+

This tutorial demonstrates how to apply ``INT8`` quantization to the
Natural Language Processing model known as
@@ -24,6 +24,10 @@ and datasets. It consists of the following steps:
- Compare the performance of the original, converted and quantized
  models.

+
+
+.. _top:
+
**Table of contents**:

- `Imports <#imports>`__
@@ -1,8 +1,6 @@
Automatic Device Selection with OpenVINO™
=========================================

-.. _top:
-
The `Auto
device <https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_AUTO.html>`__
(or AUTO in short) selects the most suitable device for inference by
@@ -32,6 +30,10 @@ first inference.

auto

+
+
+.. _top:
+
**Table of contents**:

- `Import modules and create Core <#import-modules-and-create-core>`__
@@ -1,8 +1,6 @@
Quantize Speech Recognition Models using NNCF PTQ API
=====================================================

-.. _top:
-
This tutorial demonstrates how to use the NNCF (Neural Network
Compression Framework) 8-bit quantization in post-training mode (without
the fine-tuning pipeline) to optimize the speech recognition model,
@@ -21,6 +19,10 @@ steps:
- Compare performance of the original and quantized models.
- Compare Accuracy of the Original and Quantized Models.

+
+
+.. _top:
+
**Table of contents**:

- `Download and prepare model <#download-and-prepare-model>`__
@@ -1,6 +1,8 @@
Working with GPUs in OpenVINO™
==============================

+
+
.. _top:

**Table of contents**:
@@ -1,8 +1,6 @@
Performance tricks in OpenVINO for latency mode
===============================================

-.. _top:
-
The goal of this notebook is to provide a step-by-step tutorial for
improving performance for inferencing in a latency mode. Low latency is
especially desired in real-time applications when the results are needed
@@ -51,6 +49,10 @@ optimize performance on OpenVINO IR files in
A similar notebook focused on the throughput mode is available
`here <109-throughput-tricks-with-output.html>`__.

+
+
+.. _top:
+
**Table of contents**:

- `Data <#data>`__
@@ -1,7 +1,7 @@
Performance tricks in OpenVINO for throughput mode
==================================================

-.. _top:
+

The goal of this notebook is to provide a step-by-step tutorial for
improving performance for inferencing in a throughput mode. High
@@ -46,6 +46,10 @@ optimize performance on OpenVINO IR files in
A similar notebook focused on the latency mode is available
`here <109-latency-tricks-with-output.html>`__.

+
+
+.. _top:
+
**Table of contents**:

- `Data <#data>`__
@@ -1,8 +1,6 @@
Live Inference and Benchmark CT-scan Data with OpenVINO™
========================================================

-.. _top:
-
Kidney Segmentation with PyTorch Lightning and OpenVINO™ - Part 4
-----------------------------------------------------------------

@@ -30,6 +28,10 @@ notebook.
For demonstration purposes, this tutorial will download one converted CT
scan to use for inference.

+
+
+.. _top:
+
**Table of contents**:

- `Imports <#imports>`__
@@ -1,8 +1,6 @@
Quantize a Segmentation Model and Show Live Inference
=====================================================

-.. _top:
-
Kidney Segmentation with PyTorch Lightning and OpenVINO™ - Part 3
-----------------------------------------------------------------

@@ -55,6 +53,10 @@ demonstration purposes, this tutorial will download one converted CT
scan and use that scan for quantization and inference. For production
purposes, use a representative dataset for quantizing the model.

+
+
+.. _top:
+
**Table of contents**:

- `Imports <#imports>`__
@@ -1,8 +1,6 @@
Migrate quantization from POT API to NNCF API
=============================================

-.. _top:
-
This tutorial demonstrates how to migrate quantization pipeline written
using the OpenVINO `Post-Training Optimization Tool (POT) <https://docs.openvino.ai/2023.0/pot_introduction.html>`__ to
`NNCF Post-Training Quantization API <https://docs.openvino.ai/nightly/basic_quantization_flow.html>`__.
@@ -23,6 +21,9 @@ The tutorial consists from the following parts:
7. Compare performance FP32 and INT8 models


+
+.. _top:
+
**Table of contents**:

- `Preparation <#preparation>`__
@@ -1,8 +1,6 @@
Post-Training Quantization of PyTorch models with NNCF
======================================================

-.. _top:
-
The goal of this tutorial is to demonstrate how to use the NNCF (Neural
Network Compression Framework) 8-bit quantization in post-training mode
(without the fine-tuning pipeline) to optimize a PyTorch model for the
@@ -27,6 +25,9 @@ quantization, not demanding the fine-tuning of the model.
notebook.


+
+.. _top:
+
**Table of contents**:

- `Preparations <#preparations>`__
@@ -1,7 +1,7 @@
Quantization of Image Classification Models
===========================================

-.. _top:
+

This tutorial demonstrates how to apply ``INT8`` quantization to Image
Classification model using
@@ -21,6 +21,8 @@ This tutorial consists of the following steps:
- Compare performance of the original and quantized models.
- Compare results on one picture.

+.. _top:
+
**Table of contents**:

- `Prepare the Model <#prepare-the-model>`__
@@ -1,7 +1,7 @@
Asynchronous Inference with OpenVINO™
=====================================

-.. _top:
+

This notebook demonstrates how to use the `Async
API <https://docs.openvino.ai/nightly/openvino_docs_deployment_optimization_guide_common.html>`__
@@ -14,6 +14,8 @@ in parallel (for example, populating inputs or scheduling other
requests) rather than wait for the current inference to complete first.


+.. _top:
+
**Table of contents**:

- `Imports <#imports>`__
@@ -1,7 +1,7 @@
Accelerate Inference of Sparse Transformer Models with OpenVINO™ and 4th Gen Intel® Xeon® Scalable Processors
=============================================================================================================

-.. _top:
+

This tutorial demonstrates how to improve performance of sparse
Transformer models with `OpenVINO <https://docs.openvino.ai/>`__ on 4th
@@ -21,6 +21,8 @@ consists of the following steps:
  integration with Hugging Face Optimum.
- Compare sparse 8-bit vs. dense 8-bit inference performance.

+.. _top:
+
**Table of contents**:

- `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
Hello Model Server
==================

-.. _top:
+

Introduction to OpenVINO™ Model Server (OVMS).

@@ -33,6 +33,8 @@ deployment:

|ovms_diagram|

+.. _top:
+
**Table of contents**:

- `Serving with OpenVINO Model Server <#serving-with-openvino-model-server1>`__
@@ -1,7 +1,7 @@
Optimize Preprocessing
======================

-.. _top:
+

When input data does not fit the model input tensor perfectly,
additional operations/steps are needed to transform the data to the
@@ -27,6 +27,8 @@ This tutorial include following steps:
- Comparing results on one picture.
- Comparing performance.

+.. _top:
+
**Table of contents**:

- `Settings <#settings>`__
@@ -1,7 +1,7 @@
Convert a Tensorflow Lite Model to OpenVINO™
============================================

-.. _top:
+

`TensorFlow Lite <https://www.tensorflow.org/lite/guide>`__, often
referred to as TFLite, is an open source library developed for deploying
@@ -17,6 +17,8 @@ After creating the OpenVINO IR, load the model in `OpenVINO
Runtime <https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html>`__
and do inference with a sample image.

+.. _top:
+
**Table of contents**:

- `Preparation <#preparation>`__
@@ -1,7 +1,7 @@
Convert a TensorFlow Object Detection Model to OpenVINO™
========================================================

-.. _top:
+

`TensorFlow <https://www.tensorflow.org/>`__, or TF for short, is an
open-source framework for machine learning.
@@ -26,6 +26,8 @@ After creating the OpenVINO IR, load the model in `OpenVINO
Runtime <https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html>`__
and do inference with a sample image.

+.. _top:
+
**Table of contents**:

- `Prerequisites <#prerequisites>`__
@@ -4,6 +4,8 @@ OpenVINO™ model conversion API
This notebook shows how to convert a model from original framework
format to OpenVINO Intermediate Representation (IR).

+.. _top:
+
**Table of contents**:

- `OpenVINO IR format <#openvino-ir-format>`__
@@ -0,0 +1,309 @@
Quantize Speech Recognition Models with accuracy control using NNCF PTQ API
===========================================================================

This tutorial demonstrates how to apply ``INT8`` quantization with
accuracy control to the speech recognition model known as
`Wav2Vec2 <https://huggingface.co/docs/transformers/model_doc/wav2vec2>`__,
using the NNCF (Neural Network Compression Framework) 8-bit quantization
with accuracy control in post-training mode (without the fine-tuning
pipeline). This notebook uses a fine-tuned
`Wav2Vec2-Base-960h <https://huggingface.co/facebook/wav2vec2-base-960h>`__
`PyTorch <https://pytorch.org/>`__ model trained on the `LibriSpeech ASR
corpus <https://www.openslr.org/12>`__. The tutorial is designed to be
extendable to custom models and datasets. It consists of the following
steps:

- Download and prepare the Wav2Vec2 model and the LibriSpeech dataset.
- Define data loading and accuracy validation functionality.
- Quantize the model with accuracy control.
- Compare accuracy of the original PyTorch model and the OpenVINO FP16
  and INT8 models.
- Compare performance of the original and quantized models.

The advanced quantization flow allows applying 8-bit quantization to the
model while controlling the accuracy metric. This is achieved by keeping
the most impactful operations within the model in the original precision.
The flow is based on the `Basic 8-bit
quantization <https://docs.openvino.ai/2023.0/basic_quantization_flow.html>`__
and has the following differences (a schematic preview of the
corresponding API call follows the list):

- Besides the calibration dataset, a validation dataset is required to
  compute the accuracy metric. Both datasets can refer to the same data
  in the simplest case.
- A validation function, used to compute the accuracy metric, is
  required. It can be a function that is already available in the source
  framework or a custom function.
- Since accuracy validation is run several times during the quantization
  process, quantization with accuracy control can take more time than
  the Basic 8-bit quantization flow.
- The resulting model can provide a smaller performance improvement than
  the Basic 8-bit quantization flow, because some of the operations are
  kept in the original precision.
.. note::
|
||||||
|
|
||||||
|
Currently, 8-bit quantization with accuracy control in NNCF
|
||||||
|
is available only for models in OpenVINO representation.
|
||||||
|
|
||||||
|
The steps for the quantization with accuracy control are described
|
||||||
|
below.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
.. _top:
|
||||||
|
|
||||||
|
**Table of contents**:
|
||||||
|
|
||||||
|
- `Imports <#imports>`__
|
||||||
|
- `Prepare the Model <#prepare-the-model>`__
|
||||||
|
- `Prepare LibriSpeech Dataset <#prepare-librispeech-dataset>`__
|
||||||
|
- `Prepare calibration and validation datasets <#prepare-calibration-and-validation-datasets>`__
|
||||||
|
- `Prepare validation function <#prepare-validation-function>`__
|
||||||
|
- `Run quantization with accuracy control <#run-quantization-with-accuracy-control>`__
|
||||||
|
- `Model Usage Example <#model-usage-example>`__
|
||||||
|
- `Compare Accuracy of the Original and Quantized Models <#compare-accuracy-of-the-original-and-quantized-models>`__
|
||||||
|
|
||||||
|
|
||||||
|
.. code:: ipython2
|
||||||
|
|
||||||
|
# !pip install -q "openvino-dev>=2023.1.0" "nncf>=2.6.0"
|
||||||
|
!pip install -q "openvino==2023.1.0.dev20230811"
|
||||||
|
!pip install git+https://github.com/openvinotoolkit/nncf.git@develop
|
||||||
|
!pip install -q soundfile librosa transformers torch datasets torchmetrics
|
||||||
|
|
||||||
Imports `⇑ <#top>`__
###############################################################################################################################

.. code:: ipython3

    import numpy as np
    import torch

    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

Prepare the Model `⇑ <#top>`__
###############################################################################################################################

To instantiate the PyTorch model class, use the
``Wav2Vec2ForCTC.from_pretrained`` method and provide the ID of the
model to download from the Hugging Face hub. Model weights and
configuration files are downloaded automatically on first use. Keep in
mind that downloading the files can take several minutes, depending on
your internet connection.

Additionally, we create a processor class that is responsible for
model-specific pre- and post-processing steps.

.. code:: ipython3

    BATCH_SIZE = 1
    MAX_SEQ_LENGTH = 30480

    torch_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h", ctc_loss_reduction="mean")
    processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

Convert it to the OpenVINO Intermediate Representation (OpenVINO IR):

.. code:: ipython3

    import openvino

    default_input = torch.zeros([1, MAX_SEQ_LENGTH], dtype=torch.float)
    ov_model = openvino.convert_model(torch_model, example_input=default_input)

Prepare LibriSpeech Dataset `⇑ <#top>`__
###############################################################################################################################

For demonstration purposes, we will use a short dummy version of the
LibriSpeech dataset, ``patrickvonplaten/librispeech_asr_dummy``, to
speed up model evaluation. The measured accuracy can therefore differ
from the one reported in the paper. To reproduce the original accuracy,
use the ``librispeech_asr`` dataset.

.. code:: ipython3

    from datasets import load_dataset

    dataset = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
    test_sample = dataset[0]["audio"]


    # define preprocessing function for converting audio to input values for model
    def map_to_input(batch):
        preprocessed_signal = processor(batch["audio"]["array"], return_tensors="pt", padding="longest", sampling_rate=batch['audio']['sampling_rate'])
        input_values = preprocessed_signal.input_values
        batch['input_values'] = input_values
        return batch


    # apply preprocessing function to dataset and remove audio column, to save memory as we do not need it anymore
    dataset = dataset.map(map_to_input, batched=False, remove_columns=["audio"])

Prepare calibration dataset `⇑ <#top>`__
###############################################################################################################################

.. code:: ipython3

    import nncf


    def transform_fn(data_item):
        """
        Extract the model's input from the data item.
        The data item here is the data item that is returned from the data source per iteration.
        This function should be passed when the data item cannot be used as model's input.
        """
        return np.array(data_item["input_values"])


    calibration_dataset = nncf.Dataset(dataset, transform_fn)
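As an optional sanity check (not part of the original notebook), you can
apply the transform function to a single sample and confirm that it
yields an array of the shape the model expects:

.. code:: ipython3

    # One preprocessed utterance: shape (1, num_audio_samples).
    sample_input = transform_fn(dataset[0])
    print(sample_input.shape)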
Prepare validation function `⇑ <#top>`__
###############################################################################################################################

Define the validation function.

.. code:: ipython3

    from torchmetrics import WordErrorRate
    from tqdm.notebook import tqdm


    def validation_fn(model, dataset):
        """
        Calculate and return a metric for the model.
        """
        wer = WordErrorRate()
        for sample in tqdm(dataset):
            # run infer function on sample
            output = model.output(0)
            logits = model(np.array(sample['input_values']))[output]
            predicted_ids = np.argmax(logits, axis=-1)
            transcription = processor.batch_decode(torch.from_numpy(predicted_ids))

            # update metric on sample result
            wer.update(transcription, [sample['text']])

        result = wer.compute()

        # NNCF expects a higher-is-better metric, while WER is
        # lower-is-better, so return 1 - WER (word accuracy).
        return 1 - result
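As a quick smoke test (illustrative, not part of the original notebook),
you can evaluate the FP32 model before quantization to establish the
baseline word accuracy:

.. code:: ipython3

    # Compile the unquantized model and run the validation function on it.
    compiled_fp_model = openvino.Core().compile_model(ov_model, "CPU")
    baseline = validation_fn(compiled_fp_model, dataset)
    print(f"Word accuracy (1 - WER): {baseline:.4f}")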
Run quantization with accuracy control `⇑ <#top>`__
###############################################################################################################################

You should provide the calibration dataset and the validation dataset.
They can be the same dataset.

- Parameter ``max_drop`` defines the accuracy drop threshold. The
  quantization process stops when the degradation of the accuracy
  metric on the validation dataset is less than ``max_drop``. The
  default value is 0.01. NNCF will stop the quantization and report an
  error if the ``max_drop`` value cannot be reached.
- ``drop_type`` defines how the accuracy drop is calculated: ABSOLUTE
  (used by default) or RELATIVE.
- ``ranking_subset_size`` is the size of a subset that is used to rank
  layers by their contribution to the accuracy drop. The default value
  is 300; the more samples it contains, the better the ranking,
  potentially. Here we use the value 25 to speed up the execution.

.. note::

   Execution can take tens of minutes and requires up to 10 GB
   of free memory.

.. code:: ipython3

    from nncf.quantization.advanced_parameters import AdvancedAccuracyRestorerParameters
    from nncf.parameters import ModelType

    quantized_model = nncf.quantize_with_accuracy_control(
        ov_model,
        calibration_dataset=calibration_dataset,
        validation_dataset=calibration_dataset,
        validation_fn=validation_fn,
        max_drop=0.01,
        drop_type=nncf.DropType.ABSOLUTE,
        model_type=ModelType.TRANSFORMER,
        advanced_accuracy_restorer_parameters=AdvancedAccuracyRestorerParameters(
            ranking_subset_size=25
        ),
    )
Model Usage Example `⇑ <#top>`__
###############################################################################################################################

.. code:: ipython3

    import IPython.display as ipd

    ipd.Audio(test_sample["array"], rate=16000)

.. code:: ipython3

    core = openvino.Core()

    compiled_quantized_model = core.compile_model(model=quantized_model, device_name='CPU')

    input_data = np.expand_dims(test_sample["array"], axis=0)

Next, make a prediction.

.. code:: ipython3

    predictions = compiled_quantized_model([input_data])[0]
    predicted_ids = np.argmax(predictions, axis=-1)
    transcription = processor.batch_decode(torch.from_numpy(predicted_ids))
    transcription

Compare Accuracy of the Original and Quantized Models `⇑ <#top>`__
###############################################################################################################################

- Define a dataloader for the test dataset.
- Define functions to get inference results for the PyTorch and OpenVINO models.
- Define a function to compute Word Error Rate.

.. code:: ipython3

    # inference function for pytorch
    def torch_infer(model, sample):
        logits = model(torch.Tensor(sample['input_values'])).logits
        # take argmax and decode
        predicted_ids = torch.argmax(logits, dim=-1)
        transcription = processor.batch_decode(predicted_ids)
        return transcription


    # inference function for openvino
    def ov_infer(model, sample):
        output = model.output(0)
        logits = model(np.array(sample['input_values']))[output]
        predicted_ids = np.argmax(logits, axis=-1)
        transcription = processor.batch_decode(torch.from_numpy(predicted_ids))
        return transcription


    def compute_wer(dataset, model, infer_fn):
        wer = WordErrorRate()
        for sample in tqdm(dataset):
            # run infer function on sample
            transcription = infer_fn(model, sample)
            # update metric on sample result
            wer.update(transcription, [sample['text']])
        # finalize metric calculation
        result = wer.compute()
        return result

Now, compute WER for the original PyTorch model and the quantized model.

.. code:: ipython3

    pt_result = compute_wer(dataset, torch_model, torch_infer)
    quantized_result = compute_wer(dataset, compiled_quantized_model, ov_infer)

    print(f'[PyTorch] Word Error Rate: {pt_result:.4f}')
    print(f'[Quantized OpenVINO] Word Error Rate: {quantized_result:.4f}')
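The step list above also promises a performance comparison of the
original and quantized models. A minimal sketch of that step, assuming
the same ``benchmark_app`` approach used in the YOLOv8 notebook below
(the model paths and the input shape are illustrative):

.. code:: ipython3

    from pathlib import Path

    # Save both models so the command-line benchmark_app can load them.
    MODEL_DIR = Path("model")
    MODEL_DIR.mkdir(exist_ok=True)
    fp_model_path = MODEL_DIR / 'wav2vec2.xml'
    int8_model_path = MODEL_DIR / 'wav2vec2_int8.xml'

    openvino.save_model(ov_model, fp_model_path, compress_to_fp16=False)
    openvino.save_model(quantized_model, int8_model_path, compress_to_fp16=False)

.. code:: ipython3

    # Throughput of the original model (OpenVINO IR)
    ! benchmark_app -m $fp_model_path -shape "[1,30480]" -d CPU -api async

.. code:: ipython3

    # Throughput of the quantized model (OpenVINO IR)
    ! benchmark_app -m $int8_model_path -shape "[1,30480]" -d CPU -api async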
@@ -0,0 +1,306 @@
Convert and Optimize YOLOv8 with OpenVINO™
==========================================

The YOLOv8 algorithm developed by Ultralytics is a cutting-edge,
state-of-the-art (SOTA) model that is designed to be fast, accurate, and
easy to use, making it an excellent choice for a wide range of object
detection, image segmentation, and image classification tasks. More
details about its realization can be found in the original model
`repository <https://github.com/ultralytics/ultralytics>`__.

This tutorial demonstrates step-by-step instructions on how to apply
quantization with accuracy control to a PyTorch YOLOv8 model. The
advanced quantization flow allows applying 8-bit quantization to the
model while controlling the accuracy metric. This is achieved by keeping
the most impactful operations within the model in the original
precision. The flow is based on the `Basic 8-bit
quantization <https://docs.openvino.ai/2023.0/basic_quantization_flow.html>`__
and has the following differences:

- Besides the calibration dataset, a validation dataset is required to
  compute the accuracy metric. Both datasets can refer to the same data
  in the simplest case.
- A validation function, used to compute the accuracy metric, is
  required. It can be a function that is already available in the source
  framework or a custom function.
- Since accuracy validation is run several times during the quantization
  process, quantization with accuracy control can take more time than
  the Basic 8-bit quantization flow.
- The resulting model can provide a smaller performance improvement than
  the Basic 8-bit quantization flow, because some of the operations are
  kept in the original precision.

.. note::

   Currently, 8-bit quantization with accuracy control in NNCF
   is available only for models in the OpenVINO representation.

The steps for the quantization with accuracy control are described
below.

The tutorial consists of the following steps:

- `Prerequisites <#prerequisites>`__
- `Get PyTorch model and OpenVINO IR model <#get-pytorch-model-and-openvino-ir-model>`__
- `Define validator and data loader <#define-validator-and-data-loader>`__
- `Prepare calibration and validation datasets <#prepare-calibration-and-validation-datasets>`__
- `Prepare validation function <#prepare-validation-function>`__
- `Run quantization with accuracy control <#run-quantization-with-accuracy-control>`__
- `Compare Accuracy and Performance of the Original and Quantized Models <#compare-accuracy-and-performance-of-the-original-and-quantized-models>`__
Prerequisites `⇑ <#top>`__
###############################################################################################################################

Install necessary packages.

.. code:: ipython3

    !pip install -q "openvino==2023.1.0.dev20230811"
    !pip install git+https://github.com/openvinotoolkit/nncf.git@develop
    !pip install -q "ultralytics==8.0.43"
Get PyTorch model and OpenVINO IR model `⇑ <#top>`__
###############################################################################################################################

Generally, PyTorch models represent an instance of the
`torch.nn.Module <https://pytorch.org/docs/stable/generated/torch.nn.Module.html>`__
class, initialized by a state dictionary with model weights. We will use
the YOLOv8 nano model (also known as ``yolov8n``) pre-trained on a COCO
dataset, which is available in this
`repo <https://github.com/ultralytics/ultralytics>`__. Similar steps are
also applicable to other YOLOv8 models. Typical steps to obtain a
pre-trained model:

1. Create an instance of a model class.
2. Load a checkpoint state dict, which contains the pre-trained model
   weights.

In this case, the creators of the model provide an API that enables
converting the YOLOv8 model to ONNX and then to OpenVINO IR. Therefore,
we do not need to do these steps manually.

.. code:: ipython3

    import os
    from pathlib import Path

    from ultralytics import YOLO
    from ultralytics.yolo.cfg import get_cfg
    from ultralytics.yolo.data.utils import check_det_dataset
    from ultralytics.yolo.engine.validator import BaseValidator as Validator
    from ultralytics.yolo.utils import DATASETS_DIR
    from ultralytics.yolo.utils import DEFAULT_CFG
    from ultralytics.yolo.utils import ops
    from ultralytics.yolo.utils.metrics import ConfusionMatrix

    ROOT = os.path.abspath('')

    MODEL_NAME = "yolov8n-seg"

    model = YOLO(f"{ROOT}/{MODEL_NAME}.pt")
    args = get_cfg(cfg=DEFAULT_CFG)
    args.data = "coco128-seg.yaml"

Load the model.

.. code:: ipython3

    import openvino

    model_path = Path(f"{ROOT}/{MODEL_NAME}_openvino_model/{MODEL_NAME}.xml")
    if not model_path.exists():
        model.export(format="openvino", dynamic=True, half=False)

    ov_model = openvino.Core().read_model(model_path)
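Optionally (an illustrative check, not part of the original notebook),
inspect the inputs and outputs of the loaded model:

.. code:: ipython3

    # The exported segmentation model typically has one input and two
    # outputs (detections and mask prototypes).
    print("inputs:", ov_model.inputs)
    print("outputs:", ov_model.outputs)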
Define validator and data loader `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The original model repository uses a ``Validator`` wrapper, which
represents the accuracy validation pipeline. It creates a dataloader and
evaluation metrics, and updates the metrics on each data batch produced
by the dataloader. Besides that, it is responsible for data
preprocessing and results postprocessing. For class initialization, the
configuration should be provided. We will use the default setup, but it
can be replaced with some parameters overridden to test on custom data.
The model exposes the ``ValidatorClass`` attribute, which creates a
validator class instance.

.. code:: ipython3

    validator = model.ValidatorClass(args)
    validator.data = check_det_dataset(args.data)
    data_loader = validator.get_dataloader(f"{DATASETS_DIR}/coco128-seg", 1)

    validator.is_coco = True
    validator.class_map = ops.coco80_to_coco91_class()
    validator.names = model.model.names
    validator.metrics.names = validator.names
    validator.nc = model.model.model[-1].nc
    validator.nm = 32
    validator.process = ops.process_mask
    validator.plot_masks = []
Prepare calibration and validation datasets `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

We can use one dataset as both the calibration and validation dataset.
Name it ``quantization_dataset``.

.. code:: ipython3

    from typing import Dict

    import nncf


    def transform_fn(data_item: Dict):
        input_tensor = validator.preprocess(data_item)["img"].numpy()
        return input_tensor


    quantization_dataset = nncf.Dataset(data_loader, transform_fn)
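As an optional sanity check (not part of the original notebook), you can
preprocess one batch from the data loader and confirm that the input
tensor shape matches what the exported model expects:

.. code:: ipython3

    # For a dataloader with batch size 1 this prints e.g. (1, 3, 640, 640).
    batch = next(iter(data_loader))
    print(transform_fn(batch).shape)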
Prepare validation function `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

.. code:: ipython3

    from functools import partial

    import torch
    from nncf.quantization.advanced_parameters import AdvancedAccuracyRestorerParameters


    def validation_ac(
        compiled_model: openvino.CompiledModel,
        validation_loader: torch.utils.data.DataLoader,
        validator: Validator,
        num_samples: int = None,
    ) -> float:
        validator.seen = 0
        validator.jdict = []
        validator.stats = []
        validator.batch_i = 1
        validator.confusion_matrix = ConfusionMatrix(nc=validator.nc)
        num_outputs = len(compiled_model.outputs)

        counter = 0
        for batch_i, batch in enumerate(validation_loader):
            if num_samples is not None and batch_i == num_samples:
                break
            batch = validator.preprocess(batch)
            results = compiled_model(batch["img"])
            if num_outputs == 1:
                preds = torch.from_numpy(results[compiled_model.output(0)])
            else:
                preds = [
                    torch.from_numpy(results[compiled_model.output(0)]),
                    torch.from_numpy(results[compiled_model.output(1)]),
                ]
            preds = validator.postprocess(preds)
            validator.update_metrics(preds, batch)
            counter += 1
        stats = validator.get_stats()
        # A detection-only model has a single output and reports box mAP;
        # the segmentation model has two outputs, so report the mask mAP.
        if num_outputs == 1:
            stats_metrics = stats["metrics/mAP50-95(B)"]
        else:
            stats_metrics = stats["metrics/mAP50-95(M)"]
        print(f"Validate: dataset length = {counter}, metric value = {stats_metrics:.3f}")

        return stats_metrics


    validation_fn = partial(validation_ac, validator=validator)
Run quantization with accuracy control `⇑ <#top>`__
###############################################################################################################################

You should provide the calibration dataset and the validation dataset.
They can be the same dataset.

- Parameter ``max_drop`` defines the accuracy drop threshold. The
  quantization process stops when the degradation of the accuracy
  metric on the validation dataset is less than ``max_drop``. The
  default value is 0.01. NNCF will stop the quantization and report an
  error if the ``max_drop`` value cannot be reached.
- ``drop_type`` defines how the accuracy drop is calculated: ABSOLUTE
  (used by default) or RELATIVE.
- ``ranking_subset_size`` is the size of a subset that is used to rank
  layers by their contribution to the accuracy drop. The default value
  is 300; the more samples it contains, the better the ranking,
  potentially. Here we use the value 25 to speed up the execution.

.. note::

   Execution can take tens of minutes and requires up to 15 GB
   of free memory.

.. code:: ipython3

    quantized_model = nncf.quantize_with_accuracy_control(
        ov_model,
        quantization_dataset,
        quantization_dataset,
        validation_fn=validation_fn,
        max_drop=0.01,
        preset=nncf.QuantizationPreset.MIXED,
        advanced_accuracy_restorer_parameters=AdvancedAccuracyRestorerParameters(
            ranking_subset_size=25,
            num_ranking_processes=1
        ),
    )
Compare Accuracy and Performance of the Original and Quantized Models `⇑ <#top>`__
###############################################################################################################################

Now we can compare the metrics of the original non-quantized OpenVINO IR
model and the quantized OpenVINO IR model to make sure that ``max_drop``
is not exceeded.

.. code:: ipython3

    import openvino

    core = openvino.Core()
    quantized_compiled_model = core.compile_model(model=quantized_model, device_name='CPU')
    compiled_ov_model = core.compile_model(model=ov_model, device_name='CPU')

    pt_result = validation_ac(compiled_ov_model, data_loader, validator)
    quantized_result = validation_ac(quantized_compiled_model, data_loader, validator)

    print(f'[Original OpenVINO]: {pt_result:.4f}')
    print(f'[Quantized OpenVINO]: {quantized_result:.4f}')

And compare performance.

.. code:: ipython3

    from pathlib import Path

    # Set model directory
    MODEL_DIR = Path("model")
    MODEL_DIR.mkdir(exist_ok=True)

    ir_model_path = MODEL_DIR / 'ir_model.xml'
    quantized_model_path = MODEL_DIR / 'quantized_model.xml'

    # Save models to use them in the command-line benchmark app
    openvino.save_model(ov_model, ir_model_path, compress_to_fp16=False)
    openvino.save_model(quantized_model, quantized_model_path, compress_to_fp16=False)

.. code:: ipython3

    # Inference of the original model (OpenVINO IR)
    ! benchmark_app -m $ir_model_path -shape "[1,3,640,640]" -d CPU -api async

.. code:: ipython3

    # Inference of the quantized model (OpenVINO IR)
    ! benchmark_app -m $quantized_model_path -shape "[1,3,640,640]" -d CPU -api async
@@ -1,7 +1,7 @@
Monodepth Estimation with OpenVINO
==================================

-.. _top:
+

This tutorial demonstrates Monocular Depth Estimation with MidasNet in
OpenVINO. Model information can be found
@@ -30,6 +30,8 @@ Transfer,” <https://ieeexplore.ieee.org/document/9178977>`__ in IEEE
Transactions on Pattern Analysis and Machine Intelligence, doi:
``10.1109/TPAMI.2020.3019967``.

+.. _top:
+
**Table of contents**:

- `Preparation <#preparation>`__
@@ -1,7 +1,7 @@
Single Image Super Resolution with OpenVINO™
============================================

-.. _top:
+

Super Resolution is the process of enhancing the quality of an image by
increasing the pixel count using deep learning. This notebook shows the
@@ -16,6 +16,8 @@ Resolution,” <https://arxiv.org/abs/1807.06779>`__ 2018 24th
International Conference on Pattern Recognition (ICPR), 2018,
pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.

+.. _top:
+
**Table of contents**:

- `Preparation <#preparation>`__
@@ -1,7 +1,7 @@
 Video Super Resolution with OpenVINO™
 =====================================
 
-.. _top:
+
 
 Super Resolution is the process of enhancing the quality of an image by
 increasing the pixel count using deep learning. This notebook applies
@@ -23,6 +23,8 @@ pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.
 video.
 
 
+.. _top:
+
 **Table of contents**:
 
 - `Preparation <#preparation>`__
@@ -1,7 +1,7 @@
 Industrial Meter Reader
 =======================
 
-.. _top:
+
 
 This notebook shows how to create a industrial meter reader with
 OpenVINO Runtime. We use the pre-trained
@@ -21,6 +21,8 @@ to build up a multiple inference task pipeline:
 
 workflow
 
+.. _top:
+
 **Table of contents**:
 
 - `Import <#import>`__
@@ -1,7 +1,7 @@
 Semantic Segmentation with OpenVINO™ using Segmenter
 ====================================================
 
-.. _top:
+
 
 Semantic segmentation is a difficult computer vision problem with many
 applications such as autonomous driving, robotics, augmented reality,
@@ -28,6 +28,8 @@ paper: `Segmenter: Transformer for Semantic
 Segmentation <https://arxiv.org/abs/2105.05633>`__ or in the
 `repository <https://github.com/rstrudel/segmenter>`__.
 
+.. _top:
+
 **Table of contents**:
 
 - `Get and prepare PyTorch model <#get-and-prepare-pytorch-model>`__
@@ -1,7 +1,7 @@
 Image Background Removal with U^2-Net and OpenVINO™
 ===================================================
 
-.. _top:
+
 
 This notebook demonstrates background removal in images using
 U\ :math:`^2`-Net and OpenVINO.
@@ -17,6 +17,8 @@ The model source is available
 `here <https://github.com/xuebinqin/U-2-Net>`__.
 
 
+.. _top:
+
 **Table of contents**:
 
 - `Preparation <#preparation>`__
@@ -1,7 +1,7 @@
 Photos to Anime with PaddleGAN and OpenVINO
 ===========================================
 
-.. _top:
+
 
 This tutorial demonstrates converting a
 `PaddlePaddle/PaddleGAN <https://github.com/PaddlePaddle/PaddleGAN>`__
@@ -16,6 +16,8 @@ documentation <https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US
 
 anime
 
+.. _top:
+
 **Table of contents**:
 
 - `Preparation <#preparation>`__
@@ -1,7 +1,7 @@
 Super Resolution with PaddleGAN and OpenVINO™
 =============================================
 
-.. _top:
+
 
 This notebook demonstrates converting the RealSR (real-world
 super-resolution) model from
@@ -18,6 +18,8 @@ from CVPR 2020.
 
 This notebook works best with small images (up to 800x600 resolution).
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,7 +1,7 @@
 Optical Character Recognition (OCR) with OpenVINO™
 ==================================================
 
-.. _top:
+
 
 This tutorial demonstrates how to perform optical character recognition
 (OCR) with OpenVINO models. It is a continuation of the
@@ -21,6 +21,8 @@ Zoo <https://github.com/openvinotoolkit/open_model_zoo>`__. For more
 information, refer to the
 `104-model-tools <104-model-tools-with-output.html>`__ tutorial.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,7 +1,7 @@
 Handwritten Chinese and Japanese OCR with OpenVINO™
 ===================================================
 
-.. _top:
+
 
 In this tutorial, we perform optical character recognition (OCR) for
 handwritten Chinese (simplified) and Japanese. An OCR tutorial using the
@@ -19,6 +19,8 @@ and
 `scut_ept <https://github.com/openvinotoolkit/open_model_zoo/blob/master/data/dataset_classes/scut_ept.txt>`__
 charlists are used. Both models are available on `Open Model Zoo <https://github.com/openvinotoolkit/open_model_zoo/>`__.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,7 +1,7 @@
 Video Recognition using SlowFast and OpenVINO™
 ==============================================
 
-.. _top:
+
 
 Teaching machines to detect, understand and analyze the contents of
 images has been one of the more well-known and well-studied problems in
@@ -40,6 +40,8 @@ This tutorial consists of the following steps
 
 .. |image0| image:: https://user-images.githubusercontent.com/34324155/143044111-94676f64-7ba8-4081-9011-f8054bed7030.png
 
+.. _top:
+
 **Table of contents**:
 
 - `Prepare PyTorch Model <#prepare-pytorch-model>`__
@@ -1,7 +1,7 @@
 Speech to Text with OpenVINO™
 =============================
 
-.. _top:
+
 
 This tutorial demonstrates speech-to-text recognition with OpenVINO.
 
@@ -13,6 +13,8 @@ with Connectionist Temporal Classification (CTC) loss. The model is
 available from `Open Model
 Zoo <https://github.com/openvinotoolkit/open_model_zoo/>`__.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,7 +1,7 @@
 Speaker diarization
 ===================
 
-.. _top:
+
 
 Speaker diarization is the process of partitioning an audio stream
 containing human speech into homogeneous segments according to the
@@ -39,6 +39,8 @@ card <https://huggingface.co/pyannote/speaker-diarization>`__,
 `repo <https://github.com/pyannote/pyannote-audio>`__ and
 `paper <https://arxiv.org/abs/1911.01255>`__.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Interactive question answering with OpenVINO™
 =============================================
 
-.. _top:
+
 
 This demo shows interactive question answering with OpenVINO, using
 `small BERT-large-like
@@ -11,6 +11,8 @@ larger BERT-large model. The model comes from `Open Model
 Zoo <https://github.com/openvinotoolkit/open_model_zoo/>`__. Final part
 of this notebook provides live inference results from your inputs.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,7 +1,7 @@
 Grammatical Error Correction with OpenVINO
 ==========================================
 
-.. _top:
+
 
 AI-based auto-correction products are becoming increasingly popular due
 to their ease of use, editing speed, and affordability. These products
@@ -43,6 +43,8 @@ It consists of the following steps:
 Optimum <https://huggingface.co/blog/openvino>`__.
 - Create an inference pipeline for grammatical error checking
 
+.. _top:
+
 **Table of contents**:
 
 - `How does it work? <#how-does-it-work>`__
@@ -1,7 +1,7 @@
 Image In-painting with OpenVINO™
 --------------------------------
 
-.. _top:
+
 
 This notebook demonstrates how to use an image in-painting model with
 OpenVINO, using `GMCNN
@@ -11,6 +11,8 @@ given a tampered image, is able to create something very similar to the
 original image. The Following pipeline will be used in this notebook.
 |pipeline|
 
+.. _top:
+
 **Table of contents**:
 
 - `Download the Model <#download-the-model>`__
@@ -1,7 +1,7 @@
 The attention center model with OpenVINO™
 =========================================
 
-.. _top:
+
 
 This notebook demonstrates how to use the `attention center
 model <https://github.com/google/attention-center/tree/main>`__ with
@@ -51,6 +51,8 @@ The attention center model has been trained with images from the `COCO
 dataset <https://cocodataset.org/#home>`__ annotated with saliency from
 the `SALICON dataset <http://salicon.net/>`__.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,6 +1,8 @@
 Deblur Photos with DeblurGAN-v2 and OpenVINO™
 =============================================
 
+
+
 .. _top:
 
 **Table of contents**:
@@ -1,7 +1,7 @@
 Vehicle Detection And Recognition with OpenVINO™
 ================================================
 
-.. _top:
+
 
 This tutorial demonstrates how to use two pre-trained models from `Open
 Model Zoo <https://github.com/openvinotoolkit/open_model_zoo>`__:
@@ -19,6 +19,8 @@ As a result, you can get:
 
 result
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,7 +1,7 @@
 OpenVINO optimizations for Knowledge graphs
 ===========================================
 
-.. _top:
+
 
 The goal of this notebook is to showcase performance optimizations for
 the ConvE knowledge graph embeddings model using the Intel® Distribution
@@ -18,6 +18,8 @@ The ConvE model is an implementation of the paper -
 sample dataset can be downloaded from:
 https://github.com/TimDettmers/ConvE/tree/master/countries/countries_S1
 
+.. _top:
+
 **Table of contents**:
 
 - `Windows specific settings <#windows-specific-settings>`__
@@ -1,7 +1,7 @@
 Cross-lingual Books Alignment with Transformers and OpenVINO™
 =============================================================
 
-.. _top:
+
 
 Cross-lingual text alignment is the task of matching sentences in a pair
 of texts that are translations of each other. In this notebook, you’ll
@@ -39,6 +39,8 @@ Prerequisites
 - ``seaborn`` - for alignment matrix visualization
 - ``ipywidgets`` - for displaying HTML and JS output in the notebook
 
+.. _top:
+
 **Table of contents**:
 
 - `Get Books <#get-books>`__
@@ -1,7 +1,7 @@
 Machine translation demo
 ========================
 
-.. _top:
+
 
 This demo utilizes Intel’s pre-trained model that translates from
 English to German. More information about the model can be found
@@ -18,6 +18,8 @@ following structure: ``<s>`` + *tokenized sentence* + ``<s>`` +
 **Output** After the inference, we have a sequence of up to 200 tokens.
 The structure is the same as the one for the input.
 
+.. _top:
+
 **Table of contents**:
 
 - `Downloading model <#downloading-model>`__
@@ -1,7 +1,7 @@
 Image Colorization with OpenVINO
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. _top:
+
 
 This notebook demonstrates how to colorize images with OpenVINO using
 the Colorization model
@@ -44,6 +44,8 @@ About Colorization-siggraph
 See the `colorization <https://github.com/richzhang/colorization>`__
 repository for more details.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,7 +1,7 @@
 Text Prediction with OpenVINO™
 ==============================
 
-.. _top:
+
 
 This notebook shows text prediction with OpenVINO. This notebook can
 work in two different modes, Text Generation and Conversation, which the
@@ -73,6 +73,8 @@ above. The Generated response is added to the history with the
 and the sequence is passed back into the model.
 
 
+.. _top:
+
 **Table of contents**:
 
 - `Model Selection <#model-selection>`__
@@ -1,7 +1,7 @@
 Part Segmentation of 3D Point Clouds with OpenVINO™
 ===================================================
 
-.. _top:
+
 
 This notebook demonstrates how to process `point
 cloud <https://en.wikipedia.org/wiki/Point_cloud>`__ data and run 3D
@@ -24,6 +24,8 @@ segmentation, to scene semantic parsing. It is highly efficient and
 effective, showing strong performance on par or even better than state
 of the art.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,7 +1,7 @@
 Text-to-Image Generation with Stable Diffusion and OpenVINO™
 ============================================================
 
-.. _top:
+
 
 Stable Diffusion is a text-to-image latent diffusion model created by
 the researchers and engineers from
@@ -41,6 +41,8 @@ Notebook contains the following steps:
 API.
 3. Run Stable Diffusion pipeline with OpenVINO.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Convert and Optimize YOLOv7 with OpenVINO™
 ==========================================
 
-.. _top:
+
 
 The YOLOv7 algorithm is making big waves in the computer vision and
 machine learning communities. It is a real-time object detection
@@ -40,6 +40,8 @@ The tutorial consists of the following steps:
 - Compare accuracy of the FP32 and quantized models.
 - Compare performance of the FP32 and quantized models.
 
+.. _top:
+
 **Table of contents**:
 
 - `Get Pytorch model <#get-pytorch-model>`__
@@ -1,7 +1,7 @@
 Video Subtitle Generation using Whisper and OpenVINO™
 =====================================================
 
-.. _top:
+
 
 `Whisper <https://openai.com/blog/whisper/>`__ is an automatic speech
 recognition (ASR) system trained on 680,000 hours of multilingual and
@@ -26,6 +26,8 @@ Download the model. 2. Instantiate the PyTorch model pipeline. 3. Export
 the ONNX model and convert it to OpenVINO IR, using model conversion
 API. 4. Run the Whisper pipeline with OpenVINO models.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Zero-shot Image Classification with OpenAI CLIP and OpenVINO™
 =============================================================
 
-.. _top:
+
 
 Zero-shot image classification is a computer vision task to classify
 images into one of several classes without any prior training or
@@ -30,6 +30,8 @@ image classification. The notebook contains the following steps:
 conversion API.
 4. Run CLIP with OpenVINO.
 
+.. _top:
+
 **Table of contents**:
 
 - `Instantiate model <#instantiate-model>`__
@@ -1,7 +1,7 @@
 Post-Training Quantization of OpenAI CLIP model with NNCF
 =========================================================
 
-.. _top:
+
 
 The goal of this tutorial is to demonstrate how to speed up the model by
 applying 8-bit post-training quantization from
@@ -23,6 +23,8 @@ The optimization process contains the following steps:
 notebook first to generate OpenVINO IR model that is used for
 quantization.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Sentiment Analysis with OpenVINO™
 =================================
 
-.. _top:
+
 
 **Sentiment analysis** is the use of natural language processing, text
 analysis, computational linguistics, and biometrics to systematically
@@ -9,6 +9,8 @@ identify, extract, quantify, and study affective states and subjective
 information. This notebook demonstrates how to convert and run a
 sequence classification model using OpenVINO.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
@@ -1,7 +1,7 @@
 Convert and Optimize YOLOv8 with OpenVINO™
 ==========================================
 
-.. _top:
+
 
 The YOLOv8 algorithm developed by Ultralytics is a cutting-edge,
 state-of-the-art (SOTA) model that is designed to be fast, accurate, and
@@ -39,6 +39,8 @@ The tutorial consists of the following steps:
 - Compare performance of the FP32 and quantized models.
 - Compare accuracy of the FP32 and quantized models.
 
+.. _top:
+
 **Table of contents**:
 
 - `Get Pytorch model <#get-pytorch-model>`__
@@ -1,7 +1,7 @@
 Image Editing with InstructPix2Pix and OpenVINO
 ===============================================
 
-.. _top:
+
 
 The InstructPix2Pix is a conditional diffusion model that edits images
 based on written instructions provided by the user. Generative image
@@ -31,6 +31,8 @@ Notebook contains the following steps:
 3. Run InstructPix2Pix pipeline with OpenVINO.
 
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Visual Question Answering and Image Captioning using BLIP and OpenVINO
 ======================================================================
 
-.. _top:
+
 
 Humans perceive the world through vision and language. A longtime goal
 of AI is to build intelligent agents that can understand the world
@@ -24,6 +24,8 @@ The tutorial consists of the following parts:
 2. Convert the BLIP model to OpenVINO IR.
 3. Run visual question answering and image captioning with OpenVINO.
 
+.. _top:
+
 **Table of contents**:
 
 - `Background <#background>`__
@@ -1,7 +1,7 @@
 Audio compression with EnCodec and OpenVINO
 ===========================================
 
-.. _top:
+
 
 Compression is an important part of the Internet today because it
 enables people to easily share high-quality photos, listen to audio
@@ -28,6 +28,8 @@ and original `repo <https://github.com/facebookresearch/encodec>`__.
 
 image.png
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Text-to-Image Generation with ControlNet Conditioning
 =====================================================
 
-.. _top:
+
 
 Diffusion models make a revolution in AI-generated art. This technology
 enables creation of high-quality images simply by writing a text prompt.
@@ -141,6 +141,8 @@ of the target in the image:
 This tutorial focuses mainly on conditioning by pose. However, the
 discussed steps are also applicable to other annotation modes.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Infinite Zoom Stable Diffusion v2 and OpenVINO™
 ===============================================
 
-.. _top:
+
 
 Stable Diffusion v2 is the next generation of Stable Diffusion model a
 Text-to-Image latent diffusion model created by the researchers and
@@ -74,6 +74,8 @@ Notebook contains the following steps:
 3. Run Stable Diffusion v2 inpainting pipeline for generation infinity
 zoom video
 
+.. _top:
+
 **Table of contents**:
 
 - `Stable Diffusion v2 Infinite Zoom Showcase <#stable-diffusion-v2-infinite-zoom-showcase>`__
@@ -1,10 +1,12 @@
 Stable Diffusion v2.1 using Optimum-Intel OpenVINO and multiple Intel Hardware
 ==============================================================================
 
-.. _top:
+
 
 |image0|
 
+.. _top:
+
 **Table of contents**:
 
 - `Showing Info Available Devices <#showing-info-available-devices>`__
@@ -1,10 +1,12 @@
 Stable Diffusion v2.1 using Optimum-Intel OpenVINO
 ==================================================
 
-.. _top:
+
 
 |image0|
 
+.. _top:
+
 **Table of contents**:
 
 - `Showing Info Available Devices <#showing-info-available-devices>`__
@@ -1,7 +1,7 @@
 Stable Diffusion Text-to-Image Demo
 ===================================
 
-.. _top:
+
 
 Stable Diffusion is an innovative generative AI technique that allows us
 to generate and manipulate images in interesting ways, including
@@ -26,6 +26,8 @@ promising results for selecting a wide range of input text prompts!
 `236-stable-diffusion-v2-text-to-image <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/236-stable-diffusion-v2/236-stable-diffusion-v2-text-to-image.ipynb>`__.
 
 
+.. _top:
+
 **Table of contents**:
 
 - `Step 0: Install and import prerequisites <#step-0-install-and-import-prerequisites>`__
@@ -1,7 +1,7 @@
 Text-to-Image Generation with Stable Diffusion v2 and OpenVINO™
 ===============================================================
 
-.. _top:
+
 
 Stable Diffusion v2 is the next generation of Stable Diffusion model a
 Text-to-Image latent diffusion model created by the researchers and
@@ -81,6 +81,8 @@ Notebook contains the following steps:
 notebook <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/236-stable-diffusion-v2/236-stable-diffusion-v2-text-to-image-demo.ipynb>`__.
 
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,6 +1,8 @@
 Object masks from prompts with SAM and OpenVINO
 ===============================================
 
+
+
 .. _top:
 
 **Table of contents**:
@@ -1,8 +1,6 @@
 Image generation with DeepFloyd IF and OpenVINO™
 ================================================
 
-.. _top:
-
 DeepFloyd IF is an advanced open-source text-to-image model that
 delivers remarkable photorealism and language comprehension. DeepFloyd
 IF consists of a frozen text encoder and three cascaded pixel diffusion
@@ -78,6 +76,10 @@ vector in embedded space.
 conventional Super Resolution network to get hi-res results.
 
 
+
+
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Binding multimodal data using ImageBind and OpenVINO
 ====================================================
 
-.. _top:
+
 
 Exploring the surrounding world, people get information using multiple
 senses, for example, seeing a busy street and hearing the sounds of car
@@ -69,6 +69,8 @@ represented on the image below:
 In this tutorial, we consider how to use ImageBind for multimodal
 zero-shot classification.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Instruction following using Databricks Dolly 2.0 and OpenVINO
 =============================================================
 
-.. _top:
+
 
 The instruction following is one of the cornerstones of the current
 generation of large language models(LLMs). Reinforcement learning with
@@ -82,6 +82,8 @@ post <https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-v
 and `repo <https://github.com/databrickslabs/dolly>`__
 
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Text-to-Music generation using Riffusion and OpenVINO
 =====================================================
 
-.. _top:
+
 
 `Riffusion <https://huggingface.co/riffusion/riffusion-model-v1>`__ is a
 latent text-to-image diffusion model capable of generating spectrogram
@@ -76,6 +76,8 @@ The STFT is invertible, so the original audio can be reconstructed from
 a spectrogram. This idea is a behind approach to using Riffusion for
 audio generation.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 High-Quality Text-Free One-Shot Voice Conversion with FreeVC and OpenVINO™
 ==========================================================================
 
-.. _top:
+
 
 `FreeVC <https://github.com/OlaWod/FreeVC>`__ allows alter the voice of
 a source speaker to a target style, while keeping the linguistic content
@@ -30,6 +30,8 @@ devices. It consists of the following steps:
 - Convert models to OpenVINO Intermediate Representation.
 - Inference using only OpenVINO’s IR models.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Selfie Segmentation using TFLite and OpenVINO
 =============================================
 
-.. _top:
+
 
 The Selfie segmentation pipeline allows developers to easily separate
 the background from users within a scene and focus on what matters.
@@ -36,6 +36,8 @@ The tutorial consists of following steps:
 2. Run inference on the image.
 3. Run interactive background blurring demo on video.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Named entity recognition with OpenVINO™
 =======================================
 
-.. _top:
+
 
 The Named Entity Recognition(NER) is a natural language processing
 method that involves the detecting of key information in the
@@ -27,6 +27,8 @@ To simplify the user experience, the `Hugging Face
 Optimum <https://huggingface.co/docs/optimum>`__ library is used to
 convert the model to OpenVINO™ IR format and quantize it.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Image generation with Stable Diffusion XL and OpenVINO
 ======================================================
 
-.. _top:
+
 
 Stable Diffusion XL or SDXL is the latest image generation model that is
 tailored towards more photorealistic outputs with more detailed imagery
@@ -67,6 +67,8 @@ The tutorial consists of the following steps:
 Some demonstrated models can require at least 64GB RAM for
 conversion and running.
 
+.. _top:
+
 **Table of contents**:
 
 - `Install Prerequisites <#install-prerequisites>`__
@@ -1,7 +1,7 @@
 Controllable Music Generation with MusicGen and OpenVINO
 ========================================================
 
-.. _top:
+
 
 MusicGen is a single-stage auto-regressive Transformer model capable of
 generating high-quality music samples conditioned on text descriptions
@@ -32,6 +32,8 @@ We will use a model implementation from the `Hugging Face
 Transformers <https://huggingface.co/docs/transformers/index>`__
 library.
 
+.. _top:
+
 **Table of contents**:
 
 - `Requirements and Imports <#prerequisites>`__
@@ -1,7 +1,7 @@
 Image Generation with Tiny-SD and OpenVINO™
 ===========================================
 
-.. _top:
+
 
 In recent times, the AI community has witnessed a remarkable surge in
 the development of larger and more performant language models, such as
@@ -41,7 +41,9 @@ The notebook contains the following steps:
 3. Run Inference pipeline with OpenVINO.
 4. Run Interactive demo for Tiny-SD model
 
-**Table of content**:
+.. _toc:
+
+**Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
 - `Create PyTorch Models pipeline <#create-pytorch-models-pipeline>`__
@@ -1,7 +1,7 @@
 `FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention <https://fastcomposer.mit.edu/>`__
 =====================================================================================================================
 
-.. _top:
+
 
 FastComposer uses subject embeddings extracted by an image encoder to
 augment the generic text conditioning in diffusion models, enabling
@@ -32,6 +32,8 @@ different styles, actions, and contexts.
 drivers in the system - changes to have compatibility with
 transformers >= 4.30.1 (due to security vulnerability)
 
+.. _top:
+
 **Table of contents**:
 
 - `Install Prerequisites <#install-prerequisites>`__
@@ -1,7 +1,7 @@
 Video generation with ZeroScope and OpenVINO
 ============================================
 
-.. _top:
+
 
 The ZeroScope model is a free and open-source text-to-video model that
 can generate realistic and engaging videos from text descriptions. It is
@@ -34,6 +34,8 @@ Both versions of the ZeroScope model are available on Hugging Face:
 
 We will use the first one.
 
+.. _top:
+
 **Table of contents**:
 
 - `Install and import required packages <#install-and-import-required-packages>`__
@@ -11,6 +11,8 @@ A custom dataloader and metric will be defined, and accuracy and
 performance will be computed for the original IR model and the quantized
 model.
 
+.. _top:
+
 **Table of contents**:
 
 - `Preparation <#preparation>`__
@@ -1,6 +1,8 @@
 From Training to Deployment with TensorFlow and OpenVINO™
 =========================================================
 
+
+
 .. _top:
 
 **Table of contents**:
@@ -1,7 +1,7 @@
 Quantization Aware Training with NNCF, using PyTorch framework
 ==============================================================
 
-.. _top:
+
 
 This notebook is based on `ImageNet training in
 PyTorch <https://github.com/pytorch/examples/blob/master/imagenet/main.py>`__.
@@ -34,6 +34,8 @@ hub <https://pytorch.org/hub/pytorch_vision_resnet/>`__.
 This notebook requires a C++ compiler.
 
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports and Settings <#imports-and-settings>`__
@@ -1,7 +1,7 @@
 Quantization Aware Training with NNCF, using TensorFlow Framework
 =================================================================
 
-.. _top:
+
 
 The goal of this notebook to demonstrate how to use the Neural Network
 Compression Framework `NNCF <https://github.com/openvinotoolkit/nncf>`__
@@ -23,6 +23,8 @@ Imagenette is a subset of 10 easily classified classes from the ImageNet
 dataset. Using the smaller model and dataset will speed up training and
 download time.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports and Settings <#imports-and-settings>`__
@@ -1,7 +1,7 @@
 Live Object Detection with OpenVINO™
 ====================================
 
-.. _top:
+
 
 This notebook demonstrates live object detection with OpenVINO, using
 the `SSDLite
@@ -17,6 +17,8 @@ Additionally, you can also upload a video file.
 with a webcam. If you run the notebook on a server, the webcam will not work.
 However, you can still do inference on a video.
 
+.. _top:
+
 **Table of contents**:
 
 - `Preparation <#preparation>`__
@@ -1,7 +1,7 @@
 Live Human Pose Estimation with OpenVINO™
 =========================================
 
-.. _top:
+
 
 This notebook demonstrates live pose estimation with OpenVINO, using the
 OpenPose
@@ -18,6 +18,8 @@ Additionally, you can also upload a video file.
 work. However, you can still do inference on a video in the final
 step.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
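
The full notebook decodes OpenPose part affinity fields to group keypoints across multiple people. As a taste of the data involved, here is a deliberately simplified single-person decode that just takes per-channel heatmap peaks; the model path and the choice of output are assumptions:

.. code-block:: python

   import numpy as np
   from openvino.runtime import Core

   core = Core()
   compiled = core.compile_model(core.read_model("human-pose-estimation.xml"), "CPU")
   heatmap_out = compiled.output(0)  # assumed: this output holds keypoint heatmaps

   def rough_keypoints(blob):
       """One (x, y) peak per heatmap channel; ignores multi-person grouping."""
       heatmaps = compiled([blob])[heatmap_out][0]  # shape (K, H, W)
       return [(int(np.argmax(ch) % ch.shape[1]), int(np.argmax(ch) // ch.shape[1]))
               for ch in heatmaps]
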
@@ -1,7 +1,7 @@
 Human Action Recognition with OpenVINO™
 =======================================
 
-.. _top:
+
 
 This notebook demonstrates live human action recognition with OpenVINO,
 using the `Action Recognition
@@ -39,6 +39,8 @@ Transformer <https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)>
 and
 `ResNet34 <https://pytorch.org/vision/main/models/generated/torchvision.models.resnet34.html>`__.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
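
The encoder/decoder split mentioned above works roughly as sketched here: the encoder embeds each frame, and the decoder classifies a sliding window of embeddings. The model file names, the 16-frame window, and the tensor shapes are assumptions for illustration:

.. code-block:: python

   from collections import deque
   import numpy as np
   from openvino.runtime import Core

   core = Core()
   encoder = core.compile_model(core.read_model("encoder.xml"), "CPU")
   decoder = core.compile_model(core.read_model("decoder.xml"), "CPU")

   window = deque(maxlen=16)  # the decoder consumes a fixed-length clip

   def classify(frame_blob):
       embedding = encoder([frame_blob])[encoder.output(0)]  # assumed shape (1, D)
       window.append(embedding)
       if len(window) < window.maxlen:
           return None  # not enough temporal context yet
       clip = np.concatenate(list(window), axis=0)[np.newaxis, ...]  # (1, 16, D)
       logits = decoder([clip])[decoder.output(0)]
       return int(np.argmax(logits))
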
@@ -1,7 +1,7 @@
 Style Transfer with OpenVINO™
 =============================
 
-.. _top:
+
 
 This notebook demonstrates style transfer with OpenVINO, using the Style
 Transfer Models from `ONNX Model
@@ -32,6 +32,8 @@ Additionally, you can also upload a video file.
 but you can run inference using a video file.
 
 
+.. _top:
+
 **Table of contents**:
 
 - `Preparation <#preparation>`__
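
One detail worth noting from the text above: because these style models come from the ONNX Model Zoo, OpenVINO can read the ``.onnx`` file directly, with no separate conversion step. A sketch with an assumed file name and input size:

.. code-block:: python

   import cv2
   import numpy as np
   from openvino.runtime import Core

   core = Core()
   compiled = core.compile_model(core.read_model("mosaic-9.onnx"), "CPU")

   frame = cv2.imread("input.jpg")
   blob = cv2.resize(frame, (224, 224)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
   stylized = compiled([blob])[compiled.output(0)]  # NCHW float image
   out = np.clip(stylized[0].transpose(1, 2, 0), 0, 255).astype(np.uint8)
   cv2.imwrite("stylized.jpg", out)
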
@@ -1,7 +1,7 @@
 PaddleOCR with OpenVINO™
 ========================
 
-.. _top:
+
 
 This demo shows how to run the PP-OCR model on OpenVINO natively. Instead of
 exporting the PaddlePaddle model to ONNX and then converting to the
@@ -25,6 +25,8 @@ the PaddleOCR is as follows:
 with a webcam. If you run the notebook on a server, the webcam will not work.
 You can still do inference on a video file.
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
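
"Natively" in the text above means OpenVINO's PaddlePaddle frontend reads the ``.pdmodel`` file directly, skipping the ONNX detour. A minimal sketch; the path is a placeholder:

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   # read_model accepts the PaddlePaddle format directly, same as IR or ONNX.
   det_model = core.read_model("ch_PP-OCRv3_det_infer/inference.pdmodel")
   compiled = core.compile_model(det_model, "CPU")
   print(compiled.inputs, compiled.outputs)
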
@@ -1,7 +1,7 @@
 Live 3D Human Pose Estimation with OpenVINO
 ===========================================
 
-.. _top:
+
 
 This notebook demonstrates live 3D Human Pose Estimation with OpenVINO
 via a webcam. We utilize the model
@@ -30,6 +30,8 @@ To ensure that the results are displayed correctly, run the code in a
 recommended browser on one of the following operating systems: Ubuntu,
 Windows: Chrome, macOS: Safari.
 
+.. _top:
+
 **Table of contents**:
 
 - `Prerequisites <#prerequisites>`__
@@ -1,7 +1,7 @@
 Person Tracking with OpenVINO™
 ==============================
 
-.. _top:
+
 
 This notebook demonstrates live person tracking with OpenVINO: it reads
 frames from an input video sequence, detects people in the frames,
@@ -95,6 +95,8 @@ realtime tracking,” in ICIP, 2016, pp. 3464–3468.
 
 .. |deepsort| image:: https://user-images.githubusercontent.com/91237924/221744683-0042eff8-2c41-43b8-b3ad-b5929bafb60b.png
 
+.. _top:
+
 **Table of contents**:
 
 - `Imports <#imports>`__
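
The detect-then-track structure this notebook implements, in outline. The ``Tracker`` class below is a stand-in for its Deep SORT-style tracker, and the detector path, input size, and threshold are assumptions rather than the notebook's API:

.. code-block:: python

   import cv2
   import numpy as np
   from openvino.runtime import Core

   class Tracker:
       """Placeholder: match detections to existing tracks (IoU and/or appearance)."""
       def update(self, boxes):
           return list(enumerate(boxes))  # fake but stable IDs for illustration

   core = Core()
   detector = core.compile_model(core.read_model("person-detection.xml"), "CPU")
   tracker = Tracker()

   cap = cv2.VideoCapture("people.mp4")
   while cap.isOpened():
       ok, frame = cap.read()
       if not ok:
           break
       blob = cv2.resize(frame, (544, 320)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
       dets = detector([blob])[detector.output(0)].reshape(-1, 7)
       boxes = [d[3:7] for d in dets if d[2] > 0.5]
       for track_id, box in tracker.update(boxes):
           pass  # draw the box and its track ID on the frame here
   cap.release()
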
@@ -131,6 +131,15 @@ Tutorials that explain how to optimize and quantize models with OpenVINO tools.
 +----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
 | `120-tensorflow-object-detection-to-openvino <notebooks/120-tensorflow-object-detection-to-openvino-with-output.html>`__ |br| |n120| |br| |c120|    | Convert TensorFlow Object Detection models to OpenVINO IR                                                                         |
 +----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
+| `122-speech-recognition-quantization-wav2vec2 <notebooks/122-speech-recognition-quantization-wav2vec2-with-output.html>`__                          | Quantize Speech Recognition Models with accuracy control using NNCF PTQ API.                                                      |
++----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
+| `122-yolov8-quantization-with-accuracy-control <notebooks/122-yolov8-quantization-with-accuracy-control-with-output.html>`__                        | Convert and Optimize YOLOv8 with OpenVINO™.                                                                                       |
++----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
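
The two new 122-* rows are the point of this commit: they document NNCF's accuracy-aware post-training quantization. A minimal sketch of that entry point, assuming NNCF >= 2.5, with random data standing in for real calibration and validation sets:

.. code-block:: python

   import numpy as np
   import nncf
   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")

   # Stand-in data; the notebooks feed preprocessed samples from the target dataset.
   samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)]
   calibration = nncf.Dataset(samples)
   validation = nncf.Dataset(samples)

   def validate(compiled_model, dataset) -> float:
       # Return the metric NNCF must protect (e.g. accuracy, mAP, or 1 - WER).
       return 1.0  # stub

   quantized = nncf.quantize_with_accuracy_control(
       model,
       calibration_dataset=calibration,
       validation_dataset=validation,
       validation_fn=validate,
       max_drop=0.01,  # tolerate at most a 0.01 drop in the metric
   )
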

Model Demos