update-notebooks (#19450)

Add notebook 252-fastcomposer-image-generation. Fix indentation, admonitions, broken links and images.
Sebastian Golebiewski 2023-08-28 14:23:23 +02:00, committed by GitHub
parent 94c21b53b3
commit f306007e59
57 changed files with 1540 additions and 253 deletions

View File

@@ -292,13 +292,15 @@ TensorFlow Model

TensorFlow models saved in frozen graph format can also be passed to
``read_model`` starting in OpenVINO 2022.3.

.. note::

   Directly loading TensorFlow models is available as a
   preview feature in the OpenVINO 2022.3 release. Fully functional
   support will be provided in the upcoming 2023 releases. Currently,
   support is limited to the frozen graph inference format only. Other
   TensorFlow model formats must be converted to OpenVINO IR using
   `model conversion API <https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html>`__.

.. code:: ipython3
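    # illustrative sketch only (not the notebook's original cell): reading a
    # TensorFlow frozen graph directly; "model.pb" is a placeholder path
    from openvino.runtime import Core

    core = Core()
    model_tf = core.read_model(model="model.pb")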
@@ -563,9 +565,11 @@ classes (``C``). The output is returned as 32-bit floating point.

Doing Inference on a Model
--------------------------

.. note::

   This notebook demonstrates only the basic synchronous
   inference API. For an async inference example, please refer to the
   `Async API notebook <115-async-api-with-output.html>`__.

The diagram below shows a typical inference pipeline with OpenVINO
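As a quick illustration, a basic synchronous call (assuming ``compiled_model``
and ``input_data`` come from earlier cells) looks roughly like:

.. code:: python

    # the call blocks until inference completes and returns the outputs
    result = compiled_model([input_data])[compiled_model.output(0)]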
@@ -926,7 +930,9 @@ model will be loaded to the GPU. After running this cell once, the model
will be cached, so subsequent runs of this cell will load the model from
the cache.

.. note::

   Model Caching is also available on CPU devices.

.. code:: ipython3
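    # illustrative sketch only: enabling the model cache explicitly;
    # the directory name and model path are placeholders
    from openvino.runtime import Core

    core = Core()
    core.set_property({"CACHE_DIR": "cache"})
    compiled_model = core.compile_model("model.xml", "GPU")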

View File

@@ -237,14 +237,17 @@ Optimizer Python API should be used for these purposes. More details
regarding PyTorch model conversion can be found in OpenVINO
`documentation <https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch.html>`__

.. note::

   Direct PyTorch model conversion is an experimental feature.
   Model coverage will be increased in the next releases. In cases where
   PyTorch model conversion fails, you can still try to export the model
   to ONNX format. Please refer to this
   `tutorial <102-pytorch-to-openvino-with-output.html>`__,
   which explains how to convert a PyTorch model to ONNX, and then to OpenVINO.

The ``convert_model`` function accepts the PyTorch model object and
returns the ``openvino.runtime.Model`` instance ready to load on a
device using ``core.compile_model`` or save on disk for next usage using
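A rough sketch of this flow, where ``torch_model`` stands for any
``torch.nn.Module`` and the input shape is assumed:

.. code:: python

    import torch
    from openvino.tools import mo
    from openvino.runtime import Core, serialize

    # convert the PyTorch model; example_input helps tracing
    ov_model = mo.convert_model(torch_model, example_input=torch.zeros(1, 3, 224, 224))
    compiled = Core().compile_model(ov_model, "CPU")  # load on a device
    serialize(ov_model, "model.xml")                  # or save for later use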
@@ -501,8 +504,8 @@ Run OpenVINO Model Inference with Static Input Shape `⇑ <#top>`__

    5: hamper - 2.35%

Benchmark OpenVINO Model Inference with Static Input Shape `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

.. code:: ipython3
@@ -645,8 +648,9 @@ OpenVINO IR is similar to the original PyTorch model.

    5: hamper - 2.35%

Benchmark OpenVINO Model Inference Converted From Scripted Model `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

.. code:: ipython3
@@ -772,8 +776,8 @@ similar to the original PyTorch model.

    5: hamper - 2.35%

Benchmark OpenVINO Model Inference Converted From Traced Model `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

.. code:: ipython3

View File

@@ -18,7 +18,7 @@ Source of the

**Table of contents**:

- `Preparation <#preparation>`__
- `Imports <#imports>`__
- `Settings <#settings>`__

View File

@@ -423,7 +425,9 @@ In the next cell, define the ``benchmark_model()`` function that calls
``benchmark_app``. This makes it easy to try different combinations. In
the cell below that, you display available devices on the system.

.. note::

   In this notebook, ``benchmark_app`` runs for 15 seconds to
   give a quick indication of performance. For more accurate
   performance, it is recommended to run inference for at least one
   minute by setting the ``t`` parameter to 60 or higher, and run

@@ -432,6 +434,7 @@ the cell below that, you display available devices on the system.

   command prompt where you have activated the ``openvino_env``
   environment.

.. code:: ipython3

    def benchmark_model(model_xml, device="CPU", seconds=60, api="async", batch=1):
@@ -525,8 +528,6 @@ Benchmark command:

    benchmark_model(model_path, device="GPU", seconds=15, api="async")

.. raw:: html

    <div class="alert alert-warning">Running this cell requires a GPU device, which is not available on this system. The following device is available: CPU

@@ -537,7 +538,6 @@ Benchmark command:

    benchmark_model(model_path, device="MULTI:CPU,GPU", seconds=15, api="async")

.. raw:: html

    <div class="alert alert-warning">Running this cell requires a GPU device, which is not available on this system. The following device is available: CPU

View File

@@ -593,7 +593,9 @@ Finally, measure the inference performance of OpenVINO ``FP32`` and
`Benchmark Tool <https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html>`__
in OpenVINO.

.. note::

   The ``benchmark_app`` tool is able to measure the
   performance of the OpenVINO Intermediate Representation (OpenVINO IR)
   models only. For more accurate performance, run ``benchmark_app`` in
   a terminal/command prompt after closing other applications. Run

@@ -602,6 +604,7 @@ in OpenVINO.

   Run ``benchmark_app --help`` to see an overview of all command-line
   options.

.. code:: ipython3

    # Inference FP32 model (OpenVINO IR)
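    # an illustrative invocation (the model path is a placeholder):
    ! benchmark_app -m model/model_fp32.xml -d CPU -api async -t 15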

View File

@@ -36,18 +36,18 @@ first inference.

- `Import modules and create Core <#import-modules-and-create-core>`__
- `Convert the model to OpenVINO IR format <#convert-the-model-to-openvino-ir-format>`__
- `(1) Simplify selection logic <#simplify-selection-logic>`__
- `Default behavior of Core::compile_model API without device_name <#default-behavior-of-core::compile_model-api-without-device_name>`__
- `Explicitly pass AUTO as device_name to Core::compile_model API <#explicitly-pass-auto-as-device_name-to-core::compile_model-api>`__
- `(2) Improve the first inference latency <#improve-the-first-inference-latency>`__
- `Load an Image <#load-an-image>`__
- `Load the model to GPU device and perform inference <#load-the-model-to-gpu-device-and-perform-inference>`__
- `Load the model using AUTO device and do inference <#load-the-model-using-auto-device-and-do-inference>`__
- `(3) Achieve different performance for different targets <#achieve-different-performance-for-different-targets>`__
- `Class and callback definition <#class-and-callback-definition>`__
- `Inference with THROUGHPUT hint <#inference-with-throughput-hint>`__

View File

@@ -342,11 +342,11 @@ Create a quantized model from the pre-trained ``FP16`` model and the
calibration dataset. The optimization process contains the following
steps:

1. Create a Dataset for quantization.
2. Run ``nncf.quantize`` to get an optimized model. The ``nncf.quantize`` function provides an interface for model quantization. It requires an instance of the OpenVINO Model and a quantization dataset. Optionally, some additional parameters for configuring the quantization process (number of samples for quantization, preset, ignored scope, etc.) can be provided. For more accurate results, we should keep the operation in the postprocessing subgraph in floating point precision, using the ``ignored_scope`` parameter. ``advanced_parameters`` can be used to specify advanced quantization parameters for fine-tuning the quantization algorithm. In this tutorial we pass range estimator parameters for activations. For more information, see
   `Tune quantization parameters <https://docs.openvino.ai/2023.0/basic_quantization_flow.html#tune-quantization-parameters>`__.
3. Serialize the OpenVINO IR model using the ``openvino.runtime.serialize`` function.

.. code:: ipython3
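    # condensed sketch of the three steps; ``ov_model``, ``data_source`` and
    # ``transform_fn`` are assumed to be defined in earlier cells
    import nncf
    from openvino.runtime import serialize

    calibration_dataset = nncf.Dataset(data_source, transform_fn)   # step 1
    quantized_model = nncf.quantize(ov_model, calibration_dataset)  # step 2
    serialize(quantized_model, "model_int8.xml")                    # step 3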
@@ -663,7 +663,9 @@ Tool <https://docs.openvino.ai/latest/openvino_inference_engine_tools_benchmark_
is used to measure the inference performance of the ``FP16`` and
``INT8`` models.

.. note::

   For more accurate performance, it is recommended to run
   ``benchmark_app`` in a terminal/command prompt after closing other
   applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark
   async inference on CPU for one minute. Change ``CPU`` to ``GPU`` to

View File

@@ -553,16 +553,19 @@ manually specify devices to use. Below is an example showing how to use

``compiled_model = core.compile_model(model=model, device_name="AUTO", config={"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"})``

.. important::

   The “THROUGHPUT”, “MULTI”, and
   “CUMULATIVE_THROUGHPUT” modes are only applicable to asynchronous
   inferencing pipelines. The example at the end of this article shows
   how to set up an asynchronous pipeline that takes advantage of
   parallelism to increase throughput. To learn more, see
   `Asynchronous Inferencing <https://docs.openvino.ai/2023.0/openvino_docs_ie_plugin_dg_async_infer_request.html>`__
   in OpenVINO as well as the `Asynchronous Inference
   notebook <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/115-async-api>`__.
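A bare-bones asynchronous pipeline under this hint might look like the
following sketch (``model`` and ``frames`` are assumed):

.. code:: python

    from openvino.runtime import AsyncInferQueue, Core

    core = Core()
    compiled = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"})

    results = []
    infer_queue = AsyncInferQueue(compiled)
    # copy the output, since the request buffer is reused by later jobs
    infer_queue.set_callback(lambda request, userdata: results.append(request.get_output_tensor(0).data.copy()))
    for frame in frames:
        infer_queue.start_async({0: frame})
    infer_queue.wait_all()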
Performance Comparison with benchmark_app `⇑ <#top>`__
###############################################################################################################################

View File

@@ -21,7 +21,9 @@ many hints simultaneously, like more inference threads + shared memory.
It should give even better performance, but we recommend testing it
anyway.

.. note::

   We especially recommend trying
   ``OpenVINO IR model + CPU + shared memory in latency mode`` or
   ``OpenVINO IR model + CPU + shared memory + more inference threads``.
@@ -29,12 +31,14 @@ The quantization and pre-post-processing API are not included here as
they change the precision (quantization) or processing graph
(prepostprocessor). You can find examples of how to apply them to
optimize performance on OpenVINO IR files in
`111-detection-quantization <111-yolov5-quantization-migration-with-output.html>`__ and
`118-optimize-preprocessing <118-optimize-preprocessing-with-output.html>`__.

|image0|

.. note::

   Many of the steps presented below will give you better
   performance. However, some of them may not change anything if they
   are strongly dependent on either the hardware or the model. Please
   run this notebook on your computer with your model to learn which of

@@ -45,7 +49,7 @@ optimize performance on OpenVINO IR files in

   result in different performance.

A similar notebook focused on the throughput mode is available
`here <109-throughput-tricks-with-output.html>`__.
**Table of contents**:
@@ -193,7 +197,9 @@ Hardware `⇑ <#top>`__

The code below lists the available hardware we will use in the
benchmarking process.

.. note::

   The hardware you have is probably completely different from
   ours. It means you can see completely different results.

.. code:: ipython3
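    # the listing itself is typically just (``core`` assumed from earlier cells):
    for device in core.available_devices:
        print(device, core.get_property(device, "FULL_DEVICE_NAME"))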
@@ -606,9 +612,9 @@ Other tricks `⇑ <#top>`__

There are other tricks for performance improvement, such as quantization
and pre-post-processing or dedicated to throughput mode. To get even
more from your model, please visit
`111-detection-quantization <111-yolov5-quantization-migration-with-output.html>`__,
`118-optimize-preprocessing <118-optimize-preprocessing-with-output.html>`__, and
`109-throughput-tricks <109-throughput-tricks-with-output.html>`__.

Performance comparison `⇑ <#top>`__
###############################################################################################################################

View File

@@ -26,12 +26,14 @@ The quantization and pre-post-processing API are not included here as
they change the precision (quantization) or processing graph
(prepostprocessor). You can find examples of how to apply them to
optimize performance on OpenVINO IR files in
`111-detection-quantization <111-yolov5-quantization-migration-with-output.html>`__ and
`118-optimize-preprocessing <118-optimize-preprocessing-with-output.html>`__.

|image0|

.. note::

   Many of the steps presented below will give you better
   performance. However, some of them may not change anything if they
   are strongly dependent on either the hardware or the model. Please
   run this notebook on your computer with your model to learn which of

@@ -42,7 +44,7 @@ optimize performance on OpenVINO IR files in

   result in different performance.

A similar notebook focused on the latency mode is available
`here <109-latency-tricks-with-output.html>`__.
**Table of contents**:
@@ -180,7 +182,9 @@ Hardware `⇑ <#top>`__

The code below lists the available hardware we will use in the
benchmarking process.

.. note::

   The hardware you have is probably completely different from
   ours. It means you can see completely different results.

.. code:: ipython3
@@ -616,7 +620,9 @@ automatically spawns the pool of InferRequest objects (also called
“jobs”) and provides synchronization mechanisms to control the flow of
the pipeline.

.. note::

   Asynchronous processing cannot guarantee outputs to be in
   the same order as inputs, so be careful in applications where
   the order of frames matters, e.g., videos.
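One common way to keep results ordered despite out-of-order completion is to
index them by a ``userdata`` value, as in this sketch (``infer_queue`` and
``frames`` are assumed):

.. code:: python

    results = [None] * len(frames)

    def on_done(request, index):
        # store under the original index so the input order is preserved
        results[index] = request.get_output_tensor(0).data.copy()

    infer_queue.set_callback(on_done)
    for i, frame in enumerate(frames):
        infer_queue.start_async({0: frame}, userdata=i)
    infer_queue.wait_all()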
@@ -662,9 +668,9 @@ options, quantization and pre-post-processing or dedicated to latency
mode. To get even more from your model, please visit `advanced
throughput
options <https://docs.openvino.ai/2023.0/openvino_docs_deployment_optimization_guide_tput_advanced.html>`__,
`109-latency-tricks <109-latency-tricks-with-output.html>`__,
`111-detection-quantization <111-yolov5-quantization-migration-with-output.html>`__, and
`118-optimize-preprocessing <118-optimize-preprocessing-with-output.html>`__.

Performance comparison `⇑ <#top>`__
###############################################################################################################################

View File

@@ -116,7 +116,9 @@ To measure the inference performance of the IR model, use
is a command-line application that can be run in the notebook with
``! benchmark_app`` or ``%sx benchmark_app`` commands.

.. note::

   The ``benchmark_app`` tool is able to measure the
   performance of the OpenVINO Intermediate Representation (OpenVINO IR)
   models only. For more accurate performance, run ``benchmark_app`` in
   a terminal/command prompt after closing other applications. Run

@@ -125,6 +127,7 @@ is a command-line application that can be run in the notebook with

   Run ``benchmark_app --help`` to see an overview of all command-line
   options.

.. code:: ipython3

    core = Core()

View File

@@ -455,19 +455,20 @@ this notebook.

   advanced algorithms for Neural Networks inference optimization in
   OpenVINO with minimal accuracy drop.

.. note::

   NNCF Post-training Quantization is available in the OpenVINO
   2023.0 release.

Create a quantized model from the pre-trained ``FP32`` model and the
calibration dataset. The optimization process contains the following
steps:

1. Create a Dataset for quantization.
2. Run ``nncf.quantize`` to get an optimized model.
3. Export the quantized model to ONNX and then convert it to an OpenVINO IR model.
4. Serialize the INT8 model using the ``openvino.runtime.serialize`` function for benchmarking.

.. code:: ipython3
@@ -580,13 +581,16 @@ command line application, part of OpenVINO development tools, that can
be run in the notebook with ``! benchmark_app`` or
``%sx benchmark_app``.

.. note::

   For the most accurate performance estimation, it is
   recommended to run ``benchmark_app`` in a terminal/command prompt
   after closing other applications. Run
   ``benchmark_app -m model.xml -d CPU`` to benchmark async inference on
   CPU for one minute. Change ``CPU`` to ``GPU`` to benchmark on GPU.
   Run ``benchmark_app --help`` to see all command line options.

.. code:: ipython3

    # ! benchmark_app --help
@@ -759,10 +763,13 @@ slices are annotated as kidney.

Run this cell again to show results on a different subset. The random
seed is displayed to enable reproducing specific runs of this cell.

.. note::

   The images are shown after optional augmenting and
   resizing. In the Kits19 dataset all but one of the cases has the
   ``(512, 512)`` input shape.

.. code:: ipython3

    # The sigmoid function is used to transform the result of the network
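    # (a typical definition, shown for reference only; the notebook's own cell
    # is truncated in this diff, and numpy is assumed imported as np)
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))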
@@ -841,7 +848,9 @@ inference on the specified CT scan has completed, the total time and
throughput (fps), including preprocessing and displaying, will be
printed.

.. note::

   If you experience flickering on Firefox, consider using
   Chrome or Edge to run this notebook.

Load Model and List of Image Files `⇑ <#top>`__

View File

@@ -87,10 +87,7 @@ Download the YOLOv5 model `⇑ <#top>`__

.. parsed-literal::

    Download Ultralytics Yolov5 project source:
    ``git clone https://github.com/ultralytics/yolov5.git -b v7.0``

Conversion of the YOLOv5 model to OpenVINO `⇑ <#top>`__

View File

@@ -20,7 +20,9 @@ downsized to 64×64 colored images. The tutorial will demonstrate that
only a tiny part of the dataset is needed for the post-training
quantization, not demanding the fine-tuning of the model.

.. note::

   This notebook requires that a C++ compiler is accessible on
   the default binary search path of the OS you are running the
   notebook.
@@ -355,8 +357,7 @@ Create and load original uncompressed model `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

ResNet-50 from the `torchvision repository <https://github.com/pytorch/vision>`__ is pre-trained on
ImageNet with more prediction classes than Tiny ImageNet, so the model
is adjusted by swapping the last FC layer to one with fewer output
values.
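The head swap might look like this sketch (200 is the Tiny ImageNet class
count):

.. code:: python

    import torch.nn as nn
    from torchvision.models import resnet50

    model = resnet50()
    # replace the 1000-class ImageNet head with a 200-class one
    model.fc = nn.Linear(model.fc.in_features, 200)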
@@ -672,7 +673,9 @@ Benchmark Tool runs inference for 60 seconds in asynchronous mode on
CPU. It returns inference speed as latency (milliseconds per image) and
throughput (frames per second) values.

.. note::

   This notebook runs benchmark_app for 15 seconds to give a
   quick indication of performance. For more accurate performance, it is
   recommended to run benchmark_app in a terminal/command prompt after
   closing other applications. Run ``benchmark_app -m model.xml -d CPU``

@@ -680,6 +683,7 @@ throughput (frames per second) values.

   to benchmark on GPU. Run ``benchmark_app --help`` to see an overview
   of all command-line options.

.. code:: ipython3

    device

View File

@@ -324,13 +324,16 @@ models, using `Benchmark
Tool <https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html>`__
- an inference performance measurement tool in OpenVINO.

.. note::

   For more accurate performance, it is recommended to run
   benchmark_app in a terminal/command prompt after closing other
   applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark
   async inference on CPU for one minute. Change CPU to GPU to benchmark
   on GPU. Run ``benchmark_app --help`` to see an overview of all
   command-line options.

.. code:: ipython3

    # Inference FP16 model (OpenVINO IR)

View File

@@ -132,7 +132,7 @@ Load model using OpenVINO TensorFlow Lite Frontend `⇑ <#top>`__

TensorFlow Lite models are supported via the ``FrontEnd`` API. You may skip
conversion to IR and read models directly with the OpenVINO runtime API. For
more examples of reading supported formats via the Frontend API, see
this `tutorial <002-openvino-api-with-output.html>`__.

.. code:: ipython3
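    # illustrative sketch: the path is a placeholder and ``core`` is assumed
    model = core.read_model("model.tflite")
    compiled_model = core.compile_model(model, "CPU")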
@@ -224,14 +224,16 @@ Estimate Model Performance `⇑ <#top>`__

is used to measure the inference performance of the model on CPU and
GPU.

.. note::

   For more accurate performance, it is recommended to run
   ``benchmark_app`` in a terminal/command prompt after closing other
   applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark
   async inference on CPU for one minute. Change ``CPU`` to ``GPU`` to
   benchmark on GPU. Run ``benchmark_app --help`` to see an overview of
   all command-line options.

.. code:: ipython3

    print("Benchmark model inference on CPU")

View File

@@ -44,7 +44,7 @@ pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.

- `Superresolution on full input image <#superresolution-on-full-input-image>`__
- `Compute patches <#compute-patches>`__
- `Do Inference <#do-the-inference>`__
- `Save superresolution image and the bicubic image <#save-superresolution-image-and-the-bicubic-image>`__

Preparation `⇑ <#top>`__
Preparation `⇑ <#top>`__ Preparation `⇑ <#top>`__
@@ -260,10 +260,13 @@ Load and Show the Input Image `⇑ <#top>`__
###############################################################################################################################

.. note::

   For the best results, use raw images (like ``TIFF``,
   ``BMP`` or ``PNG``). Compressed images (like ``JPEG``) may appear
   distorted after processing with the super resolution model.

.. code:: ipython3

    IMAGE_PATH = Path("./data/tower.jpg")
@@ -493,9 +496,12 @@ This may take a while. For the video, the superresolution and bicubic
image are resized by a factor of 2 to improve processing speed. This
gives an indication of the superresolution effect. The video is saved as
an ``.avi`` file. You can click on the link to download the video, or
open it directly from the ``output/`` directory, and play it locally.

.. note::

   If you run the example in Google Colab, download video files using the ``Files`` tool.

.. code:: ipython3
@@ -612,6 +618,8 @@ Compute patches `⇑ <#top>`__

    The output image will have a width of 11280 and a height of 7280

.. _do-the-inference:

Do Inference `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

View File

@@ -16,10 +16,13 @@ Resolution,” <https://arxiv.org/abs/1807.06779>`__ 2018 24th
International Conference on Pattern Recognition (ICPR), 2018,
pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.

.. note::

   The Single Image Super Resolution (SISR) model used in this
   demo is not optimized for a video. Results may vary depending on the
   video.

**Table of contents**:

- `Preparation <#preparation>`__
@@ -220,10 +223,13 @@ with superresolution.

By default, only the first 100 frames of the video are processed. Change
``NUM_FRAMES`` in the cell below to modify this.

.. note::

   The resulting video does not contain audio. The input video
   should be a landscape video and have an input resolution of 360p
   (640x360) for the 1032 model, or 480p (720x480) for the 1033 model.

Settings `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

View File

@@ -384,7 +384,7 @@ file and create torch dummy input. Input dimensions are in our case

- ``H`` - model input image height
- ``W`` - model input image width

.. note::

   H and W are fixed to 512 here, as this is required by the
   model. Resizing is done inside the inference function from the
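Creating such a dummy input is a one-liner (batch size and channel count are
assumed here):

.. code:: python

    import torch

    dummy_input = torch.randn(1, 3, 512, 512)  # N, C, H, W with H = W = 512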
@@ -604,19 +604,20 @@ Finally, use the OpenVINO `Benchmark
Tool <https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html>`__
to measure the inference performance of the model.

Note that for more accurate performance, it is recommended to run
``benchmark_app`` in a terminal/command prompt after closing other
applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark
async inference on CPU for one minute. Change ``CPU`` to ``GPU`` to
benchmark on GPU. Run ``benchmark_app --help`` to see an overview of
all command-line options.

.. note::

   Keep in mind that the authors of the original paper used a V100 GPU,
   which is significantly more powerful than the CPU used to obtain the
   following throughput. Therefore, FPS can't be compared directly.

.. code:: ipython3

    device

View File

@@ -359,8 +359,7 @@ level to ``CRITICAL`` to ignore warnings that are irrelevant for this
demo. For information about setting the parameters, see this
`page <https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html>`__.

**Convert ONNX Model to OpenVINO IR with** `Model Conversion Python API <https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html>`__

.. code:: ipython3
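    # illustrative sketch only; "model.onnx" is a placeholder path
    from openvino.tools import mo
    from openvino.runtime import serialize

    ov_model = mo.convert_model("model.onnx")
    serialize(ov_model, "model.xml")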

View File

@@ -38,7 +38,7 @@ information, refer to the

- `Text Recognition <#text-recognition>`__
- `Load Text Recognition Model <#load-text-recognition-model>`__
- `Do Inference <#do-the-inference>`__
- `Show Results <#show-results>`__
@@ -536,6 +536,9 @@ Load Text Recognition Model `⇑ <#top>`__

    # Get the height and width of the input layer.
    _, _, H, W = recognition_input_layer.shape

.. _do-the-inference:

Do Inference `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

View File

@@ -114,7 +114,9 @@ method by providing a path to the directory with pipeline configuration
or identification from `HuggingFace
hub <https://huggingface.co/pyannote/speaker-diarization>`__.

.. note::

   This tutorial uses a non-official version of model
   ``philschmid/pyannote-speaker-diarization-endpoint``, provided only
   for demo purposes. The original model
   (``pyannote/speaker-diarization``) requires you to accept the model

@@ -128,6 +130,7 @@ hub <https://huggingface.co/pyannote/speaker-diarization>`__.

   You can log in on HuggingFace Hub in the notebook environment using
   the following code:

.. code:: python
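    # a likely form of the login helper (assumed; the original cell is not
    # shown in this diff)
    from huggingface_hub import notebook_login

    notebook_login()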

View File

@@ -70,7 +70,9 @@ model is already downloaded. The selected model comes from the public
directory, which means it must be converted into OpenVINO Intermediate
Representation (OpenVINO IR).

.. note::

   To change the model, replace the name of the model in the
   code below, for example to ``"vehicle-detection-0201"`` or
   ``"vehicle-detection-0202"``. Keep in mind that they support
   different image input sizes in detection. Also, you can change the

@@ -81,6 +83,7 @@ Representation (OpenVINO IR).

   ``"FP16"``, and ``"FP16-INT8"``. A different type has a different
   model size and a precision value.

.. code:: ipython3

    # A directory where the model will be downloaded.
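    # hypothetical Open Model Zoo download command for such a model
    # (the model name and output directory are placeholders):
    ! omz_downloader --name vehicle-detection-0200 --precision FP16 --output_dir model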

View File

@@ -536,9 +536,8 @@ https://docs.openvino.ai/2023.0/openvino_docs_optimization_guide_dldt_optimizati

References `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

1. Convolutional 2D Knowledge Graph Embeddings, Tim Dettmers et al. (https://arxiv.org/abs/1707.01476)
2. Model implementation: https://github.com/TimDettmers/ConvE

The ConvE model implementation used in this notebook is licensed under
the MIT License. The license is displayed below:

MIT License

View File

@@ -53,7 +53,7 @@ Prerequisites

- `Visualize Sentence Alignment <#visualize-sentence-alignment>`__
- `Speed up Embeddings Computation <#speed-up-embeddings-computation>`__

.. |image0| image:: https://user-images.githubusercontent.com/51917466/254582697-18f3ab38-e264-4b2c-a088-8e54b855c1b2.png

.. code:: ipython3
@@ -356,10 +356,13 @@ code <https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes>`__, as the
rules for splitting text into sentences may vary for different
languages.

.. hint::

   The ``book_metadata`` obtained from the Gutendex contains
   the language code as well, enabling automation of this part of the
   pipeline.

.. code:: ipython3

    import pysbd
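    # a plausible continuation (the language code could come from book_metadata,
    # and ``text`` is assumed from earlier cells):
    splitter = pysbd.Segmenter(language="en", clean=True)
    sentences = splitter.segment(text)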
@@ -410,7 +413,7 @@ translation pairs.

This makes LaBSE a great choice for our task and it can be reused for
different language pairs still producing good results.

.. |image01| image:: https://user-images.githubusercontent.com/51917466/254582913-51531880-373b-40cb-bbf6-1965859df2eb.png

.. code:: ipython3
@@ -952,9 +955,12 @@ advance and fill it in as the inference requests are executed.

Let's compare the models and plot the results.

.. note::

   To get a more accurate benchmark, use the `Benchmark Python
   Tool <https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html>`__

.. code:: ipython3

    number_of_chars = 15_000

View File

@@ -223,7 +223,7 @@ respectively

Loading the Model `⇑ <#top>`__
###############################################################################################################################

Load the model in OpenVINO Runtime with ``ie.read_model`` and compile
it for the specified device with ``ie.compile_model``.
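In code, that pair of calls is simply (the model path is a placeholder):

.. code:: python

    from openvino.runtime import Core

    ie = Core()
    model = ie.read_model(model="model.xml")
    compiled_model = ie.compile_model(model=model, device_name="CPU")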

View File

@@ -494,7 +494,8 @@ The ``text`` variable below is the input used to generate a predicted sequence.

    Selected Model is PersonaGPT. Please select GPT-Neo or GPT-2 in the first cell to generate text sequences

Conversation with PersonaGPT using OpenVINO™ `⇑ <#top>`__
###############################################################################################################################

User input is tokenized with ``eos_token`` concatenated at the end.
Model input is tokenized text, which serves as the initial condition for
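A sketch of that tokenization step (``tokenizer`` and ``text`` are assumed
from earlier cells):

.. code:: python

    # append the end-of-sequence token so the model knows the user turn ended
    model_input = tokenizer(text + tokenizer.eos_token, return_tensors="np").input_ids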

View File

@@ -64,7 +64,9 @@ Prerequisites `⇑ <#top>`__

**The following is needed only if you want to use the original model. If
not, you do not have to do anything. Just run the notebook.**

.. note::

   The original model (for example, ``stable-diffusion-v1-4``)
   requires you to accept the model license before downloading or using
   its weights. Visit the `stable-diffusion-v1-4
   card <https://huggingface.co/CompVis/stable-diffusion-v1-4>`__ to

@@ -76,6 +78,7 @@ not, you do not have to do anything. Just run the notebook.**

   You can log in on the Hugging Face Hub in the notebook environment
   using the following code:

.. code:: python
@@ -870,9 +873,12 @@ Now, you can define a text prompt for image generation and run inference
pipeline. Optionally, you can also change the random generator seed for
latent state initialization and number of steps.

.. note::

   Consider increasing ``steps`` to get more precise results.
   A suggested value is ``50``, but it will take longer to process.

.. code:: ipython3

    import ipywidgets as widgets

View File

@@ -772,10 +772,13 @@ OpenVINO with minimal accuracy drop. We will use 8-bit quantization in
post-training mode (without the fine-tuning pipeline) to optimize
YOLOv7.

.. note::

   NNCF Post-training Quantization is available as a preview
   feature in the OpenVINO 2022.3 release. Fully functional support will
   be provided in the next releases.

The optimization process contains the following steps:

1. Create a Dataset for quantization.
@@ -910,13 +913,16 @@ Tool <https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_
to measure the inference performance of the ``FP32`` and ``INT8``
models.

.. note::

   For more accurate performance, it is recommended to run
   ``benchmark_app`` in a terminal/command prompt after closing other
   applications. Run ``benchmark_app -m model.xml -d CPU`` to benchmark
   async inference on CPU for one minute. Change ``CPU`` to ``GPU`` to
   benchmark on GPU. Run ``benchmark_app --help`` to see an overview of
   all command-line options.

.. code:: ipython3

    device

View File

@@ -16,9 +16,9 @@ The optimization process contains the following steps:

3. Compare model size of converted and quantized models.
4. Compare performance of converted and quantized models.

.. note::

   You should run the
   `228-clip-zero-shot-convert <228-clip-zero-shot-convert.ipynb>`__
   notebook first to generate the OpenVINO IR model that is used for
   quantization.
@@ -180,9 +180,12 @@ model.

Create a quantized model from the pre-trained ``FP16`` model.

.. note::

   Quantization is a time- and memory-consuming operation.
   Running the quantization code below may take a long time.

.. code:: ipython3

    import logging
@@ -342,10 +345,13 @@ Compare inference time of the FP16 IR and quantized models

we can approximately estimate the speed up of the dynamic quantized
models.

.. note::

   For the most accurate performance estimation, it is
   recommended to run ``benchmark_app`` with static shapes in a
   terminal/command prompt after closing other applications.

.. code:: ipython3

    import time
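    # rough wall-clock sketch (``compiled_model`` and ``sample`` assumed):
    start = time.perf_counter()
    for _ in range(100):
        compiled_model([sample])
    print(f"average latency: {(time.perf_counter() - start) / 100 * 1000:.2f} ms")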

View File

@ -685,10 +685,13 @@ in the YOLOv8 repo, we also need to download annotations in the format
used by the author of the model, for use with the original model used by the author of the model, for use with the original model
evaluation function. evaluation function.
**Note**: The initial dataset download may take a few minutes to .. note::
The initial dataset download may take a few minutes to
complete. The download speed will vary depending on the quality of complete. The download speed will vary depending on the quality of
your internet connection. your internet connection.
.. code:: ipython3

    from zipfile import ZipFile
@ -863,13 +866,19 @@ validator class instance.
After defining the test function and creating the validator, we are ready to
get accuracy metrics.

.. note::

   Model evaluation is a time-consuming
   process and can take several minutes, depending on the hardware. To
   reduce calculation time, we define the ``num_samples`` parameter with
   the evaluation subset size, but in this case, accuracy may not be
   comparable with that originally reported by the authors of the model,
   due to the difference in the validation subset.
   To validate the models on the full dataset, set
   ``NUM_TEST_SAMPLES = None``.
.. code:: ipython3
@ -1005,9 +1014,12 @@ asymmetric quantization of activations. For more accurate results, we
should keep the operation in the postprocessing subgraph in floating
point precision, using the ``ignored_scope`` parameter.
.. note::

   Model post-training quantization is a time-consuming process.
   Be patient, it can take several minutes depending on your hardware.
.. code:: ipython3

    ignored_scope = nncf.IgnoredScope(
@ -1189,7 +1201,9 @@ Tool <https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_
to measure the inference performance of the ``FP32`` and ``INT8``
models.
.. note::

   For more accurate performance, it is recommended to run
   ``benchmark_app`` in a terminal/command prompt after closing other
   applications. Run
   ``benchmark_app -m <model_path> -d CPU -shape "<input_shape>"`` to
@ -1198,6 +1212,7 @@ models.
   ``benchmark_app --help`` to see an overview of all command-line
   options.
Compare performance of object detection models `⇑ <#top>`__
-------------------------------------------------------------------------------------------------------------------------------
@ -1637,13 +1652,13 @@ meets passing criteria.
Next steps `⇑ <#top>`__
###############################################################################################################################
This section contains suggestions on how to
additionally improve the performance of your application using OpenVINO.
Async inference pipeline `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The key advantage of the Async
API is that when a device is busy with inference, the application can
perform other tasks in parallel (for example, populating inputs or
scheduling other requests) rather than wait for the current inference to
complete first.
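A minimal sketch of this pattern with ``AsyncInferQueue``; the model path,
input shape, and number of jobs below are assumptions for illustration:

.. code:: ipython3

    import numpy as np
    from openvino.runtime import AsyncInferQueue, Core

    core = Core()
    compiled_model = core.compile_model("model.xml", "CPU")  # placeholder IR path

    results = {}

    def callback(request, frame_id):
        # Runs whenever an inference finishes; store the first output per frame.
        results[frame_id] = request.get_output_tensor(0).data

    # Several infer requests working in parallel.
    infer_queue = AsyncInferQueue(compiled_model, jobs=4)
    infer_queue.set_callback(callback)

    for frame_id in range(8):
        frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in input
        infer_queue.start_async({0: frame}, userdata=frame_id)

    infer_queue.wait_all()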
@ -1692,7 +1707,7 @@ preprocessing and postprocessing steps for a model.
Define input data format `⇑ <#top>`__
-------------------------------------------------------------------------------------------------------------------------------
To address a particular input of
a model/preprocessor, use the ``input(input_id)`` method, where ``input_id``
is a positional index or input tensor name for an input in
``model.inputs``. If a model has a single input, ``input_id`` can be
omitted.
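A hedged sketch of addressing the (single) input of a model with the
preprocessing API; the model path and tensor description are assumptions:

.. code:: ipython3

    from openvino.preprocess import PrePostProcessor
    from openvino.runtime import Core, Layout, Type

    core = Core()
    model = core.read_model("model.xml")  # placeholder IR path

    ppp = PrePostProcessor(model)
    # `input_id` is omitted because the model has a single input; describe
    # the tensor the application will actually provide.
    ppp.input().tensor().set_element_type(Type.u8).set_layout(Layout("NHWC"))
    model = ppp.build()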
@ -1917,13 +1932,16 @@ using a front-facing camera. Some web browsers, especially Mozilla
Firefox, may cause flickering. If you experience flickering,
set ``use_popup=True``.
.. note::

   To use this notebook with a webcam, you need to run the
   notebook on a computer with a webcam. If you run the notebook on a
   remote server (for example, in Binder or Google Colab service), the
   webcam will not work. By default, the lower cell will run model
   inference on a video file. If you want to try live inference on your
   webcam, set ``WEBCAM_INFERENCE = True``.
Run the object detection:

.. code:: ipython3

View File

@ -117,7 +117,9 @@ just a few lines of code provided as part
First, we load the pre-trained weights of all components of the model.
.. note::

   Initially, model loading can take some time due to
   downloading the weights. Also, the download speed depends on your
   internet connection.
@ -961,9 +963,12 @@ by the model on this
need inspiration. Optionally, you can also change the random generator
seed for latent state initialization and number of steps.
.. note::

   Consider increasing ``steps`` to get more precise results.
   A suggested value is ``100``, but it will take more time to process.
.. code:: ipython3

    style = {'description_width': 'initial'}
@ -986,9 +991,10 @@ seed for latent state initialization and number of steps.
    VBox(children=(Text(value=' Make it in galaxy', description='your text'), IntSlider(value=42, description='see…
.. note::

   The diffusion process can take some time, depending on what hardware you select.
.. code:: ipython3

View File

@ -1265,9 +1265,11 @@ Configure Inference Pipeline `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Configuration steps:

1. Load models on device.
2. Configure tokenizer and scheduler.
3. Create an instance of the ``OVStableDiffusionInpaintingPipeline`` class.
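As a rough sketch of those steps; all component names below are
assumptions standing in for the models compiled earlier in this notebook,
and ``OVStableDiffusionInpaintingPipeline`` is the class defined above:

.. code:: ipython3

    from transformers import CLIPTokenizer
    from diffusers import DDIMScheduler

    # 2. Configure tokenizer and scheduler.
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    scheduler = DDIMScheduler.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", subfolder="scheduler"
    )

    # 3. Create the pipeline from the compiled models (names are placeholders).
    ov_pipe = OVStableDiffusionInpaintingPipeline(
        tokenizer=tokenizer,
        text_encoder=text_encoder_ov,
        unet=unet_ov,
        vae_encoder=vae_encoder_ov,
        vae_decoder=vae_decoder_ov,
        scheduler=scheduler,
    )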
.. code:: ipython3

View File

@ -20,7 +20,10 @@ accelerate end-to-end pipelines on Intel architectures. More details in
this
`repository <https://github.com/huggingface/optimum-intel#openvino>`__.
.. note::

   We suggest creating a different environment and running the following installation command there.
.. code:: ipython3
@ -43,11 +46,14 @@ you have integrated GPU (iGPU) and discrete GPU (dGPU), it will show
If you just have either an iGPU or dGPU, it will be assigned to
``"GPU"``.
.. note::

   For more details about GPU with OpenVINO, visit this
   `link <https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html>`__.
   If you have been facing any issues on Ubuntu 20.04 or Windows 11, read
   this
   `blog <https://blog.openvino.ai/blog-posts/install-gpu-drivers-windows-ubuntu>`__.
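For instance, the available devices can be listed as follows; the exact
names depend on your hardware:

.. code:: ipython3

    from openvino.runtime import Core

    core = Core()
    # Typically shows e.g. ['CPU', 'GPU.0', 'GPU.1'] with both an iGPU and a dGPU.
    core.available_devices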
.. code:: ipython3

View File

@ -20,16 +20,18 @@ accelerate end-to-end pipelines on Intel architectures. More details in
this
`repository <https://github.com/huggingface/optimum-intel#openvino>`__.
.. note::

   We suggest creating a different environment and running the following installation command there.
.. code:: ipython3

    %pip install -q "optimum-intel[openvino,diffusers]" "ipywidgets"
.. hint::

   You may need to restart the kernel to use updated packages.
The Stable Diffusion pipeline brings 6 elements together: a text
@ -65,11 +67,13 @@ you have integrated GPU (iGPU) and discrete GPU (dGPU), it will show
If you just have either an iGPU or dGPU, it will be assigned to
``"GPU"``.
.. note::

   For more details about GPU with OpenVINO, visit this
   `link <https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html>`__.
   If you have been facing any issues on Ubuntu 20.04 or Windows 11, read
   this
   `blog <https://blog.openvino.ai/blog-posts/install-gpu-drivers-windows-ubuntu>`__.
.. code:: ipython3

View File

@ -13,15 +13,18 @@ including being able to use more data, employ more training, and has
less restrictive filtering of the dataset. All of these features give us
promising results for selecting a wide range of input text prompts!
.. note::

   This is a shorter version of the
   `236-stable-diffusion-v2-text-to-image <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/236-stable-diffusion-v2/236-stable-diffusion-v2-text-to-image.ipynb>`__
   notebook for demo purposes and to get started quickly. This version does
   not have the full implementation of the helper utilities needed to
   convert the models from PyTorch to ONNX to OpenVINO, nor the OpenVINO
   ``OVStableDiffusionPipeline`` defined directly within the notebook. If you would
   like to see the full implementation of stable diffusion for text to
   image, please visit
   `236-stable-diffusion-v2-text-to-image <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/236-stable-diffusion-v2/236-stable-diffusion-v2-text-to-image.ipynb>`__.
**Table of contents**:

View File

@ -73,10 +73,13 @@ Notebook contains the following steps:
API.
3. Run Stable Diffusion v2 Text-to-Image pipeline with OpenVINO.
.. note::

   This is the full version of the Stable Diffusion text-to-image
   implementation. If you would like to get started and run the notebook
   quickly, check out the `236-stable-diffusion-v2-text-to-image-demo
   notebook <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/236-stable-diffusion-v2/236-stable-diffusion-v2-text-to-image-demo.ipynb>`__.
**Table of contents**:
@ -380,8 +383,10 @@ When running Text-to-Image pipeline, we will see that we **only need the
VAE decoder**, but we preserve the VAE encoder conversion; it will be useful in
the next chapter of our tutorial.
.. note::

   This process will take a few minutes and use a significant amount of RAM (at least 32GB recommended).
.. code:: ipython3
@ -964,9 +969,12 @@ Now, you can define a text prompts for image generation and run
inference pipeline. Optionally, you can also change the random generator
seed for latent state initialization and number of steps.
.. note::

   Consider increasing ``steps`` to get more precise results.
   A suggested value is ``50``, but it will take a longer time to process.
.. code:: ipython3

    import gradio as gr

View File

@ -1425,7 +1425,9 @@ result, we will use a ``mixed`` quantization preset. It provides
symmetric quantization of weights and asymmetric quantization of
activations.
.. note::

   Model post-training quantization is a time-consuming process.
   Be patient, it can take several minutes depending on your hardware.
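A sketch of such a call with NNCF; the ``model`` and ``calibration_dataset``
names below are assumptions for the objects created in this notebook:

.. code:: ipython3

    import nncf

    quantized_model = nncf.quantize(
        model,                 # the FP model read earlier (assumption)
        calibration_dataset,   # an nncf.Dataset built earlier (assumption)
        preset=nncf.QuantizationPreset.MIXED,
    )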
.. code:: ipython3

View File

@ -149,6 +149,8 @@ Currently, there is only one ImageBind model available for downloading,
``imagebind_huge``, more details about it can be found in `model
card <https://github.com/facebookresearch/ImageBind/blob/main/model_card.md>`__.
.. note::

   Depending on internet connection speed, the model
   downloading process can take some time. It also requires at least 5
   GB of free disk space for saving the model checkpoint.

View File

@ -427,8 +427,8 @@ Helpers for output parsing `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The model was retrained to finish generation using the special token ``### End``.
The code below finds its id to use it as a generation stop-criterion.
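In outline, finding that id could look like the following sketch, assuming
the Hugging Face ``tokenizer`` loaded earlier in the notebook:

.. code:: ipython3

    end_token = "### End"
    # The resulting id can be passed to the generation stopping criteria.
    end_token_id = tokenizer.encode(end_token)[0]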
.. code:: ipython3

View File

@ -44,17 +44,22 @@ Prerequisites `⇑ <#top>`__
###############################################################################################################################
These steps can be done manually or will be performed automatically during the execution of the notebook, but in
the minimum necessary scope.

1. Clone this repo:

   .. code-block:: sh

      git clone https://github.com/OlaWod/FreeVC.git

2. Download `WavLM-Large <https://github.com/microsoft/unilm/tree/master/wavlm>`__
   and put it under the ``FreeVC/wavlm/`` directory.
3. You can download the `VCTK <https://datashare.ed.ac.uk/handle/10283/3443>`__ dataset. For
   this example, we download only two samples from the
   `Hugging Face FreeVC example <https://huggingface.co/spaces/OlaWod/FreeVC/tree/main>`__.
4. Download the `pretrained models <https://1drv.ms/u/s!AnvukVnlQ3ZTx1rjrOZ2abCwuBAh?e=UlhRR5>`__
   and put them under the ``checkpoints`` directory (for the current example, only
   ``freevc.pth`` is required).
Install extra requirements

View File

@ -534,13 +534,16 @@ using a front-facing camera. Some web browsers, especially Mozilla
Firefox, may cause flickering. If you experience flickering,
set ``use_popup=True``.
.. note::

   To use this notebook with a webcam, you need to run the
   notebook on a computer with a webcam. If you run the notebook on a
   remote server (for example, in Binder or Google Colab service), the
   webcam will not work. By default, the lower cell will run model
   inference on a video file. If you want to try live inference on
   your webcam, set ``WEBCAM_INFERENCE = True``.
.. code:: ipython3

    WEBCAM_INFERENCE = False

View File

@ -1011,12 +1011,12 @@ directories and files which were created during the download process.
Concluding notes
~~~~~~~~~~~~~~~~
1. The code for this tutorial is adapted from the `VI-Depth
   repository <https://github.com/isl-org/VI-Depth>`__.
2. Users may choose to download the original and raw datasets from
   the `VOID
   dataset <https://github.com/alexklwong/void-dataset/>`__.
3. The `isl-org/VI-Depth <https://github.com/isl-org/VI-Depth>`__
   works on a slightly older version of released model assets from
   its `MiDaS sibling
   repository <https://github.com/isl-org/MiDaS>`__. However, the new

View File

@ -4,11 +4,11 @@ Programming Language Classification with OpenVINO
Overview
--------
This tutorial will be divided into 2 parts:

1. Create a simple inference pipeline with a pre-trained model using the OpenVINO™ IR format.
2. Conduct `post-training quantization <https://docs.openvino.ai/latest/ptq_introduction.html>`__
   on a pre-trained model using Hugging Face Optimum and benchmark performance.
Feel free to use the notebook outline in Jupyter or your IDE for easy
navigation.
@ -69,7 +69,7 @@ will allow to automatically convert models to the OpenVINO™ IR format.
Install prerequisites
~~~~~~~~~~~~~~~~~~~~~
First, complete the `repository installation steps <../notebooks_installation.html>`__.
Then, the following cell will install:

- HuggingFace Optimum with OpenVINO support
- HuggingFace Evaluate to benchmark results
@ -305,8 +305,10 @@ Define constants and functions
Load resources
~~~~~~~~~~~~~~
.. note::

   The base model is loaded using ``AutoModelForSequenceClassification`` from ``Transformers``.
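A hedged sketch of that load; the checkpoint name is an assumption for
illustration and should match the model used in this notebook:

.. code:: ipython3

    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_id = "huggingface/CodeBERTa-language-id"  # hypothetical checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    base_model = AutoModelForSequenceClassification.from_pretrained(model_id)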
.. code:: ipython3
@ -330,8 +332,10 @@ Load calibration dataset
The ``get_dataset_sample()`` function will sample up to ``num_samples``,
with an equal number of examples across the 6 programming languages.
.. note::

   Uncomment the method below to download and use the full dataset (5+ GB).
.. code:: ipython3
@ -491,8 +495,9 @@ dataset to quantize and save the model
Load quantized model
~~~~~~~~~~~~~~~~~~~~
.. note::

   The argument ``export=True`` is not required since the quantized model is already in the OpenVINO format.
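For instance, a minimal sketch of the load, assuming the quantized model
was saved to a local ``quantized_model`` directory:

.. code:: ipython3

    from optimum.intel import OVModelForSequenceClassification

    # No `export=True` needed: the directory already contains OpenVINO IR files.
    ov_model = OVModelForSequenceClassification.from_pretrained("quantized_model")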
.. code:: ipython3
@ -531,8 +536,10 @@ Inference on new input using quantized model
Load evaluation set
~~~~~~~~~~~~~~~~~~~
.. note::

   Uncomment the method below to download and use the full dataset (5+ GB).
.. code:: ipython3

View File

@ -62,9 +62,9 @@ The tutorial consists of the following steps:
Optimum <https://huggingface.co/blog/openvino>`__.
- Run the 2-stage Stable Diffusion XL pipeline
.. note::

   Some demonstrated models can require at least 64GB of RAM for
   conversion and running.
**Table of contents**:

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8760283d06f1b29e26a3f684c22afe65d809208a2cd624b70acda3e6a9b87a1f
size 16854851

View File

@ -331,8 +331,10 @@ OpenVINO with minimal accuracy drop.
Create a quantized model from the pre-trained FP32 model and the
calibration dataset. The optimization process contains the following
steps:

1. Create a Dataset for quantization.
2. Run ``nncf.quantize`` to get an optimized model.
The validation dataset is already defined in the training notebook.
@ -601,10 +603,13 @@ In the next cells, inference speed will be measured for the original and
quantized model on CPU. If an iGPU is available, inference speed will be
measured for CPU+GPU as well. The number of seconds is set to 15.
.. note::

   For the most accurate performance estimation, it is
   recommended to run ``benchmark_app`` in a terminal/command prompt
   after closing other applications.
.. code:: ipython3

    # print the available devices on this system

View File

@ -397,8 +397,10 @@ range by using a Rescaling layer.
    normalization_layer = layers.Rescaling(1./255)
.. note::

   The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change.
There are two ways to use this layer. You can apply it to the dataset by
calling map:
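A hedged sketch of that call, assuming ``train_ds`` is the ``tf.data.Dataset``
built earlier and ``normalization_layer`` is the Rescaling layer defined above:

.. code:: ipython3

    normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))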
@ -428,11 +430,13 @@ calling map:
Or, you can include the layer inside your model definition, which can
simplify deployment. Let's use the second approach here.
.. note::

   You previously resized images using the ``image_size`` argument of
   ``image_dataset_from_directory``. If you want to include the resizing
   logic in your model as well, you can use the
   `Resizing <https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing>`__
   layer.
Create the Model `⇑ <#top>`__
###############################################################################################################################
@ -482,7 +486,9 @@ Model Summary `⇑ <#top>`__
View all the layers of the network using the model's ``summary`` method.
.. note::

   This section is commented out for performance reasons.
   Please feel free to uncomment it to compare the results.
.. code:: ipython3
@ -816,8 +822,10 @@ Predict on New Data `⇑ <#top>`__
Finally, let us use the model to classify an image that was not included
in the training or validation sets.
.. note::

   Data augmentation and Dropout layers are inactive at inference time.
.. code:: ipython3

View File

@ -29,7 +29,10 @@ notebook. Using the smaller model and dataset will speed up training and
download time. To see other ResNet models, visit `PyTorch
hub <https://pytorch.org/hub/pytorch_vision_resnet/>`__.
.. note::

   This notebook requires a C++ compiler.
**Table of contents**:
@ -58,7 +61,9 @@ for the model, and the image width and height that will be used for the
network. Also define paths where PyTorch, ONNX and OpenVINO IR versions
of the models will be stored.
.. note::

   All NNCF logging messages below ERROR level (INFO and
   WARNING) are disabled to simplify the tutorial. For production use,
   it is recommended to enable logging by removing
   ``set_log_level(logging.ERROR)``.
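For reference, the disabling call looks roughly like the sketch below;
the exact import path of ``set_log_level`` may differ between NNCF versions:

.. code:: ipython3

    import logging
    from nncf.common.logging.logger import set_log_level

    # Hide NNCF INFO/WARNING messages to keep the tutorial output short.
    set_log_level(logging.ERROR)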
@ -732,7 +737,9 @@ Benchmark Tool runs inference for 60 seconds in asynchronous mode on
CPU. It returns inference speed as latency (milliseconds per image) and
throughput (frames per second) values.
.. note::

   This notebook runs ``benchmark_app`` for 15 seconds to give
   a quick indication of performance. For more accurate performance, it
   is recommended to run ``benchmark_app`` in a terminal/command prompt
   after closing other applications. Run
@ -741,6 +748,7 @@ throughput (frames per second) values.
   ``benchmark_app --help`` to see an overview of all command-line
   options.
.. code:: ipython3

    def parse_benchmark_output(benchmark_output):

View File

@ -41,11 +41,14 @@ Import NNCF and all auxiliary packages from your Python code. Set a name for the
size, used batch size, and the learning rate. Also, define paths where
Frozen Graph and OpenVINO IR versions of the models will be stored.
.. note::

   All NNCF logging messages below ERROR level (INFO and
   WARNING) are disabled to simplify the tutorial. For production use,
   it is recommended to enable logging by removing
   ``set_log_level(logging.ERROR)``.
.. code:: ipython3

    !pip install -q "openvino-dev>=2023.0.0" "nncf>=2.5.0"
@ -261,10 +264,13 @@ Pre-train a Floating-Point Model `⇑ <#top>`__
Using NNCF for model compression assumes that the user has a pre-trained
model and a training pipeline.
.. note::

   For the sake of simplicity of the tutorial, it is
   recommended to skip ``FP32`` model training and load the weights that
   are provided.
.. code:: ipython3

    # Load the floating-point weights.
@ -471,7 +477,9 @@ Benchmark Tool runs inference for 60 seconds in asynchronous mode on
CPU. It returns inference speed as latency (milliseconds per image) and
throughput (frames per second) values.
.. note::

   This notebook runs ``benchmark_app`` for 15 seconds to give
   a quick indication of performance. For more accurate performance, it
   is recommended to run ``benchmark_app`` in a terminal/command prompt
   after closing other applications. Run
@ -480,6 +488,7 @@ throughput (frames per second) values.
   ``benchmark_app --help`` to see an overview of all command-line
   options.
.. code:: ipython3

    serialize(model_ir_fp32, str(fp32_ir_path))

View File

@ -11,7 +11,9 @@ Zoo <https://github.com/openvinotoolkit/open_model_zoo/>`__. Final part
of this notebook shows live inference results from a webcam.
Additionally, you can also upload a video file.
.. note::

   To use a webcam, you must run this Jupyter notebook on a
   computer with a webcam. If you run on a server, the webcam will not
   work. However, you can still do inference on a video in the final
   step.
@ -73,7 +75,10 @@ selected model.
If you want to download another model, replace the name of the model and
precision in the code below.
.. note::

   This may require a different pose decoder.
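A hedged sketch of the download cell; the model name and precision shown
here are assumptions and can be replaced as described above:

.. code:: ipython3

    !omz_downloader --name human-pose-estimation-0001 --precision FP16 --output_dir model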
.. code:: ipython3
@ -421,12 +426,15 @@ using a front-facing camera. Some web browsers, especially Mozilla
Firefox, may cause flickering. If you experience flickering, set
``use_popup=True``.
.. note::

   To use this notebook with a webcam, you need to run the
   notebook on a computer with a webcam. If you run the notebook on a
   server (for example, Binder), the webcam will not work. Popup mode
   may not work if you run this notebook on a remote computer (for
   example, Binder).
Run the pose estimation:

.. code:: ipython3

View File

@ -153,10 +153,13 @@ This tutorial uses `Kinetics-400
dataset <https://deepmind.com/research/open-source/kinetics>`__, and
also provides the text file embedded into this notebook.
.. note::

   If you want to run the
   ``"driver-action-recognition-adas-0002"`` model, replace the
   ``kinetics.txt`` file with ``driver_actions.txt``.
.. code:: ipython3

    labels = "../data/text/kinetics.txt"
@ -664,11 +667,14 @@ Run Action Recognition Using a Webcam `⇑ <#top>`__
Now, try to see yourself in your webcam.
.. note::

   To use a webcam, you must run this Jupyter notebook on a
   computer with a webcam. If you run on a server, the webcam will not
   work. However, you can still do inference on a video file in the
   final step.
.. code:: ipython3

    run_action_recognition(source=0, flip=False, use_popup=False, skip_first_frames=0)

View File

@ -397,11 +397,14 @@ starting at 0. Set ``flip=True`` when using a front-facing camera. Some
web browsers, especially Mozilla Firefox, may cause flickering. If you
experience flickering, set ``use_popup=True``.
.. note::

   To use a webcam, you must run this Jupyter notebook on a
   computer with a webcam. If you run it on a server, you will not be
   able to access the webcam. However, you can still perform inference
   on a video file in the final step.
.. code:: ipython3

    run_style_transfer(source=0, flip=True, use_popup=False)

View File

@ -16,17 +16,19 @@ extension <https://github.com/jupyter-widgets/pythreejs#jupyterlab>`__\ **and
been using JupyterLab to run the demo as suggested in the
``README.md``**
.. note::

   To use a webcam, you must run this Jupyter notebook on a
   computer with a webcam. If you run on a remote server, the webcam
   will not work. However, you can still do inference on a video file in
   the final step. This demo utilizes the Python interface in
   ``Three.js`` integrated with WebGL to process data from the model
   inference. These results are processed and displayed in the
   notebook.

   To ensure that the results are displayed correctly, run the code in a
   recommended browser on one of the following operating systems: Ubuntu
   or Windows (Chrome), macOS (Safari).
**Table of contents**:
@ -178,7 +180,7 @@ directory structure and downloads the selected model.
Convert Model to OpenVINO IR format `⇑ <#top>`__
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The selected model
comes from the public directory, which means it must be converted into
OpenVINO Intermediate Representation (OpenVINO IR). We use
``omz_converter`` to convert the ONNX format model to the OpenVINO IR
format.
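A hedged sketch of that conversion step; the model name is an assumption
matching the download step of this demo:

.. code:: ipython3

    !omz_converter --name human-pose-estimation-3d-0001 --precisions FP32 --download_dir model --output_dir model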
@ -588,7 +590,7 @@ using a front-facing camera. Some web browsers, especially Mozilla
Firefox, may cause flickering. If you experience flickering, set
``use_popup=True``.
.. note::

   1. To use this notebook with a webcam, you need to run the notebook
   on a computer with a webcam. If you run the notebook on a server
@ -597,6 +599,7 @@ Firefox, may cause flickering. If you experience flickering, set
   2. Popup mode may not work if you run this notebook on a remote
   computer (e.g. Binder).
Using the following method, you can click and move your mouse over the
picture on the left to interact.

View File

@ -252,10 +252,12 @@ Load model `⇑ <#top>`__
Define a common class for model loading and predicting.
There are four main steps for OpenVINO model initialization, and they
need to run only once before the inference loop:

1. Initialize OpenVINO Runtime.
2. Read the network from ``*.bin`` and ``*.xml`` files (weights and architecture).
3. Compile the model for the device.
4. Get input and output names of nodes.
In this case, we can put them all in a class constructor function; a
standalone sketch of the four steps follows.
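.. code:: ipython3

    # A minimal sketch of the four steps; the model path and device are assumptions.
    from openvino.runtime import Core

    core = Core()                                       # 1. Initialize OpenVINO Runtime
    model = core.read_model("model/model.xml")          # 2. Read the network (.xml + .bin)
    compiled_model = core.compile_model(model, "CPU")   # 3. Compile the model for the device
    input_layer = compiled_model.input(0)               # 4. Get input and output nodes
    output_layer = compiled_model.output(0)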
@ -344,10 +346,12 @@ Select device from dropdown list for running inference using OpenVINO:
Data Processing `⇑ <#top>`__
###############################################################################################################################
Data Processing includes data preprocess and postprocess functions.

- The data preprocess function is used to change the layout and shape of input data,
  according to the requirements of the network input format.
- The data postprocess function is used to extract the useful information from
  the network's original output and visualize it.
.. code:: ipython3

View File

@ -87,7 +87,7 @@ Tutorials that explain how to optimize and quantize models with OpenVINO tools.
+------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `103-paddle-onnx-to-openvino <notebooks/103-paddle-to-openvino-classification-with-output.html>`__ |br| |n103|                      | Convert PaddlePaddle models to OpenVINO IR.                                                                                                | |n103-img1|                               |
+------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `121-convert-to-openvino <notebooks/121-convert-to-openvino-with-output.html>`__ |br| |n121| |br| |c121|                            | Learn OpenVINO model conversion API                                                                                                        |                                           |
+------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
.. dropdown:: Explore more notebooks here.
@ -97,6 +97,8 @@ Tutorials that explain how to optimize and quantize models with OpenVINO tools.
+====================================================================================================================================================+==================================================================================================================================+
| `102-pytorch-onnx-to-openvino <notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__                                                          | Convert PyTorch models to OpenVINO IR.                                                                                           |
+----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `104-model-tools <notebooks/104-model-tools-with-output.html>`__ |br| |n104| | Download, convert and benchmark models from Open Model Zoo. |
+----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `105-language-quantize-bert <notebooks/105-language-quantize-bert-with-output.html>`__                                                              | Optimize and quantize a pre-trained BERT model.                                                                                  |
+----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `106-auto-device <notebooks/106-auto-device-with-output.html>`__ |br| |n106|                                                                         | Demonstrates how to use AUTO Device.                                                                                             |
@ -129,8 +131,6 @@ Tutorials that explain how to optimize and quantize models with OpenVINO tools.
+----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `120-tensorflow-object-detection-to-openvino <notebooks/120-tensorflow-object-detection-to-openvino-with-output.html>`__ |br| |n120| |br| |c120|    | Convert TensorFlow Object Detection models to OpenVINO IR                                                                        |
+----------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
Model Demos
@ -261,6 +261,8 @@ Demos that demonstrate inference on a particular model.
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `251-tiny-sd-image-generation <notebooks/251-tiny-sd-image-generation-with-output.html>`__ |br| |c251|                         | Image Generation with Tiny-SD and OpenVINO™.                                                                                               | |n251-img1|                               |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `252-fastcomposer-image-generation <notebooks/252-fastcomposer-image-generation-with-output.html>`__ | Image generation with FastComposer and OpenVINO™. | |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
Model Training