DOCS shift to rst - POT API examples (#16627)
This commit is contained in:
parent 2f5be5e81c
commit 3a5b819685
@@ -5,13 +5,14 @@

.. toctree::
   :maxdepth: 1
   :hidden:

   API Examples <pot_example_README>
   Command-line Example <pot_configs_examples_README>


This section provides a set of examples that demonstrate how to apply the post-training optimization methods to optimize various models from different domains. It contains optimization recipes for concrete models that may not exactly match your case, but which should be sufficient as a starting point for optimizing custom models:

- :doc:`API Examples <pot_example_README>`
- :doc:`Command-line Example <pot_configs_examples_README>`

@endsphinxdirective
@@ -1,29 +1,40 @@

# Quantizing 3D Segmentation Model {#pot_example_3d_segmentation_README}

@sphinxdirective

This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a 3D segmentation model. The `Brain Tumor Segmentation <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/brain-tumor-segmentation-0002>`__ model from PyTorch is used for this purpose. A custom ``DataLoader`` is created to load images in NIfTI format from the `Medical Segmentation Decathlon BRATS 2017 <http://medicaldecathlon.com/>`__ dataset for the 3D semantic segmentation task, and an implementation of the Dice Index metric is used for the model evaluation. In addition, this example demonstrates how image metadata obtained during image reading and preprocessing can be used to post-process the raw model output. The code of the example is available on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/3d_segmentation>`__.

How to prepare the data
#######################

To run this example, you will need to download the Brain Tumors 2017 part of the Medical Segmentation Decathlon image database from http://medicaldecathlon.com/. 3D MRI data in NIfTI format can be found in the ``imagesTr`` folder, and segmentation masks are in ``labelsTr``.

How to Run the example
######################

1. Launch the :doc:`Model Downloader <omz_tools_downloader>` tool to download the ``brain-tumor-segmentation-0002`` model from the Open Model Zoo repository.

   .. code-block:: sh

      omz_downloader --name brain-tumor-segmentation-0002

2. Launch the :doc:`Model Converter <omz_tools_downloader>` tool to generate Intermediate Representation (IR) files for the model:

   .. code-block:: sh

      omz_converter --name brain-tumor-segmentation-0002

3. Launch the example script from the example directory:

   .. code-block:: sh

      python3 ./3d_segmentation_example.py -m <PATH_TO_IR_XML> -d <BraTS_2017/imagesTr> --mask-dir <BraTS_2017/labelsTr>

   Optional: you can specify the .bin file of the IR directly using the ``-w``/``--weights`` option.

@endsphinxdirective
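The metadata-driven post-processing highlighted above can be sketched as follows. This is a stand-alone illustrative sketch, not the sample's actual code (`load_volume` and `postprocess` are hypothetical names): the loader records the original volume shape before padding, and post-processing uses that metadata to crop the raw model output back.

```python
# Sketch of the metadata pattern: the loader returns (data, meta), and the
# meta dict carries the original shape needed to undo the padding later.

def load_volume(shape, target=(4, 4, 4)):
    """Pretend-load a 3D volume: return padded data plus its metadata."""
    pad = [t - s for s, t in zip(shape, target)]
    meta = {"original_shape": shape, "pad": pad}
    # Placeholder voxels; a real loader would read a NIfTI file here.
    data = [[[0.0] * target[2] for _ in range(target[1])] for _ in range(target[0])]
    return data, meta

def postprocess(prediction, meta):
    """Crop the network output back to the original (unpadded) shape."""
    d, h, w = meta["original_shape"]
    return [[row[:w] for row in plane[:h]] for plane in prediction[:d]]

data, meta = load_volume((2, 3, 4))
restored = postprocess(data, meta)
print(len(restored), len(restored[0]), len(restored[0][0]))  # 2 3 4
```

The same idea applies to resizing or any other reversible preprocessing step: whatever is needed to invert it travels with the sample as metadata.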
@@ -5,7 +5,7 @@

.. toctree::
   :maxdepth: 1
   :hidden:

   Quantizing Image Classification Model <pot_example_classification_README>
   Quantizing Object Detection Model with Accuracy Control <pot_example_object_detection_README>
   Quantizing Cascaded Model <pot_example_face_detection_README>

@@ -13,56 +13,63 @@

   Quantizing 3D Segmentation Model <pot_example_3d_segmentation_README>
   Quantizing for GNA Device <pot_example_speech_README>


The Post-training Optimization Tool contains multiple examples that demonstrate how to use its :doc:`API <pot_compression_api_README>` to optimize DL models. All available examples can be found on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples>`__.

The following examples demonstrate the implementation of the ``Engine``, ``Metric``, and ``DataLoader`` interfaces for various use cases:

1. :doc:`Quantizing Image Classification model <pot_example_classification_README>`

   - Uses a single ``MobilenetV2`` model from TensorFlow
   - Implements ``DataLoader`` to load .JPEG images and annotations of the ImageNet database
   - Implements the ``Metric`` interface to calculate the Accuracy at top-1 metric
   - Uses the DefaultQuantization algorithm to quantize the model

2. :doc:`Quantizing Object Detection Model with Accuracy Control <pot_example_object_detection_README>`

   - Uses a single ``MobileNetV1 FPN`` model from TensorFlow
   - Implements ``DataLoader`` to load images of the COCO database
   - Implements the ``Metric`` interface to calculate the ``mAP@[.5:.95]`` metric
   - Uses the ``AccuracyAwareQuantization`` algorithm to quantize the model

3. :doc:`Quantizing Semantic Segmentation Model <pot_example_segmentation_README>`

   - Uses a single ``DeepLabV3`` model from TensorFlow
   - Implements ``DataLoader`` to load .JPEG images and annotations of the Pascal VOC 2012 database
   - Implements the ``Metric`` interface to calculate the Mean Intersection Over Union metric
   - Uses the DefaultQuantization algorithm to quantize the model

4. :doc:`Quantizing 3D Segmentation Model <pot_example_3d_segmentation_README>`

   - Uses a single ``Brain Tumor Segmentation`` model from PyTorch
   - Implements ``DataLoader`` to load images in NIfTI format from the Medical Segmentation Decathlon BRATS 2017 database
   - Implements the ``Metric`` interface to calculate the Dice Index metric
   - Demonstrates how to use image metadata obtained during data loading to post-process the raw model output
   - Uses the DefaultQuantization algorithm to quantize the model

5. :doc:`Quantizing Cascaded model <pot_example_face_detection_README>`

   - Uses the cascaded (composite) ``MTCNN`` model from Caffe, which consists of three separate models in OpenVINO™ Intermediate Representation (IR)
   - Implements ``DataLoader`` to load .jpg images of the WIDER FACE database
   - Implements the ``Metric`` interface to calculate the Recall metric
   - Implements an ``Engine`` class, inherited from ``IEEngine``, that creates a complex staged pipeline to sequentially execute each of the three stages of the MTCNN model, represented by multiple models in IR. It uses engine helpers to set the model in the OpenVINO Inference Engine and to process the raw model output for correct statistics collection
   - Uses the DefaultQuantization algorithm to quantize the model

6. :doc:`Quantizing for GNA Device <pot_example_speech_README>`

   - Uses models from Kaldi
   - Implements ``DataLoader`` to load data in .ark format
   - Uses the DefaultQuantization algorithm to quantize the model

After executing each of the examples above, the quantized model is placed in the ``optimized`` folder. The accuracy validation of the quantized model is performed right after the quantization.

See the tutorials
#################

* `Quantization of Image Classification model <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/301-tensorflow-training-openvino>`__
* `Quantization of Object Detection model from Model Zoo <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/111-detection-quantization>`__
* `Quantization of Segmentation model for medical data <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/110-ct-segmentation-quantize>`__
* `Quantization of BERT for Text Classification <https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/105-language-quantize-bert>`__

@endsphinxdirective
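The ``DataLoader`` and ``Metric`` interfaces mentioned throughout these examples share a small surface. A minimal sketch of their shape is shown below; the base classes here are local stand-ins written for illustration (real example code subclasses the classes shipped with POT, e.g. ``openvino.tools.pot.DataLoader``), and ``ImageFolderLoader``/``Accuracy`` are hypothetical names:

```python
# Stand-in sketch of the POT DataLoader / Metric interface shapes.
import glob
import os


class DataLoader:
    """Index-based access to (annotation, data) pairs."""

    def __len__(self):
        raise NotImplementedError

    def __getitem__(self, index):
        raise NotImplementedError


class ImageFolderLoader(DataLoader):
    """Collects file paths from a folder; a real loader would decode pixels."""

    def __init__(self, data_dir, annotations):
        self._files = sorted(glob.glob(os.path.join(data_dir, "*.jpeg")))
        self._annotations = annotations  # {file_name: label}

    def __len__(self):
        return len(self._files)

    def __getitem__(self, index):
        path = self._files[index]
        label = self._annotations.get(os.path.basename(path))
        return label, path  # (annotation, data) pair consumed by the engine


class Accuracy:
    """Top-1 accuracy in the general shape a POT Metric exposes."""

    def __init__(self):
        self._matches = []

    def update(self, output, target):
        # Record whether the predicted label matches the annotation.
        self._matches.append(int(output == target))

    @property
    def avg_value(self):
        return {"accuracy": sum(self._matches) / max(len(self._matches), 1)}


metric = Accuracy()
for predicted, expected in [(1, 1), (2, 2), (3, 0)]:
    metric.update(predicted, expected)
print(metric.avg_value["accuracy"])  # 2 of 3 predictions match
```

The engine iterates the loader, runs inference on each data item, and feeds model outputs together with the stored annotations into the metric's ``update``.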
@@ -1,27 +1,39 @@

# Quantizing Image Classification Model {#pot_example_classification_README}

@sphinxdirective

This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a classification model. The `MobilenetV2 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-1.0-224>`__ model from TensorFlow is used for this purpose. A custom ``DataLoader`` is created to load the `ImageNet <http://www.image-net.org/>`__ classification dataset, and an implementation of the Accuracy at top-1 metric is used for the model evaluation. The code of the example is available on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/classification>`__.

How to prepare the data
#######################

To run this example, you need to `download <https://image-net.org/download.php>`__ the validation part of the ImageNet image database and place it in a separate folder, which will later be referred to as ``<IMAGES_DIR>``. Annotations to images should be stored in a separate .txt file (``<IMAGENET_ANNOTATION_FILE>``) in the format ``image_name label``.

How to Run the example
######################

1. Launch the :doc:`Model Downloader <omz_tools_downloader>` tool to download the ``mobilenet-v2-1.0-224`` model from the Open Model Zoo repository.

   .. code-block:: sh

      omz_downloader --name mobilenet-v2-1.0-224

2. Launch the :doc:`Model Converter <omz_tools_downloader>` tool to generate Intermediate Representation (IR) files for the model:

   .. code-block:: sh

      omz_converter --name mobilenet-v2-1.0-224 --mo <PATH_TO_MODEL_OPTIMIZER>/mo.py

3. Launch the example script from the example directory:

   .. code-block:: sh

      python3 ./classification_example.py -m <PATH_TO_IR_XML> -a <IMAGENET_ANNOTATION_FILE> -d <IMAGES_DIR>

   Optional: you can specify the .bin file of the IR directly using the ``-w``/``--weights`` option.

@endsphinxdirective
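The ``image_name label`` annotation format described above is simple enough to parse in a few lines. A hedged sketch (``parse_annotations`` is an illustrative helper, not part of the sample):

```python
# Parse annotation lines in the "<image_name> <label>" format used for
# <IMAGENET_ANNOTATION_FILE>: one pair per line, label is an integer class id.
def parse_annotations(lines):
    annotations = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines
        name, label = line.rsplit(" ", 1)  # split on the last space only
        annotations[name] = int(label)
    return annotations

sample = [
    "ILSVRC2012_val_00000001.JPEG 65",
    "ILSVRC2012_val_00000002.JPEG 970",
]
print(parse_annotations(sample))
```

Splitting on the last space keeps the parser robust even if a file name were to contain spaces.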
@@ -1,32 +1,47 @@

# Quantizing Cascaded Face detection Model {#pot_example_face_detection_README}

@sphinxdirective

This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a face detection model. The `MTCNN <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mtcnn>`__ model from Caffe is used for this purpose. A custom ``DataLoader`` is created to load the `WIDER FACE <http://shuoyang1213.me/WIDERFACE/>`__ dataset for a face detection task, and an implementation of the Recall metric is used for the model evaluation. In addition, this example demonstrates how to implement an engine to infer a cascaded (composite) model that is represented by multiple submodels in OpenVINO™ Intermediate Representation (IR) and has a complex staged inference pipeline. The code of the example is available on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/face_detection>`__.

How to prepare the data
#######################

To run this example, you need to download the validation part of the WIDER FACE dataset from http://shuoyang1213.me/WIDERFACE/. Images with faces divided into categories are placed in the ``WIDER_val/images`` folder. Annotations in .txt format containing the coordinates of the face bounding boxes of the validation part of the dataset can be downloaded separately and are located in the ``wider_face_split/wider_face_val_bbx_gt.txt`` file.

How to Run the example
######################

1. Launch the :doc:`Model Downloader <omz_tools_downloader>` tool to download the ``mtcnn`` models from the Open Model Zoo repository.

   .. code-block:: sh

      omz_downloader --name mtcnn*

2. Launch the :doc:`Model Converter <omz_tools_downloader>` tool to generate Intermediate Representation (IR) files for the models:

   .. code-block:: sh

      omz_converter --name mtcnn* --mo <PATH_TO_MODEL_OPTIMIZER>/mo.py

3. Launch the example script from the example directory:

   .. code-block:: sh

      python3 ./face_detection_example.py -pm <PATH_TO_IR_XML_OF_PNET_MODEL> -rm <PATH_TO_IR_XML_OF_RNET_MODEL> -om <PATH_TO_IR_XML_OF_ONET_MODEL> -d <WIDER_val/images> -a <wider_face_split/wider_face_val_bbx_gt.txt>

   Optional: you can specify the .bin files of the corresponding IRs directly using the ``-pw``/``--pnet-weights``, ``-rw``/``--rnet-weights``, and ``-ow``/``--onet-weights`` options.

@endsphinxdirective
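The staged inference idea behind the cascaded pipeline can be sketched as below. The stage functions are placeholders, not the sample's real P-Net/R-Net/O-Net networks: each stage consumes and filters the candidates produced by the previous one, which is exactly why a single-pass engine is not enough.

```python
# Toy cascade: propose -> refine -> finalize, with integer confidence scores.

def pnet(image):
    # Proposal stage: emit many candidate boxes with confidence scores.
    return [{"box": i, "score": 30 + 10 * i} for i in range(5)]

def rnet(candidates):
    # Refinement stage: drop low-confidence candidates.
    return [c for c in candidates if c["score"] > 50]

def onet(candidates):
    # Output stage: keep only the strongest detections.
    return [c for c in candidates if c["score"] > 60]

def staged_pipeline(image):
    candidates = pnet(image)
    candidates = rnet(candidates)
    return onet(candidates)

detections = staged_pipeline(image=None)
print(len(detections))  # a single box survives all three stages
```

In the real example, each stage is a separate IR model and the custom ``Engine`` runs them in sequence while still reporting outputs in a form the statistics collection understands.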
@@ -1,26 +1,38 @@

# Quantizing Object Detection Model with Accuracy Control {#pot_example_object_detection_README}

@sphinxdirective

This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` to quantize an object detection model in the :doc:`accuracy-aware mode <accuracy_aware_README>`. The `MobileNetV1 FPN <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssd_mobilenet_v1_fpn_coco>`__ model from TensorFlow for the object detection task is used for this purpose. A custom ``DataLoader`` is created to load the `COCO <https://cocodataset.org/>`__ dataset for the object detection task, and an implementation of mAP COCO is used for the model evaluation. The code of the example is available on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/object_detection>`__.

How to prepare the data
#######################

To run this example, you will need to download the validation part of the `COCO <https://cocodataset.org/>`__ dataset. The images should be placed in a separate folder, which will later be referred to as ``<IMAGES_DIR>``, and the annotation file ``instances_val2017.json`` will later be referred to as ``<ANNOTATION_FILE>``.

How to Run the example
######################

1. Launch the :doc:`Model Downloader <omz_tools_downloader>` tool to download the ``ssd_mobilenet_v1_fpn_coco`` model from the Open Model Zoo repository.

   .. code-block:: sh

      omz_downloader --name ssd_mobilenet_v1_fpn_coco

2. Launch the :doc:`Model Converter <omz_tools_downloader>` tool to generate Intermediate Representation (IR) files for the model:

   .. code-block:: sh

      omz_converter --name ssd_mobilenet_v1_fpn_coco --mo <PATH_TO_MODEL_OPTIMIZER>/mo.py

3. Launch the example script from the example directory:

   .. code-block:: sh

      python ./object_detection_example.py -m <PATH_TO_IR_XML> -d <IMAGES_DIR> --annotation-path <ANNOTATION_FILE>

   Optional: you can specify the .bin file of the IR directly using the ``-w``/``--weights`` option.

@endsphinxdirective
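Under the hood, the accuracy-aware mode is selected through the algorithm configuration passed to the POT pipeline. The sketch below is representative rather than exhaustive: the structure follows the published POT API samples, but the parameter values are illustrative and should be tuned per model.

```python
# Algorithm configuration selecting accuracy-aware quantization instead of
# the plain DefaultQuantization algorithm. Values here are illustrative.
algorithms = [
    {
        "name": "AccuracyAwareQuantization",  # vs. "DefaultQuantization"
        "params": {
            "target_device": "ANY",
            "preset": "mixed",
            "stat_subset_size": 300,  # samples used to collect statistics
            "maximal_drop": 0.01,     # allowed accuracy degradation
        },
    }
]
print(algorithms[0]["name"])
```

With this configuration, the tool quantizes the model and then iteratively reverts the layers that hurt accuracy most until the measured drop fits within the allowed budget.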
@@ -1,29 +1,39 @@

# Quantizing Semantic Segmentation Model {#pot_example_segmentation_README}

@sphinxdirective

This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a segmentation model. The `DeepLabV3 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/deeplabv3>`__ model from TensorFlow is used for this purpose. A custom ``DataLoader`` is created to load the `Pascal VOC 2012 <http://host.robots.ox.ac.uk/pascal/VOC/voc2012/>`__ dataset for the semantic segmentation task, and an implementation of the Mean Intersection Over Union metric is used for the model evaluation. The code of the example is available on `GitHub <https://github.com/openvinotoolkit/openvino/tree/master/tools/pot/openvino/tools/pot/api/samples/segmentation>`__.

How to prepare the data
#######################

To run this example, you will need to download the validation part of the Pascal VOC 2012 image database from http://host.robots.ox.ac.uk/pascal/VOC/voc2012/#data. Images are placed in the ``JPEGImages`` folder, the ImageSet file with the list of image names for the segmentation task can be found at ``ImageSets/Segmentation/val.txt``, and segmentation masks are kept in the ``SegmentationClass`` directory.

How to Run the example
######################

1. Launch the :doc:`Model Downloader <omz_tools_downloader>` tool to download the ``deeplabv3`` model from the Open Model Zoo repository.

   .. code-block:: sh

      omz_downloader --name deeplabv3

2. Launch the :doc:`Model Converter <omz_tools_downloader>` tool to generate Intermediate Representation (IR) files for the model:

   .. code-block:: sh

      omz_converter --name deeplabv3 --mo <PATH_TO_MODEL_OPTIMIZER>/mo.py

3. Launch the example script from the example directory:

   .. code-block:: sh

      python3 ./segmentation_example.py -m <PATH_TO_IR_XML> -d <VOCdevkit/VOC2012/JPEGImages> --imageset-file <VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt> --mask-dir <VOCdevkit/VOC2012/SegmentationClass>

   Optional: you can specify the .bin file of the IR directly using the ``-w``/``--weights`` option.

@endsphinxdirective
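The Mean Intersection Over Union metric this example evaluates against can be sketched in pure Python. This is an illustrative implementation over flattened class-id masks, not the sample's actual ``Metric`` code:

```python
# Mean IoU over flat per-pixel class-id masks: per-class
# intersection / union, averaged over the classes that appear.
def mean_iou(prediction, target, num_classes):
    ious = []
    for cls in range(num_classes):
        inter = sum(1 for p, t in zip(prediction, target) if p == cls and t == cls)
        union = sum(1 for p, t in zip(prediction, target) if p == cls or t == cls)
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0

pred = [0, 0, 1, 1]
gt   = [0, 1, 1, 1]
print(mean_iou(pred, gt, num_classes=2))  # mean of per-class IoUs
```

Here class 0 scores IoU 1/2 and class 1 scores 2/3, so the mean is their average; a production implementation would vectorize this with a confusion matrix.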
@ -1,33 +1,46 @@
# Quantizing for GNA Device {#pot_example_speech_README}

@sphinxdirective

This example demonstrates the use of the :doc:`Post-training Optimization Tool API <pot_compression_api_README>` for the task of quantizing a speech model for the :doc:`GNA <openvino_docs_OV_UG_supported_plugins_GNA>` device. Quantization for GNA is different from CPU quantization due to device specifics: GNA supports quantized inputs in INT16 and INT32 precision (for activations) and quantized weights in INT8 and INT16 precision.

This example contains pre-selected quantization options based on the DefaultQuantization algorithm and is created for models from the `Kaldi <http://kaldi-asr.org/doc/>`__ framework and its data format.
A custom ``ArkDataLoader`` is created to load the dataset from files with the .ark extension for the speech analysis task.

How to prepare the data
#######################

To run this example, you will need to use the .ark files for each model input from your ``<DATA_FOLDER>``.
To generate data from the original formats to .ark, follow the `Kaldi data preparation tutorial <https://kaldi-asr.org/doc/data_prep.html>`__.
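The custom ``ArkDataLoader`` mentioned above follows the POT data-loader pattern: an indexable object exposing ``__len__`` and ``__getitem__``, where POT API samples commonly return an (annotation, data) pair per item. A schematic, framework-free sketch of that multi-input pattern (the class name is illustrative and this stub does not parse real Kaldi .ark files):

```python
class MultiInputDataLoader:
    """Schematic stand-in for a POT-style data loader feeding a
    multi-input model. Each item maps every model input name to one
    per-utterance feature array; DefaultQuantization needs no labels,
    so the annotation slot is None."""

    def __init__(self, data_by_input):
        # data_by_input: {input_name: [utterance_0, utterance_1, ...]}
        self._data = data_by_input
        self._length = min(len(v) for v in data_by_input.values())

    def __len__(self):
        return self._length

    def __getitem__(self, index):
        if index >= self._length:
            raise IndexError(index)
        data = {name: utts[index] for name, utts in self._data.items()}
        return None, data  # (annotation, data); annotation unused here
```

In the real example, each input name is backed by the .ark file supplied via ``--files_for_input``, and the per-utterance arrays come from parsing those files.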
How to Run the example
######################

1. Launch :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` with the necessary options (for details, follow the :doc:`instructions for Kaldi <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>`) to generate Intermediate Representation (IR) files for the model:

   .. code-block:: sh

      mo --input_model <PATH_TO_KALDI_MODEL> [MODEL_OPTIMIZER_OPTIONS]

2. Launch the example script:

   .. code-block:: sh

      python3 <POT_DIR>/api/examples/speech/gna_example.py -m <PATH_TO_IR_XML> -w <PATH_TO_IR_BIN> -d <DATA_FOLDER> --input_names [LIST_OF_MODEL_INPUTS] --files_for_input [LIST_OF_INPUT_FILES]

   Required parameters:

   - ``-i``, ``--input_names`` option. Defines the list of model inputs.
   - ``-f``, ``--files_for_input`` option. Defines the list of .ark filenames mapped to the input names. Define the names without the extension, for example: FILENAME_1, FILENAME_2 map to INPUT_1, INPUT_2.

   Optional parameters:

   - ``-p``, ``--preset`` option. Defines the preset for quantization: ``performance`` for INT8 weights, ``accuracy`` for INT16 weights.
   - ``-s``, ``--subset_size`` option. Defines the subset size for calibration.
   - ``-o``, ``--output`` option. Defines the output folder for the quantized model.

3. Validate your INT8 model using ``./speech_example`` from the Inference Engine examples. Follow the :doc:`speech example description <openvino_inference_engine_samples_speech_sample_README>` for details.
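The positional mapping between ``--input_names`` and ``--files_for_input`` described above can be sketched as follows (a hypothetical helper for illustration, not part of ``gna_example.py``):

```python
def map_inputs_to_files(input_names, files_for_input):
    """Pair model input names with .ark files positionally.

    Filenames are given without the extension (FILENAME_1 maps to
    INPUT_1, FILENAME_2 to INPUT_2, and so on), so .ark is appended.
    """
    if len(input_names) != len(files_for_input):
        raise ValueError("each model input needs exactly one .ark file")
    return {name: f"{base}.ark"
            for name, base in zip(input_names, files_for_input)}
```

For example, ``--input_names INPUT_1 INPUT_2 --files_for_input FILENAME_1 FILENAME_2`` would feed FILENAME_1.ark to INPUT_1 and FILENAME_2.ark to INPUT_2.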
@endsphinxdirective