Add convert GluonCV docs (#1413)

* Add convert GluonCV docs
iliya mironov 2020-08-07 14:37:55 +03:00 committed by GitHub
parent c8d74632f9
commit 52ad786b6c
3 changed files with 29 additions and 1 deletion


@@ -38,7 +38,8 @@ A summary of the steps for optimizing and deploying a model that was trained with
**Other supported topologies**
* [GluonCV SSD and YOLO-v3 models](https://gluon-cv.mxnet.io/model_zoo/detection.html) can be converted using the following [instructions](mxnet_specific/Convert_GluonCV_Models.md).
* [Style transfer model](https://github.com/zhaw/neural_style) can be converted using the following [instructions](mxnet_specific/Convert_Style_Transfer_From_MXNet.md).
## Convert an MXNet* Model <a name="ConvertMxNet"></a>


@@ -0,0 +1,26 @@
# Converting GluonCV* Models {#openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models}
This document provides instructions and examples on how to use the Model Optimizer to convert [GluonCV SSD and YOLO-v3 models](https://gluon-cv.mxnet.io/model_zoo/detection.html) to the IR.
1. Choose a topology from the [GluonCV Model Zoo](https://gluon-cv.mxnet.io/model_zoo/detection.html) and export it to the MXNet format using the GluonCV API. For example, for the `ssd_512_mobilenet1.0` topology:
```python
from gluoncv import model_zoo, data, utils
from gluoncv.utils import export_block
net = model_zoo.get_model('ssd_512_mobilenet1.0_voc', pretrained=True)
export_block('ssd_512_mobilenet1.0_voc', net, preprocess=True, layout='HWC')
```
As a result, you will get an MXNet model representation in `ssd_512_mobilenet1.0.params` and `ssd_512_mobilenet1.0.json` files generated in the current directory.
2. Run the Model Optimizer. For SSD topologies, specify the `--enable_ssd_gluoncv` option. Make sure the `--input_shape` parameter matches the input layout of your model (NHWC or NCHW). The examples below illustrate running the Model Optimizer for SSD and YOLO-v3 models trained with the NHWC layout and located in `<model_directory>`:
* **For GluonCV SSD topologies:**
```sh
python3 mo_mxnet.py --input_model <model_directory>/ssd_512_mobilenet1.0.params --enable_ssd_gluoncv --input_shape [1,512,512,3] --input data
```
* **For YOLO-v3 topology:**
* To convert the model:
```sh
python3 mo_mxnet.py --input_model <model_directory>/yolo3_mobilenet1.0_voc-0000.params --input_shape [1,255,255,3]
```
    * To convert the model, replacing the subgraph with RegionYolo layers:
```sh
python3 mo_mxnet.py --input_model <model_directory>/models/yolo3_mobilenet1.0_voc-0000.params --input_shape [1,255,255,3] --transformations_config "mo/extensions/front/mxnet/yolo_v3_mobilenet1_voc.json"
```


@@ -62,6 +62,7 @@
</tab>
<tab type="usergroup" title="Converting an MXNet* Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet">
<tab type="user" title="Converting a Style Transfer Model from MXNet" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet"/>
<tab type="user" title="Converting GluonCV* Models" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models"/>
</tab>
<tab type="usergroup" title="Converting Your Kaldi* Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi">
<tab type="user" title="Convert Kaldi* ASpIRE Chain Time Delay Neural Network (TDNN) Model to the Intermediate Representation" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model"/>